Woltex Infrastructure: Cloudflare Workers + Convex

How Woltex deploys to the edge with Cloudflare Workers and uses Convex for backend and real-time data. A follow-up to the CI/CD pipeline post.

The Deployment Scale

"Should I use Vercel or Cloudflare Workers?" is the wrong question. The real one: how much control do you actually want?

The deployment scale - from AI builders to self-managed

| Category | Examples | Control | Complexity |
|---|---|---|---|
| AI Builders | Lovable, Bolt, Replit, v0 | Lowest | Just describe what you want |
| Platform Managed | Vercel, Netlify, CF Pages, Render | Low | Git push and done |
| Edge Runtime | CF Workers, Fly.io, Railway, Deno Deploy | Medium | More config, more power |
| Containers | ECS, Cloud Run, App Runner | High | Docker, orchestration |
| Self-Managed | k8s, VPS, Bare Metal | Highest | You own everything |

The right choice depends on product stage, team size, budget, and, honestly, how much you enjoy infra work.

Woltex sits in the edge runtime sweet spot: more control than Vercel, less ops than containers.

What is Edge Runtime?

Traditional deployment looks like this: your app runs on a server somewhere (let's say us-east-1). Every request from Tokyo, São Paulo, or Berlin travels across the ocean to that single location, processes, then travels back. Latency adds up.

Edge runtime flips this model. Your code runs in 200+ locations simultaneously. When a user in Tokyo makes a request, it's handled by a server in Tokyo. Berlin hits Berlin. São Paulo hits São Paulo.

| Architecture | Locations | Latency | Ops Effort |
|---|---|---|---|
| Single VPS | 1 region | High for distant users | Low |
| Multi-region VMs | 3-5 regions | Better, but gaps | High (sync state, deploy everywhere) |
| Edge Runtime | 200+ locations | ~50ms everywhere | Low (platform handles distribution) |

The tradeoff: edge functions have constraints (execution time limits, no persistent connections). But for SSR and API routes, they're perfect.


Where Do Apps Actually Deploy?

Following up on the CI/CD pipeline post, this answers the question that post left open: where do the applications actually get deployed?

The short answer: Cloudflare Workers for the frontend and server-side rendering, Convex for backend logic and database.

Woltex Infrastructure Overview


The Stack

| Layer | Technology | What It Does |
|---|---|---|
| Frontend Hosting | Cloudflare Workers | Serverless edge functions at 200+ locations |
| Backend + Database | Convex | Real-time BaaS with global edge network |
| Build Tool | Vite + TanStack Start | Bundles frontend, handles SSR |
| Deployment | Wrangler | Cloudflare's CLI for deploying Workers |

Cloudflare Workers

Every Woltex app runs on Cloudflare Workers - serverless functions that execute at the edge, close to users worldwide.
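
For context, here's the shape of a Worker module - a default export with a fetch handler that Cloudflare invokes at whichever of its locations is closest to the user. A minimal illustrative sketch, not Woltex's actual code:

// worker.ts - minimal illustrative Worker (types from @cloudflare/workers-types)
export default {
  async fetch(request: Request): Promise<Response> {
    // request.cf.colo names the Cloudflare data center serving this request
    const colo = request.cf?.colo ?? "unknown";
    return new Response(`Hello from ${colo}`);
  },
};

The same module ships to every location - there's no region to pick.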

Apps Deployed

| Worker Name | Domain | Purpose |
|---|---|---|
| woltexai | woltex.ai | Waitlist + landing page |
| woltexai-blog | blog.woltex.ai | This blog |
| woltex-recall | recall.woltex.ai | Memory game |
| + others | | More as Woltex grows |

Each app is an independent Worker. They deploy separately based on which files changed - no monolith deploys here.

Build Flow

Vite Bundles the Frontend

Vite compiles TypeScript/React into optimized bundles:

pnpm --filter waitlist build

TanStack Start Handles SSR

TanStack Start generates a server handler that runs on the edge, enabling server-side rendering without a traditional Node.js server.

Wrangler Deploys to Cloudflare

The Wrangler CLI packages everything and deploys to Cloudflare's edge network:

npx wrangler deploy

Convex BaaS

Convex handles the backend - database, server functions, and real-time sync.

Why Convex?

| Feature | Benefit |
|---|---|
| Real-time sync | UI updates automatically when data changes |
| Global edge network | Low latency queries worldwide |
| TypeScript end-to-end | Type-safe from database to frontend |
| No infrastructure to manage | They handle scaling, backups, etc. |

Convex can be self-hosted, but their SaaS is optimized for edge performance and removes the ops burden.
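
To make the real-time sync concrete, here's a hedged sketch of a Convex query/mutation pair and the React hook that subscribes to it. The waitlist table, field names, and component are illustrative assumptions, not Woltex's actual schema:

convex/waitlist.ts (hypothetical)
import { query, mutation } from "./_generated/server";
import { v } from "convex/values";

// Read the waitlist size; every subscribed client re-renders when it changes
export const count = query({
  args: {},
  handler: async (ctx) => {
    const entries = await ctx.db.query("waitlist").collect();
    return entries.length;
  },
});

// Add an email to the waitlist
export const join = mutation({
  args: { email: v.string() },
  handler: async (ctx, { email }) => {
    await ctx.db.insert("waitlist", { email, joinedAt: Date.now() });
  },
});

src/WaitlistCounter.tsx (hypothetical)
import { useQuery } from "convex/react";
import { api } from "../convex/_generated/api";

// useQuery subscribes to live results - no polling, no manual invalidation
export function WaitlistCounter() {
  const count = useQuery(api.waitlist.count);
  return <span>{count ?? "…"} people waiting</span>;
}

When join inserts a row, Convex pushes the new count to every subscribed client - that's the real-time sync from the table above.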

Deployment

Convex deploys happen alongside app deployments when backend code changes:

.github/workflows/deployment.yml
deploy-convex:
  # Runs only when the change-detection job flags backend changes
  needs: detect-changes
  runs-on: ubuntu-latest
  if: needs.detect-changes.outputs.convex == 'true'
  steps:
    - name: Deploy to Convex (production)
      if: github.ref == 'refs/heads/main'
      run: pnpm --filter @woltex/convex run deploy
      env:
        CONVEX_DEPLOY_KEY: ${{ secrets.CONVEX_DEPLOY_KEY }}

Azure VM (n8n + Monitoring)

Not everything fits the edge model. Long-running workflows, persistent services, and self-hosted analytics need a traditional VM.

| Service | Purpose |
|---|---|
| n8n | Workflow automation with queue workers |
| Prometheus + Grafana | Metrics and monitoring dashboards |
| Rybbit | Self-hosted, privacy-friendly analytics |

All services run on a single Azure VM with zero open ports. Access is secured via Cloudflare Tunnel - no public IPs exposed.

Why a VM for these?

  • n8n needs persistent connections, queue workers, and runs workflows that can take minutes
  • Analytics (Rybbit) requires a database and shouldn't run on edge functions
  • Monitoring needs to store time-series data and run 24/7

For a full walkthrough of the n8n setup, see Production n8n: Queue Workers, Metrics & Monitoring.


Full Deployment Flow

From local development to production:

Full deployment flow from local to cloud

Push to develop

Work happens on feature branches, merged to develop for preview deployments.

PR Checks Run

Lint, type check, security scans - see the CI/CD post for details.

Preview Deployment

Merging to develop triggers preview deployments:

  • woltexai-preview → preview.woltex.ai
  • woltexai-blog-preview → preview.blog.woltex.ai

Production Deployment

Merging develop → main deploys to production. Live in seconds.


Why This Stack?

Pros

| Advantage | Details |
|---|---|
| Global edge | Low latency everywhere - no regional servers to manage |
| Auto-scaling | No capacity planning, Workers scale to millions of requests |
| Cost-effective | $0 at small scale, cheap at large scale |
| No containers/k8s | Skip the infrastructure complexity |
| Fast deployments | git push → live in seconds |
| More control than Vercel | Direct access to Workers, custom routing |

Tradeoffs

| Limitation | Reality |
|---|---|
| Learning curve | Steeper than Vercel's "just deploy" experience |
| Less infra control | Not as customizable as running your own k8s |
| Cold starts | They exist, but minimal for Workers |
| Long-running processes | Workers have execution time limits (30s-900s depending on plan) |

If you need long-running background jobs, consider Convex's scheduled functions or a separate queue service.
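
As a sketch of the first option, Convex cron jobs are declared in a crons.ts file (the job name and the sessions.cleanup function here are hypothetical):

convex/crons.ts (hypothetical)
import { cronJobs } from "convex/server";
import { internal } from "./_generated/api";

const crons = cronJobs();

// Run an internal cleanup function every 24 hours - outside the Workers time limit
crons.interval("clear stale sessions", { hours: 24 }, internal.sessions.cleanup);

export default crons;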

