Woltex Infrastructure: Cloudflare Workers + Convex
How Woltex deploys to the edge with Cloudflare Workers and uses Convex for backend and real-time data. A follow-up to the CI/CD pipeline post.
The Deployment Scale
"Should I use Vercel or Cloudflare Workers?" is the wrong question. The real one: how much control do you actually want?

| Category | Examples | Control | Complexity |
|---|---|---|---|
| AI Builders | Lovable, Bolt, Replit, v0 | Lowest | Just describe what you want |
| Platform Managed | Vercel, Netlify, CF Pages, Render | Low | Git push and done |
| Edge Runtime | CF Workers, Fly.io, Railway, Deno Deploy | Medium | More config, more power |
| Containers | ECS, Cloud Run, App Runner | High | Docker, orchestration |
| Self-Managed | k8s, VPS, Bare Metal | Highest | You own everything |
The right choice depends on product stage, team size, budget, and, honestly, how much you enjoy infra work.
Woltex sits in the edge runtime sweet spot: more control than Vercel, less ops than containers.
What Is an Edge Runtime?
Traditional deployment looks like this: your app runs on a server somewhere (let's say us-east-1). Every request from Tokyo, São Paulo, or Berlin travels across the ocean to that single location, processes, then travels back. Latency adds up.
Edge runtime flips this model. Your code runs in 200+ locations simultaneously. When a user in Tokyo makes a request, it's handled by a server in Tokyo. Berlin hits Berlin. São Paulo hits São Paulo.
| Architecture | Locations | Latency | Ops Effort |
|---|---|---|---|
| Single VPS | 1 region | High for distant users | Low |
| Multi-region VMs | 3-5 regions | Better, but gaps | High (sync state, deploy everywhere) |
| Edge Runtime | 200+ locations | ~50ms everywhere | Low (platform handles distribution) |
The tradeoff: edge functions have constraints (execution time limits, no persistent connections). But for SSR and API routes, they're perfect.
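To make the model concrete, here's a minimal sketch of a module-format Worker (not Woltex's actual handler): the same script is replicated to every Cloudflare location, and the runtime's `request.cf.colo` property reports which data center served the request.

```ts
// Minimal module Worker sketch: the same code is replicated to 200+ locations.
// request.cf.colo reports the IATA code of the data center that handled the request.
export default {
  async fetch(request: Request): Promise<Response> {
    // cf is populated by the Workers runtime; it is not part of the standard Request type.
    const colo = (request as Request & { cf?: { colo?: string } }).cf?.colo ?? "unknown";
    return new Response(`Hello from Cloudflare colo ${colo}`, {
      headers: { "content-type": "text/plain" },
    });
  },
};
```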
Where Do Apps Actually Deploy?
Following up on the CI/CD pipeline post, this answers the question: where are applications actually being deployed?
The short answer: Cloudflare Workers for the frontend and server-side rendering, Convex for backend logic and database.

The Stack
| Layer | Technology | What It Does |
|---|---|---|
| Frontend Hosting | Cloudflare Workers | Serverless edge functions at 200+ locations |
| Backend + Database | Convex | Real-time BaaS with global edge network |
| Build Tool | Vite + TanStack Start | Bundles frontend, handles SSR |
| Deployment | Wrangler | Cloudflare's CLI for deploying Workers |
Cloudflare Workers
Every Woltex app runs on Cloudflare Workers - serverless functions that execute at the edge, close to users worldwide.
Apps Deployed
| Worker Name | Domain | Purpose |
|---|---|---|
| woltexai | woltex.ai | Waitlist + landing page |
| woltexai-blog | blog.woltex.ai | This blog |
| woltex-recall | recall.woltex.ai | Memory game |
| + others | — | More as Woltex grows |
Each app is an independent Worker. They deploy separately based on which files changed - no monolith deploys here.
Build Flow
Vite Bundles the Frontend
Vite compiles TypeScript/React into optimized bundles:
```bash
pnpm --filter waitlist build
```
TanStack Start Handles SSR
TanStack Start generates a server handler that runs on the edge, enabling server-side rendering without a traditional Node.js server.
Wrangler Deploys to Cloudflare
The Wrangler CLI packages everything and deploys to Cloudflare's edge network:
```bash
npx wrangler deploy
```
Convex BaaS
Convex handles the backend - database, server functions, and real-time sync.
Why Convex?
| Feature | Benefit |
|---|---|
| Real-time sync | UI updates automatically when data changes |
| Global edge network | Low latency queries worldwide |
| TypeScript end-to-end | Type-safe from database to frontend |
| No infrastructure to manage | They handle scaling, backups, etc. |
Convex can be self-hosted, but their SaaS is optimized for edge performance and removes the ops burden.
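To show what real-time sync looks like in practice, here's a hedged sketch (the table, file, and function names are illustrative, not Woltex's actual schema): a Convex query function plus a React component that subscribes to it with `useQuery`, so the UI re-renders whenever the underlying data changes.

```tsx
// convex/waitlist.ts - a server function that runs on Convex (hypothetical table name)
import { query } from "./_generated/server";

export const count = query({
  args: {},
  handler: async (ctx) => {
    const entries = await ctx.db.query("waitlist").collect();
    return entries.length;
  },
});

// src/WaitlistCounter.tsx - the frontend subscribes and re-renders automatically
import { useQuery } from "convex/react";
import { api } from "../convex/_generated/api";

export function WaitlistCounter() {
  // useQuery keeps a live subscription: when a new signup lands, this value
  // updates without polling or manual cache invalidation.
  const count = useQuery(api.waitlist.count);
  return <p>{count ?? "loading"} people on the waitlist</p>;
}
```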
Deployment
Convex deploys happen alongside app deployments when backend code changes:
```yaml
deploy-convex:
  if: needs.detect-changes.outputs.convex == 'true'
  steps:
    - name: Deploy to Convex (production)
      if: github.ref == 'refs/heads/main'
      run: pnpm --filter @woltex/convex run deploy
      env:
        CONVEX_DEPLOY_KEY: ${{ secrets.CONVEX_DEPLOY_KEY }}
```
Azure VM (n8n + Monitoring)
Not everything fits the edge model. Long-running workflows, persistent services, and self-hosted analytics need a traditional VM.
| Service | Purpose |
|---|---|
| n8n | Workflow automation with queue workers |
| Prometheus + Grafana | Metrics and monitoring dashboards |
| Rybbit | Self-hosted, privacy-friendly analytics |
All services run on a single Azure VM with zero open ports. Access is secured via Cloudflare Tunnel - no public IPs exposed.
Why a VM for these?
- n8n needs persistent connections, queue workers, and runs workflows that can take minutes
- Analytics (Rybbit) requires a database and shouldn't run on edge functions
- Monitoring needs to store time-series data and run 24/7
For a full walkthrough of the n8n setup, see Production n8n: Queue Workers, Metrics & Monitoring.
Full Deployment Flow
From local development to production:

Push to develop
Work happens on feature branches, merged to develop for preview deployments.
PR Checks Run
Lint, type check, security scans - see the CI/CD post for details.
Preview Deployment
Merging to develop triggers preview deployments:
- woltexai-preview → preview.woltex.ai
- woltexai-blog-preview → preview.blog.woltex.ai
Production Deployment
Merging develop → main deploys to production. Live in seconds.
Why This Stack?
Pros
| Advantage | Details |
|---|---|
| Global edge | Low latency everywhere - no regional servers to manage |
| Auto-scaling | No capacity planning, Workers scale to millions of requests |
| Cost-effective | $0 at small scale, cheap at large scale |
| No containers/k8s | Skip the infrastructure complexity |
| Fast deployments | git push → live in seconds |
| More control than Vercel | Direct access to Workers, custom routing |
Tradeoffs
| Limitation | Reality |
|---|---|
| Learning curve | Steeper than Vercel's "just deploy" experience |
| Less infra control | Not as customizable as running your own k8s |
| Cold starts | They exist, but minimal for Workers |
| Long-running processes | Workers have execution time limits (30s-900s depending on plan) |
If you need long-running background jobs, consider Convex's scheduled functions or a separate queue service.
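As a hedged illustration of that alternative (the job name and function reference below are hypothetical, not Woltex's code), Convex cron jobs let recurring work run on Convex instead of bumping into Worker execution limits:

```ts
// convex/crons.ts - recurring work runs on Convex, not inside a Worker request
import { cronJobs } from "convex/server";
import { internal } from "./_generated/api";

const crons = cronJobs();

// Hypothetical example: prune stale waitlist invites once an hour.
crons.interval(
  "clear stale invites",
  { hours: 1 },
  internal.waitlist.clearStaleInvites,
);

export default crons;
```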
