Cloudflare Workers is the most mature edge serverless platform in 2023. It runs your JavaScript or WebAssembly code in 300+ global datacenters, with no regions, containers, or perceptible cold starts to manage. The architecture differs fundamentally from Lambda and similar platforms, and that difference defines where it shines and where it falls short.
The Architecture: V8 Isolates Instead of Containers
AWS Lambda and similar platforms launch a container (or a microVM, like Firecracker) for each function instance. They have measurable cold starts, ranging from 100 ms to several seconds.
Cloudflare Workers uses V8 isolates, the same technology that isolates tabs in Chrome. Each Worker runs in an isolate inside a shared runtime process (not Node.js). Consequences:
- 0-5 ms cold start. Creating an isolate is much cheaper than a container.
- Very low memory footprint. Hundreds of Workers can coexist in the same process.
- Very low cost per invocation. Cloudflare’s free tier is generous precisely because each invocation costs very little in infrastructure.
- Limitations: no OS APIs, no native threads, and no npm modules that depend on native bindings.
For cases where the model fits, this is transformative. Where it doesn't, it's a blocker.
What You Can Run
Workers accepts:
- Modern JavaScript ES Modules.
- TypeScript (compiled to JS before deploy).
- WebAssembly: Rust, Go (TinyGo), C, AssemblyScript compiled to Wasm. Ideal for CPU-intensive logic where JS would be slow.
What it does not accept:
- Node.js modules depending on native C bindings (sharp, native bcrypt, etc.).
- Local filesystem access.
- Process spawning.
- Listening on arbitrary TCP sockets; outbound connectivity goes through permitted APIs such as fetch.
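To make the model concrete, here is a minimal ES Modules Worker, a sketch of the standard fetch-handler shape the runtime expects:

```javascript
// Minimal ES Modules Worker: export an object with a fetch handler.
// The runtime calls fetch() once per incoming request.
const worker = {
  async fetch(request) {
    const url = new URL(request.url);
    return new Response(`Hello from the edge! You requested ${url.pathname}`, {
      headers: { "content-type": "text/plain" },
    });
  },
};

export default worker;
```

There is no server to start and no port to listen on: the platform routes requests to the handler.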
Storage: KV, Durable Objects, R2, D1
Workers isn’t just compute — Cloudflare has built an edge storage stack:
Workers KV
Globally distributed key-value store, eventually consistent.
- Reads: very fast and cached in each datacenter.
- Writes: propagation to all nodes in seconds to a minute.
- Use case: configuration, feature flags, response cache, frequent reads.
- Limitations: write latency, not suitable for transactional data.
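As a sketch of the feature-flag use case: a Worker reading a flag from KV, with a fallback for missing keys. The binding name FLAGS and the flag names are illustrative; get() returning null for absent keys is the KV API behavior.

```javascript
// Read a feature flag from a KV namespace, falling back to a default.
// `kv` is a KV namespace binding (e.g. env.FLAGS in a real Worker).
async function getFlag(kv, name, fallback) {
  const value = await kv.get(name); // resolves to null if the key doesn't exist
  return value ?? fallback;
}

export default {
  async fetch(request, env) {
    const theme = await getFlag(env.FLAGS, "theme", "light");
    return new Response(`theme: ${theme}`);
  },
};
```

Because reads are cached per datacenter, this lookup adds very little latency; the trade-off is that a flag flipped just now may take seconds to propagate everywhere.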
Durable Objects
Objects with consistent state and a globally unique identity. Each object lives in one datacenter (typically the one closest to where it was first used) and processes all requests for that instance sequentially.
- Use case: user sessions, chat rooms, precise counters, distributed coordination.
- Mental model: an actor with state at a fixed location but globally accessible.
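The actor model can be sketched as a Durable Object implementing a precise counter. The class name and storage key are illustrative; state.storage is the Durable Objects persistence API:

```javascript
// A Durable Object: all requests for a given object id are processed
// sequentially, so this read-modify-write is race-free without locks.
export class Counter {
  constructor(state) {
    this.state = state; // state.storage is transactional key-value persistence
  }

  async fetch(request) {
    let count = (await this.state.storage.get("count")) ?? 0;
    count += 1;
    await this.state.storage.put("count", count);
    return new Response(String(count));
  }
}
```

The same counter implemented over KV would lose increments under concurrency; the single-location, sequential-processing guarantee is exactly what makes it precise.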
R2
S3-compatible object storage, without egress fees.
- Use case: assets, videos, backups, large content.
- Key advantage: price without penalty for serving data to the Internet (vs S3 with expensive egress).
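Serving an object from R2 through a Worker takes only a few lines. This is a sketch; the bucket binding is hypothetical, and get() returning null for a missing key is the R2 API behavior:

```javascript
// Fetch an object from an R2 bucket and return it to the client.
// `bucket` is an R2 binding (e.g. env.MY_BUCKET in a real Worker).
async function serveObject(bucket, key) {
  const object = await bucket.get(key); // null if the key doesn't exist
  if (object === null) {
    return new Response("Not found", { status: 404 });
  }
  return new Response(object.body, {
    headers: { etag: object.httpEtag },
  });
}
```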
D1
Distributed SQLite-based database (in beta as of 2023).
- Use case: relational data with SQL queries for edge apps.
- Model: read from local replicas, writes go to a primary region.
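Queries go through D1's prepare/bind API. As a sketch (the users table and the DB binding name are illustrative):

```javascript
// Look up a row in D1 with a parameterized query.
// `db` is a D1 binding (e.g. env.DB in a real Worker).
async function getUser(db, id) {
  return db
    .prepare("SELECT id, name FROM users WHERE id = ?1")
    .bind(id)
    .first(); // resolves to the first row object, or null if no match
}
```

Binding parameters instead of interpolating strings keeps the query safe from SQL injection, the same discipline as with any SQL database.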
This combination covers most edge-application needs without having to call a traditional origin.
Cases Where Workers Shines
Where the architecture fits perfectly:
- Edge logic on frontend. Smart routing, A/B testing, geo-based redirects, bot detection.
- Custom API gateway. Validate tokens, rate limit, transform requests before forwarding to origin.
- Simple APIs without traditional backend. Light CRUD with KV or D1 without deploying servers.
- Streaming and response processing. Pass the origin response through Workers to apply transformations.
- High-throughput webhooks. Receive many requests per second at minimal cost.
- Smart proxies. Workers in front of a traditional origin, adding cache, auth, telemetry.
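The API-gateway pattern above can be sketched as a Worker that rejects unauthenticated requests at the edge and proxies the rest. ORIGIN and the token check are illustrative placeholders, not a real verification scheme:

```javascript
// Gateway Worker: reject unauthenticated requests before they
// ever reach the origin; forward the rest unchanged.
const ORIGIN = "https://api.example.com"; // hypothetical origin

function isAuthorized(request) {
  const auth = request.headers.get("Authorization") ?? "";
  // A real gateway would verify a JWT signature or look up the token here.
  return auth.startsWith("Bearer ");
}

async function handle(request) {
  if (!isAuthorized(request)) {
    return new Response("Unauthorized", { status: 401 });
  }
  const url = new URL(request.url);
  return fetch(new Request(ORIGIN + url.pathname + url.search, request));
}

export default { fetch: handle };
```

Because the rejection happens in the datacenter closest to the client, invalid traffic never consumes origin capacity.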
Cases Where It’s Not the Tool
To be honest, Workers doesn't fit everything:
- APIs with heavy npm dependencies. Workers doesn’t support large parts of the native Node ecosystem.
- Long CPU-intensive workloads. There are CPU-time limits per request, on the order of milliseconds to seconds depending on plan.
- Long-running tasks. Workers aren't containers; they're designed to respond quickly, not to run for minutes.
- Strongly transactional data. KV is eventually consistent, D1 is still maturing, and ACID guarantees are weaker than in a traditional Postgres setup.
- Complex stateful workloads. Durable Objects help but don’t solve everything.
Comparison With Lambda and Similar
| Aspect | Workers | AWS Lambda | Vercel/Netlify |
|---|---|---|---|
| Cold start | ~0-5 ms | 100ms-2s | Similar to Lambda |
| Global distribution | 300+ datacenters | Per AWS region | Edge in some cases |
| Languages | JS/TS/Wasm | Almost all | Mostly JS/TS |
| Edge storage built-in | Yes | No (regional DynamoDB) | Limited |
| Cost per request | Very low | Low, scales with use | Variable |
| Full Node.js support | Limited | Complete | Complete |
For JavaScript edge logic, Workers is the most solid option. For complex workloads with broad dependencies, Lambda remains more flexible.
Development Workflow
Cloudflare has invested in developer experience:
- wrangler CLI. Create, develop, and deploy Workers from your local machine.
- wrangler dev. Local development with hot reload; simulates KV and Durable Objects.
- Deploy in seconds. No complex build step; a deploy is practically instant.
- Built-in logging. wrangler tail shows live logs.
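A minimal wrangler.toml tying this together might look like the following sketch (the project name, entry point, and namespace id are placeholders):

```toml
name = "my-worker"                # placeholder project name
main = "src/index.js"             # entry point exporting the fetch handler
compatibility_date = "2023-09-01"

# Bind a KV namespace so the Worker can access it as env.FLAGS
[[kv_namespaces]]
binding = "FLAGS"
id = "<your-namespace-id>"
```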
The iteration loop is very fast compared to deploying Lambda + API Gateway + IAM + CloudFront.
Conclusion
Cloudflare Workers has consolidated edge serverless as a mature category in 2023. For cases fitting the model (JavaScript/Wasm, short logic, edge storage), it’s probably the best available option — faster, cheaper, more distributed than alternatives. For cases that don’t fit (heavy workloads, native Node dependencies), don’t force it — use Lambda or a traditional server. Knowing both options expands your repertoire.
Follow us on jacar.es for more on edge computing, serverless, and modern distributed architectures.