WASI preview 3: threads and async in WebAssembly
Updated: 2026-05-03
Over the past few years, WebAssembly outside the browser has grown slowly but with direction. The big bottleneck has always been the same: the concurrency model. Preview 1 had nothing resembling threads or async; preview 2 opened the door to the component model but left concurrency as unfinished business. Preview 3, which has been in final standardization for months, is finally the missing piece.
This post isn’t meant to be an exhaustive API tutorial, but a mental map: what problem preview 3 solves, how it addresses it, and what implications it has for languages and platforms adopting it.
Key takeaways
- WASI preview 3 introduces structured concurrency with composable primitives (futures, streams, tasks) rather than OS-level low-level primitives.
- The primitives belong to the component model, not the runtime: concurrency can cross language boundaries cleanly.
- The most important impact is real portability across Wasm serverless platforms (Fastly Compute, Fermyon Spin, Cloudflare Workers).
- Exact scheduling behavior remains each runtime’s responsibility; performance can vary between Wasmtime and Wasmer.
- Language adoption progresses unevenly: Rust is on track; Go and Python will take longer.
The problem we'd been dragging along
Before preview 3, running concurrent WebAssembly code outside the browser required external hacks. Runtimes (Wasmtime, WasmEdge) invented their own extensions to offer multi-threading, and each language compiling to Wasm had to make do. Rust compiled to wasm32-unknown-unknown without thread support; if you wanted threads, you needed a runtime that implemented the shared-memory threads proposal and you had to orchestrate that shared memory by hand.
The result was a platform that sold portability but didn’t deliver when your application needed to do two things at once. For any serious workload (an HTTP server, a stream processor, a pipeline with natural concurrency) you had to choose between sticking to synchronous code or jumping outside the standard.
The model preview 3 proposes
Preview 3 introduces what the standard calls structured concurrency, and it’s more interesting than “threads and async.” Instead of exposing low-level OS primitives (mutexes, condition variables, locks), preview 3 lifts the model to composable primitives:
- Futures: a handle to an operation that hasn’t finished yet but promises to produce a value.
- Streams: a sequence of values arriving over time, not all at once.
- Tasks: the unit of concurrent execution managed by the scheduler.
With those three primitives you can build practically any known concurrency pattern, from an async HTTP call to a streaming processing pipeline.
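To make that concrete at the interface level, here is a rough sketch in WIT, the component model’s interface language. The interface and function names are invented for illustration; `future<T>` and `stream<T>` are the types the proposal defines:

```wit
// Sketch only: "fetcher" and its functions are hypothetical names,
// not part of any published WASI interface.
interface fetcher {
  // A future: the call returns immediately; the value arrives later.
  fetch: func(url: string) -> future<string>;

  // A stream: results are delivered incrementally over time,
  // not all at once.
  fetch-lines: func(url: string) -> stream<string>;
}
```

The point of putting these types in the interface, rather than in each language’s runtime, is that a caller in one language and an implementation in another agree on the same concurrency contract.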
What’s relevant is that these primitives are part of the component model, not the runtime. That means a language compiling to Wasm can offer its own async/await syntax and translate it to component-model futures in the final binary. When that binary interacts with another component — potentially in another language — concurrency crosses the boundary cleanly.
Why it matters
The real value of preview 3 isn’t that Wasm can do async, but that it can do so in a standard way across language boundaries. Practical consequences:
- An HTTP server written in Rust can call, within the same binary, a Wasm component written in Go that processes streams. The call can be asynchronous and the data can flow as a stream. Before, you were either locked into a single language (all Rust) or you made the boundary synchronous, which kills performance for intensive workloads.
- Platforms running WebAssembly in production (Fastly Compute, Fermyon Spin) can now support async patterns natively, without platform-specific hacks. A big step toward real portability of Wasm serverless apps across providers.
- Language-specific SDKs will converge. Today, writing a Lambda in Rust implies an SDK exposing its own async primitives; in Go, different ones. With preview 3, the standard pushes all SDKs to speak the same model, reducing fragmentation.
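A hypothetical WIT world for the Rust-plus-Go scenario in the first point might look like this (all names are invented for the sketch):

```wit
// Sketch: the HTTP component (say, Rust) imports a stream
// transformer that can be implemented in another language (say, Go).
// "pipeline", "transform", and "handle" are illustrative names only.
world pipeline {
  import transform: func(input: stream<u8>) -> stream<u8>;
  export handle: func(request: string) -> future<string>;
}
```

The Go component only has to satisfy the `transform` signature; neither side needs to know what async runtime the other uses internally.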
What’s still unresolved
Despite the progress, preview 3 leaves some things pending:
- Runtime variability: mapping primitives to each runtime’s native threading model remains the runtime’s responsibility. Exact behavior — how many real threads are created, how tasks are scheduled — can vary between Wasmtime, Wasmer, and others.
- Host-native interop: functions exposed from the runtime to the Wasm component need more work. Edge cases like cancellation during an async operation are still being polished.
- Language adoption: Rust is on a good path with experimental preview 3 branches. Go, Python, and C++ are behind, with variable states.
What this means for developers
If you work on an app already using WebAssembly outside the browser, preview 3 will simplify code that today depends on runtime-specific hacks. If you’re starting to explore Wasm as a container alternative for serverless functions, preview 3 is the version to take it seriously.
Preview 3 closes the “yes, but…” chapter: yes, Wasm works, yes, you can use it seriously, yes, you can do decent concurrency. My intuition is that the coming months will decide whether WebAssembly becomes a mainstream serverless platform, and preview 3 is probably the piece that tips the balance.