
Rust for Backend: axum and tokio in Real Services


Updated: 2026-05-03

For years Rust was positioned as a language for systems programming: operating systems, drivers, database engines. That perception is now outdated. With tokio[1] as the async runtime and axum[2] as the HTTP framework, writing web services in Rust is reasonably comfortable, and the performance difference versus Go or Node.js is real.

Key takeaways

  • The tokio + axum + serde + sqlx stack is the dominant combination for backend in Rust: performance, type safety, and reasonable ergonomics once past the initial curve.
  • The learning curve is real: ownership, lifetimes, and async in Rust take months to internalise.
  • Rust wins clearly for gateways, proxies, and event processors; for typical CRUD, Go or Node.js are more productive.
  • Tools like anyhow, tracing, mold, and cargo nextest significantly reduce development friction.
  • Slow compilation is the biggest operational drawback: a medium project compiles in 30-60s in debug mode.

The Current Stack

When people talk about “backend in Rust”, they almost always mean this combination:

  • tokio: the dominant async runtime. Handles a high number of concurrent tasks with a small OS thread pool using a work-stealing scheduler.
  • axum: HTTP framework built on tower[3] and hyper[4]. Maintained by the tokio team.
  • serde: the de facto standard for serialising and deserialising JSON, YAML, TOML, and other formats.
  • sqlx or diesel: for database access. sqlx, with compile-time SQL query verification, is the option growing fastest.

Combined, this stack lets you build a JSON microservice over Postgres in similar code volume to a Go equivalent, with better type safety and more informative compile errors.

A Minimal axum Endpoint

```rust
use axum::{routing::get, Router, Json};
use serde::Serialize;

#[derive(Serialize)]
struct Health { status: &'static str }

async fn health() -> Json<Health> {
    Json(Health { status: "ok" })
}

#[tokio::main]
async fn main() {
    let app = Router::new().route("/health", get(health));
    let listener = tokio::net::TcpListener::bind("0.0.0.0:3000").await.unwrap();
    axum::serve(listener, app).await.unwrap();
}
```

Three key concepts in this example:

  • #[tokio::main] boots the async runtime so main can be async.
  • Handlers are normal async functions — axum handles the binding between types and HTTP responses.
  • Json<T> is an extractor: as a return type, axum automatically serialises to JSON with the correct header.

Why the Learning Curve Matters

Don’t downplay it: Rust has a real learning curve. For someone coming from Go or TypeScript, the first months are hard. Main friction points:

  • Ownership and lifetimes. Rust has no garbage collector; the compiler validates that every reference is valid. These are new concepts for most developers.
  • Async in Rust. Futures are lazy and depend on the runtime, and compile errors mentioning Pin<Box<dyn Future...>> are confusing at first.
  • Explicit errors. There are no exceptions: every fallible function returns Result<T, E> and propagates with ?. This requires a mindset shift.
  • Slow compilation. A medium project compiles in 30-60 seconds in debug and several minutes in release, which slows the iteration cycle.
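The Result-plus-? pattern from the third point can be seen in miniature with nothing but the standard library:

```rust
use std::num::ParseIntError;

// A fallible function: the error type is part of the signature,
// so callers cannot forget that this can fail.
fn parse_port(raw: &str) -> Result<u16, ParseIntError> {
    let port: u16 = raw.trim().parse()?; // `?` returns early with the Err
    Ok(port)
}

fn main() {
    assert_eq!(parse_port(" 8080 ").unwrap(), 8080);
    assert!(parse_port("not-a-port").is_err());
    // Values above 65535 do not fit in u16, so parse returns Err too.
    assert!(parse_port("70000").is_err());
}
```

The same shape scales up: handlers and repository functions return Result, and ? threads the error out to a central place that maps it to an HTTP status.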

Accept 2-3 months of reduced productivity to get through this. Teams without that patience usually drop out before seeing the benefits.

Rust logo: the language's ownership model and the tokio async runtime enable building high-performance backend services without a garbage collector

Pragmatic Comparison

| Aspect            | Rust + axum     | Go + net/http | Node + Express     |
|-------------------|-----------------|---------------|--------------------|
| Low p99 latency   | Excellent       | Very good     | Acceptable         |
| Memory usage      | Very low        | Low           | Medium-high        |
| Learning curve    | Steep           | Smooth        | Easy               |
| Compile speed     | Slow            | Fast          | N/A (interpreted)  |
| HTTP ecosystem    | Mature, growing | Very mature   | Massive            |
| CRUD productivity | Medium          | High          | Very high          |

Rust wins where per-node performance really matters: services under sustained load, Kafka consumption at high rates, gateways multiplexing thousands of connections. In typical CRUD services where the bottleneck is the database, the gain is marginal and the learning cost doesn’t pay back.

Cases Where It Pays Off

Four service types where Rust is adopted successfully in production:

  • Gateways and proxies. Handling 50,000 simultaneous connections with predictable memory.
  • Event processors. Kafka consumers with high throughput where Go was already showing GC pressure.
  • CPU-bound services. Image processing, voluminous log parsing, complex transformations.
  • Components embedded in other languages. Compile to a static library consumed from Python or Node.js via FFI.

For the typical “validate JSON, run 3 Postgres queries, return response” microservice, Go or Node.js remain more productive options.

Patterns That Save Suffering

Five tips for teams adopting Rust on the backend:

  • anyhow for application errors, thiserror for library errors. Makes error propagation reasonable without boilerplate.
  • tracing for structured logs. Works well with OpenTelemetry and integrates with axum via middleware.
  • tower middleware. Uniform composition for reusable logging, auth, and rate limiting.
  • Incremental build with mold linker. Significantly cuts iteration time on Linux.
  • cargo nextest. A test runner that is substantially faster than the default cargo test.

For the cache layer over Postgres in axum services, Redis with cache-aside strategies is the most common combination. The Grafana observability stack integrates well with tracing via OTLP. Rust service containers are easier to operate with Podman in rootless environments.

Conclusion

Rust on the backend is already a serious option, not a future promise. The tokio + axum combination is stable, performant, and reasonably productive once past the initial curve. But it’s not the right answer for every service — it’s a choice with clear trade-offs to make consciously, based on real workloads and not hype.

  1. tokio
  2. axum
  3. tower
  4. hyper

Written by

CEO - Jacar Systems

Passionate about technology, cloud infrastructure and artificial intelligence. Writes about DevOps, AI, platforms and software from Madrid.