Rust for Backend: axum and tokio in Real Services

A screen with Rust code and a compiler running in a terminal

For years Rust was positioned as a language for “systems programming” — operating systems, drivers, database engines. In 2023 that perception is outdated. With tokio as the async runtime and axum as the HTTP framework, writing web services in Rust is reasonably comfortable, and the performance difference vs Go or Node is real.

The Current Stack

When people talk about “backend in Rust” in 2023, they almost always mean this combination:

  • tokio: the dominant async runtime. Handles a high number of concurrent tasks with a small OS thread pool using a work-stealing scheduler.
  • axum: HTTP framework built on tower and hyper. Maintained by the tokio team.
  • serde: the de facto standard for serializing/deserializing JSON, YAML, TOML, etc.
  • sqlx or diesel for database access — sqlx, with compile-time SQL query verification, is the fastest-growing option.

Combined, this stack lets you build a JSON microservice over Postgres in similar code volume to its Go equivalent, with better type safety and more informative compile errors.
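As a sketch, a Cargo.toml for this stack might look like the following — the package name is invented and the version numbers are illustrative for the time of writing, not prescriptive:

```toml
[package]
name = "health-service"
version = "0.1.0"
edition = "2021"

[dependencies]
tokio = { version = "1", features = ["full"] }
axum = "0.7"
serde = { version = "1", features = ["derive"] }
serde_json = "1"
sqlx = { version = "0.7", features = ["runtime-tokio", "postgres"] }
```

In practice you would trim tokio's "full" feature set down to what the service actually uses once the dependency surface is clear.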

A Minimal axum Endpoint

use axum::{routing::get, Router, Json};
use serde::Serialize;

#[derive(Serialize)]
struct Health { status: &'static str }

async fn health() -> Json<Health> {
    Json(Health { status: "ok" })
}

#[tokio::main]
async fn main() {
    let app = Router::new().route("/health", get(health));
    let listener = tokio::net::TcpListener::bind("0.0.0.0:3000").await.unwrap();
    axum::serve(listener, app).await.unwrap();
}

Three important concepts:

  • #[tokio::main] boots the async runtime so main can be async.
  • Handlers are normal async functions — axum handles the binding between types and HTTP responses.
  • Json<T> is an extractor: as a return type, axum knows to serialize to JSON with the right header.
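Extractors also work on the input side. A hypothetical POST handler — the route, struct names, and fields here are invented for illustration — might combine Path and Json to pull data from the URL and the request body:

```rust
use axum::{extract::Path, routing::post, Json, Router};
use serde::{Deserialize, Serialize};

#[derive(Deserialize)]
struct CreateUser {
    name: String,
}

#[derive(Serialize)]
struct User {
    id: u64,
    name: String,
}

// Path deserializes the URL segment; Json deserializes the request body.
// The Json extractor consumes the body, so it must be the last argument.
async fn create_user(
    Path(owner_id): Path<u64>,
    Json(payload): Json<CreateUser>,
) -> Json<User> {
    Json(User { id: owner_id, name: payload.name })
}

fn app() -> Router {
    Router::new().route("/owners/:owner_id/users", post(create_user))
}
```

If either extractor fails — a non-numeric id, malformed JSON — axum rejects the request with an appropriate 4xx status before your handler ever runs.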

Why the Learning Curve Matters

Don’t downplay it: Rust has a real learning curve. For someone coming from Go or TypeScript, the first months are costly. The most common friction points:

  • Ownership and lifetimes. Rust has no garbage collector — the compiler verifies that every reference is valid. These are new concepts for most developers.
  • Async in Rust. Futures are lazy and do nothing until polled by the runtime. Compile errors mentioning Pin<Box<dyn Future<...>>> are confusing at first.
  • Explicit errors. There are no exceptions. Every fallible function returns Result<T, E> and propagates errors with ?. It’s a mindset shift.
  • Slow compilation. A medium-sized project compiles in 30-60 seconds in debug mode and takes several minutes in release. This slows the iteration cycle.
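To make the error-handling point concrete, here is a minimal std-only sketch (the function name and the port scenario are invented for illustration):

```rust
use std::num::ParseIntError;

// Every fallible step returns Result; `?` propagates the error to the
// caller instead of throwing an exception.
fn parse_port(raw: &str) -> Result<u16, ParseIntError> {
    let port: u16 = raw.trim().parse()?; // early-returns on parse failure
    Ok(port)
}

fn main() {
    match parse_port("8080") {
        Ok(p) => println!("listening on port {p}"),
        Err(e) => eprintln!("bad port: {e}"),
    }
}
```

The same pattern scales up: in a real service the handler returns Result, each database call or validation step ends in ?, and the error type carries enough context to build an HTTP response.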

To overcome it, accept 2-3 months of reduced productivity before feeling comfortable. Teams without that patience drop out at the two-week mark.

Pragmatic Comparison

Aspect              Rust + axum       Go + net/http     Node + Express
Low p99 latency     Excellent         Very good         Acceptable
Memory usage        Very low          Low               Medium-high
Learning curve      Steep             Smooth            Easy
Compile speed       Slow              Fast              N/A (interpreted)
HTTP ecosystem      Mature, growing   Very mature       Massive
CRUD productivity   Medium            High              Very high

Rust wins where per-node performance really matters: services under sustained load, Kafka consumption at high rates, gateways multiplexing thousands of connections. In typical CRUD services where the bottleneck is the database, not the language, the gain is marginal and the learning cost doesn’t pay back.

Cases Where It Pays Off

In real projects I’ve seen Rust adopted successfully in:

  • Gateways and proxies. Handling 50,000 simultaneous connections with predictable memory.
  • Event processors. Kafka consumers with high throughput where Go was already showing GC pressure.
  • CPU-bound services. Image processing, voluminous log parsing, complex transformations.
  • Components embedded in other languages. Compile to a static library consumed from Python or Node via FFI.

For the typical “validate JSON, run 3 Postgres queries, return response” microservice, Go or Node remain more productive options.

Patterns That Save Suffering

If you adopt Rust on the backend, some practical advice:

  • anyhow for application errors, thiserror for library errors. Makes error propagation reasonable.
  • tracing for structured logs. Works well with OpenTelemetry and integrates with axum.
  • tower middleware. Uniform composition — reusable logging, auth, rate limiting.
  • Incremental build with mold linker. Significantly cuts iteration time on Linux.
  • cargo nextest. A much faster test runner than cargo test.
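As a sketch of the last two points, assuming a Linux machine with clang and mold installed, a .cargo/config.toml along these lines routes linking through mold (the target triple shown is for x86-64 Linux):

```toml
# .cargo/config.toml — use mold as the linker on Linux
[target.x86_64-unknown-linux-gnu]
linker = "clang"
rustflags = ["-C", "link-arg=-fuse-ld=mold"]
```

nextest is installed once with cargo install cargo-nextest and then used as a drop-in replacement: cargo nextest run instead of cargo test.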

Conclusion

Rust on the backend is already a serious option in 2023, not a future promise. The tokio + axum combination is stable, performant, and reasonably productive once past the initial curve. But it’s not the right answer for every service — it’s a choice with clear trade-offs to make consciously, based on real workloads and not hype.

Follow us on jacar.es for more on backend architecture and tech-stack choice.
