SQLite in Production: Not Just for Mobile
Updated: 2026-05-03
The myth: SQLite is for mobile apps and prototypes. The reality: thousands of production applications, from Tailscale and Expensify to Fly.io-hosted services and small SaaS products, use it as their main database. The combination of SQLite, WAL mode, Litestream, and (optionally) LiteFS enables scaling well past the point where many teams assume they need PostgreSQL. This article covers how, when, and the real limits.
Key takeaways
- WAL mode turns SQLite into an engine with concurrent reads that don’t block writes — the change that makes server-side use viable.
- Litestream streams the WAL to S3 in near-realtime: durability comparable to Postgres without a separate DB server.
- LiteFS (from Fly.io) replicates SQLite between multiple nodes with strong consistency via leasing.
- SQLite beats Postgres for 1-process apps, simple queries, and up to ~100 GB of data — where zero DB latency matters.
- SQLite doesn’t scale well with multiple writing processes simultaneously or massive analytical queries (that’s DuckDB territory).
Why Server-Side SQLite Is Viable
Traditional assumptions are obsolete:
- “Doesn’t scale”: false. SQLite handles thousands of writes/s and hundreds of thousands of reads/s on modest hardware.
- “No concurrent writes”: historically true, but WAL mode removes much of the contention.
- “No replication”: Litestream, LiteFS, and rqlite solve this.
- “No backup tool”: Litestream streams to any S3-compatible storage.
For 1-process / 1-instance apps, SQLite is spectacularly productive.
WAL Mode: The Fundamental Change
Enable Write-Ahead Logging:
PRAGMA journal_mode = WAL;
Benefits:
- Concurrent reads don’t block writes.
- Concurrent writes still serialise but with less contention.
- Durability comparable to the traditional rollback journal, with better write performance.
- Incremental WAL checkpointing.
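The read-while-write behaviour is easy to verify with Python's stdlib sqlite3 module. A minimal sketch (file path and schema are illustrative), using autocommit connections so transactions can be managed explicitly:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "app.db")

# isolation_level=None -> autocommit; we issue BEGIN/COMMIT ourselves.
writer = sqlite3.connect(path, isolation_level=None)
mode = writer.execute("PRAGMA journal_mode = WAL").fetchone()[0]  # 'wal'
writer.execute("CREATE TABLE kv (k TEXT PRIMARY KEY, v TEXT)")

writer.execute("BEGIN IMMEDIATE")  # take the single write lock
writer.execute("INSERT INTO kv VALUES ('a', '1')")

# A second connection can still read while the write is in flight:
reader = sqlite3.connect(path, isolation_level=None)
n0 = reader.execute("SELECT count(*) FROM kv").fetchone()[0]  # 0: last committed snapshot

writer.execute("COMMIT")
n1 = reader.execute("SELECT count(*) FROM kv").fetchone()[0]  # 1: new snapshot after commit
print(mode, n0, n1)
```

Under the rollback journal, the commit itself briefly takes an exclusive lock that blocks readers; under WAL it does not.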
Recommended Production Pragmas
The difference between defaults and optimised config can be 3-10×:
PRAGMA journal_mode = WAL;
PRAGMA synchronous = NORMAL; -- durability/perf trade-off
PRAGMA cache_size = -64000; -- 64 MB cache
PRAGMA foreign_keys = ON;
PRAGMA busy_timeout = 5000; -- 5s before SQLITE_BUSY
PRAGMA temp_store = MEMORY;
PRAGMA mmap_size = 268435456; -- 256 MB memory-mapped

Litestream: Replication to S3
Litestream[1] streams the WAL to S3 (or any compatible) in near-realtime:
# /etc/litestream.yml
dbs:
  - path: /data/app.db
    replicas:
      - type: s3
        bucket: my-backups
        path: app.db
        region: eu-west-1
        access-key-id: ${AWS_ACCESS_KEY_ID}
        secret-access-key: ${AWS_SECRET_ACCESS_KEY}
For production backups, Litestream brings SQLite's durability close to that of Postgres without running a separate DB server.
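Operationally, Litestream runs as a small sidecar process next to the app. A sketch of the two everyday commands, assuming the config above (paths are illustrative):

```shell
# Continuously replicate the databases listed in the config to S3
litestream replicate -config /etc/litestream.yml

# Disaster recovery: restore the most recent replicated state
litestream restore -config /etc/litestream.yml -o /data/app.db /data/app.db
```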
LiteFS: Multi-Node Replication
LiteFS[2] (from Fly.io) replicates SQLite across nodes through a FUSE filesystem:
- Primary + replicas: writes to primary, reads on any node.
- Strong consistency with distributed leasing.
- Automated failover.
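A minimal litefs.yml sketch of the primary-plus-replicas topology described above; the key names follow the LiteFS docs at the time of writing but should be checked against the version in use, and hostnames/URLs are illustrative:

```yaml
# Where the app sees the replicated database (FUSE mount)
fuse:
  dir: "/litefs"
# Where LiteFS keeps its internal copy of the data
data:
  dir: "/var/lib/litefs"
# Consul-backed lease: whichever node holds the lease is the primary
lease:
  type: "consul"
  candidate: true                       # this node may become primary
  advertise-url: "http://app-1:20202"   # how replicas reach this node
  consul:
    url: "http://localhost:8500"
    key: "litefs/primary"
```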
When SQLite Beats Postgres
SQLite is the best choice when:
- 1-process app (single-instance VM, monolithic Lambda).
- Simple to moderate queries, no cross-server joins.
- Data volume up to ~100 GB comfortably, ~1 TB with discipline.
- Zero latency to DB (same process).
- Simple deployment without a separate DB server.
When SQLite Isn’t Enough
The honest limits:
- Multiple writing processes: write serialisation is the bottleneck.
- Multiple app instances: a SQLite file can't be shared across servers without extra machinery (LiteFS helps for reads; writes still go to one primary).
- Massive analytical queries: use DuckDB.
- Postgres-specific extensions (pgvector at scale, PostGIS).
- DB-level roles/permissions: SQLite doesn’t have them.
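The single-writer limit above can be observed directly. A sketch using stdlib sqlite3 (paths illustrative): a second connection trying to start a write transaction while the first holds the write lock blocks, then fails with SQLITE_BUSY.

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "app.db")

w1 = sqlite3.connect(path, isolation_level=None)  # autocommit; manual txns
w1.execute("PRAGMA journal_mode = WAL")
w1.execute("CREATE TABLE t (x INTEGER)")
w1.execute("BEGIN IMMEDIATE")  # hold the single write lock

w2 = sqlite3.connect(path, isolation_level=None)
w2.execute("PRAGMA busy_timeout = 100")  # retry for 100 ms, then give up
try:
    w2.execute("BEGIN IMMEDIATE")  # second writer: waits, then fails
    blocked = False
except sqlite3.OperationalError:  # 'database is locked', i.e. SQLITE_BUSY
    blocked = True
print(blocked)  # True
```

In practice a sensible busy_timeout plus short write transactions keeps this invisible at moderate write rates; it only becomes the bottleneck when several processes write heavily at once.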
Conclusion
SQLite is a serious option for single-instance production applications. With WAL mode, optimised pragmas, Litestream, and optionally LiteFS or rqlite for HA, it covers cases many teams assume require PostgreSQL. The operational advantage is huge: no DB server, no network, no permissions, no complex backup. For the vast majority of small-to-medium apps, starting with SQLite and growing to Postgres only if really needed is the correct pragmatic strategy.