SQLite has been the world’s most deployed database for twenty-five years, but for most of that time it was dismissed as a toy when it came to servers. That perception has shifted in the last three years thanks to three forces: a mature WAL mode, NVMe disks now universal even on cheap VPS plans, and projects like Litestream and libSQL that solved the two classic objections, lack of replication and lack of remote access. With that context, a single SQLite database serving a midsize web application has moved from curiosity to reasonable default in many cases. This post collects what works and what doesn’t after several years of watching the pattern in production.
WAL mode changes everything
The gap between SQLite in the default rollback-journal mode and SQLite with WAL enabled is so wide that talking about the two as the same database is misleading. Rollback mode serializes writes and reads on a single file lock, limiting useful concurrency to a handful of queries per second. WAL mode (a write-ahead log kept in a separate file) allows reads to proceed while a write is active, so contention is limited to simultaneous writers. For web apps with a read-to-write ratio around 95:5, the performance difference is two orders of magnitude.
Enabling WAL is a single statement at database open time, but it has implications worth knowing. The WAL file grows while transactions are active and is checkpointed back into the main file automatically when it reaches a threshold, 1000 pages by default. Under high write loads, raising that threshold helps, at the cost of a larger WAL file on disk between checkpoints. The WAL file also lives alongside the main file on the same filesystem; setups that try to split them across volumes are a classic mistake that breaks atomicity.
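As a minimal sketch of that open-time setup in Python’s stdlib sqlite3 module; the path and the 4000-page threshold are illustrative values, not recommendations:

```python
import os
import sqlite3
import tempfile

# Open (or create) the database file; the path here is a throwaway.
db_path = os.path.join(tempfile.mkdtemp(), "app.db")
conn = sqlite3.connect(db_path)

# Switch to write-ahead logging. The setting persists in the database
# file, so running it on every open is redundant but harmless.
conn.execute("PRAGMA journal_mode=WAL")

# Raise the auto-checkpoint threshold from the default 1000 pages so
# checkpoints run less often under heavy write load, at the cost of a
# larger WAL file between checkpoints.
conn.execute("PRAGMA wal_autocheckpoint=4000")

mode = conn.execute("PRAGMA journal_mode").fetchone()[0]
print(mode)  # → wal
conn.close()
```

Note that `journal_mode=WAL` is sticky (stored in the file), while `wal_autocheckpoint` is per-connection and must be set on every open.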
The second server-side performance lever is tuning page size and cache. SQLite defaults to 4 KiB pages, which fits most loads, but for tables with small rows and many random reads, bumping to 8 KiB or 16 KiB at database creation can improve filesystem cache efficiency. The per-connection internal cache, controlled by cache_size, is better expressed as a negative value, which means KiB instead of pages, and raised to several megabytes when memory allows. These tweaks aren’t dramatic individually, but together they separate a smooth application from one that feels slow for no visible reason.
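A sketch of both levers together; the 8 KiB page size and 64 MiB cache are illustrative values, and the schema is a placeholder:

```python
import os
import sqlite3
import tempfile

db_path = os.path.join(tempfile.mkdtemp(), "tuned.db")
conn = sqlite3.connect(db_path)

# page_size only takes effect while the database is still empty
# (or after a VACUUM), so set it before creating any tables.
conn.execute("PRAGMA page_size=8192")
conn.execute("CREATE TABLE kv (k TEXT PRIMARY KEY, v TEXT)")

# A negative cache_size is interpreted as KiB rather than pages:
# -65536 asks for a 64 MiB page cache on this connection.
conn.execute("PRAGMA cache_size=-65536")

page = conn.execute("PRAGMA page_size").fetchone()[0]
cache = conn.execute("PRAGMA cache_size").fetchone()[0]
conn.close()
```

Like `wal_autocheckpoint`, `cache_size` is per-connection, so it belongs in whatever function your application uses to open connections.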
Replication with Litestream was the turning point
SQLite’s historical Achilles heel on servers was backup. Until Litestream arrived in 2021, the only decent way to back up a live database was to stop writes, copy the file, and resume. On any system with real traffic, that isn’t viable. Litestream solved the problem by streaming the WAL to an object store like S3 in near real time, enabling point-in-time restore with at most a few seconds of data loss.
What started as a clever trick has become the piece that makes serious production SQLite viable. In the projects I’ve seen deployed, Litestream, pointed at an S3-compatible bucket such as Hetzner Object Storage or Cloudflare R2, costs pennies a month in storage and protects against disk failure or outright server loss. The restore process is one command: download the bucket state, reconstruct the database to the chosen moment, and launch the app. I’ve timed full restores of 40 GB databases at under six minutes.
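For concreteness, a minimal Litestream configuration of this kind might look like the following; the bucket name, paths, and endpoint are placeholders, and the exact keys can vary by Litestream version:

```yaml
# /etc/litestream.yml — stream the WAL of a local database to an
# S3-compatible bucket. All names here are placeholders.
dbs:
  - path: /var/lib/app/app.db
    replicas:
      - type: s3
        bucket: my-app-backups
        path: app.db
        endpoint: https://objectstorage.example.com
```

With that in place, a point-in-time restore is roughly `litestream restore -o /var/lib/app/app.db s3://my-app-backups/app.db`: it downloads the latest snapshot and replays the retained WAL segments up to the chosen moment.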
The most recent evolution is libSQL, the SQLite fork Turso maintains. It adds embedded synchronized replication: a primary database on the server with local replicas on each client or in regional nodes. For geographically distributed applications, libSQL solves the local-read problem without running multi-region Postgres. The stable 1.0 release of Turso Database landed this autumn and already shows up in latency-critical web deployments.
Workloads where SQLite is still the best choice
My experience is that server-side SQLite with WAL and Litestream comfortably covers most web applications with fewer than a thousand concurrent users and databases under 100 GB. That includes many management systems, internal dashboards, moderately dynamic content sites, midsize APIs, and essentially anything short of a shop with massive traffic spikes or a social network with write-intensive load. The most visible success story is sqlite.org itself, which serves its entire site, several million page views a month, from a SQLite database.
The second niche where it shines is development and testing. Being able to swap Postgres for SQLite in the CI environment cuts test startup time from tens of seconds to under two seconds. Most standard SQL is compatible, and when something diverges, like window function syntax or the more exotic data types, it surfaces early. Several teams I know run SQLite in development and Postgres in production, accepting the small divergence in exchange for a much faster feedback loop.
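As a sketch of what the swap buys in CI, the same fixture code can run against an in-memory SQLite database with zero setup; the schema and data here are illustrative:

```python
import sqlite3

# In CI, point the test suite at an in-memory SQLite database instead
# of a Postgres instance; setup is instant and leaves nothing behind.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO users (name) VALUES (?)",
                 [("ada",), ("grace",)])

# Standard SQL, including window functions (SQLite 3.25+), runs the
# same way it would against Postgres.
rows = conn.execute(
    "SELECT name, ROW_NUMBER() OVER (ORDER BY name) FROM users"
).fetchall()
print(rows)  # → [('ada', 1), ('grace', 2)]
conn.close()
```

The divergences the paragraph mentions tend to surface exactly here: a query that leans on a Postgres-only type or function fails on the first CI run rather than in production.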
The third space is desktop and mobile applications that sync to the cloud. SQLite never left that space, but with libSQL’s bidirectional sync, the pattern of a client-local database replicating against a central server has simplified drastically.
Where SQLite is still a bad idea
There are cases where server-side SQLite doesn’t pay off, and it’s worth being honest about them. The first is any sustained, write-intensive concurrent workload. SQLite serializes writers by design, and though WAL reduces the friction, a system with hundreds of write transactions per second will hit a ceiling. Postgres, MariaDB, or even MySQL with InnoDB absorb orders of magnitude more concurrent writes.
The second is a multi-process architecture with several application servers accessing the same file. SQLite isn’t meant for networked file access: locking over NFS and similar filesystems is unreliable and has caused corruption in real projects. If your architecture requires more than one server reading and writing the same database, SQLite isn’t the right tool, period.
The third is needing advanced Postgres features: serious geospatial support, production-grade full-text search, or two-phase commit. SQLite has minimal versions of many things, but comparing its FTS5 full-text search to what Postgres offers with tsvector and pg_trgm is like comparing a bicycle to a car. If the problem justifies the advanced tools, the advanced tools are Postgres.
When it pays off
My reading after several years of watching real deployments is that the question is no longer whether SQLite can sustain a web server, because it can. The question is whether your concrete case fits its strength profile. If your app reads far more than it writes, fits on a single server, doesn’t need synchronous multi-node replication, and you value the operational simplicity of a file with no server process, SQLite with WAL and Litestream is probably the lowest total cost option in engineering and operation.
If instead your system will grow to several servers, have high concurrent writes, or you already have Postgres running painlessly, there’s no reason to switch. The beauty of SQLite in 2025 isn’t that it replaced Postgres, but that it stopped being an eccentric option for a certain range of problems. That range is larger than most people assume, and taking it seriously can save weeks of infrastructure assembly that adds nothing to a service with 200 monthly active users.