Architecture

SQLite and DuckDB: When Each Is the Right Choice


Updated: 2026-05-03

SQLite[1] and DuckDB[2] share something striking: both are embedded databases — a library living in your process, no separate server. And yet they solve different problems. SQLite is the king of small transactions and local persistence; DuckDB is the king of fast columnar analytics without infrastructure. Choosing wrong has real cost, and combining them where it makes sense multiplies the value of both.

The Common Ground: Embedded

Both discard the client-server model. Your application imports the library and runs SQL on a file. No systemd, no ports, no users, no replication as a central concern. This drastically reduces operational complexity:

  • SQLite: a .db file (plus temporary WAL/journal files). Dumps, copies, and migrations are cp.
  • DuckDB: a .duckdb file (or in memory, or directly on Parquet).

This embedded pattern is perfect for mobile apps, desktop, single-instance services, analytic notebooks, and local ETL pipelines. It’s the anti-pattern for systems with dozens of concurrent services against the same database.

Structural Difference: Rows vs Columns

The separation isn’t cosmetic. It’s architectural:

  • SQLite stores data by rows. Each row is a contiguous block. Perfect for SELECT * FROM users WHERE id = 42 — read a whole row fast.
  • DuckDB stores data by columns. Each column is a contiguous block. Perfect for SELECT AVG(amount) FROM transactions WHERE year=2023 — read few fields from many rows.

OLTP vs OLAP, in embedded form.

When to Choose SQLite

Scenarios where SQLite is clearly right:

  • Mobile app with local DB: iOS, Android, and desktop. SQLite is the de facto standard on these platforms.
  • App configuration and persistence: Firefox, Chrome, and dozens of applications use it this way.
  • Single-node web service with mixed moderate read/write.
  • Prototype or MVP before migrating to PostgreSQL.
  • Durable transactional local backup with full ACID.
  • WASM in the browser via sql.js[3] or similar.

SQLite does well exactly what it is meant to do: small transactions, high integrity, low per-operation latency.

When to Choose DuckDB

Scenarios where DuckDB shines:

  • Log analysis: millions of rows, aggregate queries.
  • Query Parquet/CSV files directly without importing: SELECT * FROM 'data.parquet' works out of the box.
  • Replace pandas in mid-sized pipelines — more memory-efficient, with native SQL.
  • Ad-hoc notebook analysis: Jupyter + DuckDB is a very powerful combination.
  • Consolidate data from OLTP databases for reporting without touching production (CDC + DuckDB).
  • Vectorized processing over tabular data without distributed infrastructure.

DuckDB is, in practice, what "pandas with native SQL" should have been all along.

Python: Both in the Same Session

Both are trivial from Python:

python
# SQLite — OLTP, transactional persistence
import sqlite3

sqlite_con = sqlite3.connect('app.db')
sqlite_con.execute("CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT)")
sqlite_con.commit()

# DuckDB — OLAP, fast analysis (a separate connection, not the same variable)
import duckdb

duck_con = duckdb.connect('analytics.duckdb')
# Query Parquet directly, no import needed
df = duck_con.execute("SELECT * FROM 'events.parquet' WHERE type = 'signup'").df()

DuckDB has especially polished Python integration: pandas and Polars DataFrames are consumed and produced without serious overhead.

Using Them Together

Combining them is a very productive pattern:

  • OLTP in SQLite, OLAP in DuckDB: the app writes to SQLite; a periodic job dumps changes to DuckDB/Parquet for analysis.
  • DuckDB reads SQLite directly via the sqlite_scanner extension: SELECT * FROM sqlite_scan('app.db', 'users'). No manual ETL.
  • Archive old SQLite data to Parquet: DuckDB queries historicals without bloating the production DB.

The “SQLite for hot + DuckDB for cold” pattern reduces complexity while keeping each engine in its optimal terrain.

Real Limitations

SQLite is not for:

  • High concurrent writes — even with WAL, the write lock is global.
  • Multiple processes writing simultaneously — subtle race conditions exist.
  • Giant DBs (>100 GB) — technically possible, operationally awkward.
  • Built-in replication — tools like Litestream[4] fill this gap.

DuckDB is not for:

  • Real OLTP — no fine-grained locking, concurrency designed for analysis.
  • Sub-millisecond latency — fast but optimised for analytic queries.
  • Workloads with many concurrent writes — wrong use case.

Quick Verdict

How to decide in ten seconds:

  • Short transactions, high integrity, one active process? SQLite.
  • Analytic queries over large volumes, few writes? DuckDB.
  • Mobile, desktop, or simple service app? SQLite.
  • Notebooks, local ETL, replace pandas? DuckDB.
  • Both? Use them together.

Any honest doubt is resolved by looking at the dominant query: if it is almost always a WHERE id = ?, SQLite; if it is a GROUP BY over many rows, DuckDB.

Conclusion

SQLite and DuckDB are complementary tools, not competitors. Both represent the “embedded matters” philosophy applied to different problems. For local OLTP, SQLite is unbeatable for simplicity and robustness. For infra-less OLAP, DuckDB has made many complex solutions obsolete. Knowing both and applying the right one is a real technical advantage — and combining them where it makes sense multiplies the value of both.

  1. SQLite
  2. DuckDB
  3. sql.js
  4. Litestream

Written by

CEO - Jacar Systems

Passionate about technology, cloud infrastructure and artificial intelligence. Writes about DevOps, AI, platforms and software from Madrid.