Technology

Dragonfly: the modern cache inspired by Redis

Updated: 2026-05-03

Dragonfly turns three as a Redis-protocol-compatible alternative, and in 2025 we can talk about it with data in hand rather than launch-day marketing. The conversation has shifted: two years ago the question was whether it would survive the weight of Redis, and today the question is whether it makes sense to deploy it by default for certain patterns.

Key takeaways

  • Dragonfly uses a multithreaded shared-nothing architecture with one thread per data shard — an 8-core node uses all 8, while Redis uses only 1.
  • The fork-free snapshot algorithm keeps latency flat while persisting — on tens-of-GB workloads, it is the difference between a visible latency spike every few minutes and a flat line.
  • Redis protocol compatibility is solid for standard cases (BullMQ, Sidekiq, PHP caches) — rough edges appear with dynamic modules like RedisSearch and RedisJSON.
  • The pattern where Dragonfly clearly wins is the dense cache: a 128 GB, 16-core node can replace a 4–6 node Redis cluster.
  • Since Valkey was born as a Redis 7.2 fork under the Linux Foundation (Apache 2.0), the decision is between three options: Dragonfly, official Redis, or Valkey — the determining factor is usually licensing more than pure performance.

What makes Dragonfly different

Redis has been the reference for over a decade, and its single-threaded-per-node model is intentional: it avoids locks and simplifies reasoning about atomicity. Dragonfly starts from the same command model and the same network interface, but internally it is a multithreaded system with a shared-nothing architecture. Each thread manages its own slice of data and coordination between threads happens through message passing, not shared locks.

The practical effect is that a Dragonfly node with eight cores can use all eight, while a traditional Redis node uses one and leaves the rest of the machine idle. For light loads this does not matter. For dense loads, with peaks of hundreds of thousands of requests per second, Dragonfly needs fewer nodes, each better utilized, at the cost of a configuration slightly more sensitive to machine size.
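The shared-nothing idea can be sketched in a few lines. This is an illustrative toy, not Dragonfly's actual implementation: each shard's dictionary is touched only by its owning thread, and callers route requests through per-shard queues instead of taking locks. The shard count, hashing, and command set here are all simplifying assumptions.

```python
import queue
import threading

NUM_SHARDS = 8  # an 8-core node would run one shard thread per core

shards = [{} for _ in range(NUM_SHARDS)]           # per-shard key space
mailboxes = [queue.Queue() for _ in range(NUM_SHARDS)]

def shard_for(key: str) -> int:
    # Route each key to exactly one owning shard (toy hashing).
    return hash(key) % NUM_SHARDS

def shard_loop(i: int) -> None:
    # Only this thread ever touches shards[i]: message passing, not locks.
    while True:
        op, key, value, reply = mailboxes[i].get()
        if op == "STOP":
            break
        if op == "SET":
            shards[i][key] = value
            reply.put("OK")
        elif op == "GET":
            reply.put(shards[i].get(key))

workers = [threading.Thread(target=shard_loop, args=(i,)) for i in range(NUM_SHARDS)]
for w in workers:
    w.start()

def send(op, key, value=None):
    # A client call is just a message to the owning shard plus a reply queue.
    reply = queue.Queue()
    mailboxes[shard_for(key)].put((op, key, value, reply))
    return reply.get()

print(send("SET", "user:42", "alice"))  # -> OK
print(send("GET", "user:42"))           # -> alice

for box in mailboxes:
    box.put(("STOP", None, None, None))
for w in workers:
    w.join()
```

The design choice this illustrates is why Dragonfly can avoid the locks Redis was built to avoid by being single-threaded: ownership is exclusive, so coordination only happens at the message boundary.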

The other major architectural change is how snapshots work. Redis clones the process via fork to dump memory to disk, which works fine on small instances but degrades latency when memory used is large. Dragonfly implements its own snapshot algorithm that does not rely on fork, so latency stays flat while persistence happens. This sounds like a detail, but on workloads of dozens of gigabytes it is the difference between a visible latency spike every few minutes and a flat line.
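A fork-free snapshot can be sketched with per-entry versioning. This is a hedged approximation of the idea, not Dragonfly's algorithm: starting a snapshot records a logical epoch, and any write that would overwrite a pre-epoch value serializes the old value first, so the snapshot stays consistent with its point in time without cloning the process. All names here are illustrative.

```python
class Store:
    def __init__(self):
        self.data = {}            # key -> (value, version)
        self.clock = 0            # logical version counter
        self.snapshot_epoch = None
        self.snapshot_out = None  # stand-in for the file being written

    def start_snapshot(self):
        # Freeze a point in time; nothing is forked or copied up front.
        self.clock += 1
        self.snapshot_epoch = self.clock
        self.snapshot_out = {}

    def set(self, key, value):
        self.clock += 1
        if self.snapshot_epoch is not None:
            old = self.data.get(key)
            # Entry-level copy-on-write: save the pre-snapshot value
            # before overwriting it, if it was not saved already.
            if old is not None and old[1] < self.snapshot_epoch \
                    and key not in self.snapshot_out:
                self.snapshot_out[key] = old[0]
        self.data[key] = (value, self.clock)

    def finish_snapshot(self):
        # Flush every entry the writes did not already push out.
        for key, (value, version) in self.data.items():
            if version < self.snapshot_epoch and key not in self.snapshot_out:
                self.snapshot_out[key] = value
        snap = self.snapshot_out
        self.snapshot_out, self.snapshot_epoch = None, None
        return snap

store = Store()
store.set("a", 1)
store.set("b", 2)
store.start_snapshot()
store.set("a", 99)              # mutation during the snapshot
snapshot = store.finish_snapshot()
print(snapshot)                 # -> {'a': 1, 'b': 2}
```

The snapshot captures `a` as it was when the snapshot started, even though the live store already holds the new value, and the cost of a concurrent write is one extra entry copy rather than a process-wide fork.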

Real compatibility with Redis

Dragonfly is marketed as a drop-in replacement for the Redis protocol, and in 2025 that claim is much more solid than in 2022. The main commands are covered, the usual data structures work, and most official Redis clients connect without changes. I have tested BullMQ workers, Sidekiq queues, and PHP application caches against Dragonfly and the switch was transparent in all three cases.
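The reason clients connect without changes is that compatibility lives at the wire level: Dragonfly speaks RESP, the Redis serialization protocol. The encoder below is a minimal illustrative sketch of how a client frames a command as an array of bulk strings; the same bytes are valid against Redis, Valkey, or Dragonfly.

```python
def encode_command(*parts: str) -> bytes:
    # RESP frames a command as *<count>\r\n followed by one
    # bulk string per argument: $<len>\r\n<data>\r\n.
    out = [f"*{len(parts)}\r\n".encode()]
    for p in parts:
        data = p.encode()
        out.append(b"$%d\r\n%s\r\n" % (len(data), data))
    return b"".join(out)

wire = encode_command("SET", "greeting", "hola")
print(wire)
```

Because the protocol is this simple and stable, swapping the server behind port 6379 is invisible to a well-behaved client; the rough edges listed below live above this layer, in command semantics and modules.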

Where the rough edges show up is in less central but critical features:

  • Replication with the native Redis protocol works, but with limitations in complex topologies.
  • Redis Streams are supported but some very specific consumers notice subtle differences.
  • Extensions through dynamic modules — such as RedisSearch or RedisJSON — do not load: Dragonfly implements part of that functionality natively, with its own search and JSON support, but with a slightly different API.

Cases where it pays off

After seeing several deployments, the pattern where Dragonfly clearly beats Redis is the dense cache. A single Dragonfly node with 128 gigabytes of memory and sixteen cores can replace a small Redis cluster of four or six nodes, simplifying operations and cutting infrastructure cost appreciably.
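The back-of-the-envelope sizing behind that claim is straightforward. The numbers below are illustrative assumptions, not benchmarks: pick your hot working set and the usable memory per node, and the node counts fall out.

```python
import math

# Illustrative assumptions, not measured figures.
dataset_gb = 110          # hot cache working set
redis_node_gb = 24        # usable memory per small single-threaded Redis node
dragonfly_node_gb = 128   # one large multithreaded Dragonfly node

redis_nodes = math.ceil(dataset_gb / redis_node_gb)
dragonfly_nodes = math.ceil(dataset_gb / dragonfly_node_gb)
print(redis_nodes, dragonfly_nodes)  # -> 5 1
```

The operational win is not just the raw node count: fewer nodes means fewer failover pairs, fewer cluster slots to rebalance, and fewer machines to patch.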

The second case is unpredictable traffic spikes. With traditional Redis, a sudden write surge can saturate the single thread and queue up. With Dragonfly, load spreads across threads and degradation is smoother. This matters especially in e-commerce with campaign peaks, in media platforms with unpredictable virality, or in public APIs with bursty traffic.

The third, less obvious case is persistence with large state. If your use case requires reloading millions of keys after a restart, the speed difference on snapshots and recovery can save minutes of downtime per deployment.
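The "minutes per deployment" claim is simple arithmetic once you fix a load rate. Both rates below are assumptions for illustration, not measurements from either engine:

```python
# Rough downtime math for reloading state after a restart.
keys = 50_000_000
slow_keys_per_sec = 150_000   # assumed slower snapshot load path
fast_keys_per_sec = 600_000   # assumed faster recovery path

slow_minutes = keys / slow_keys_per_sec / 60
fast_minutes = keys / fast_keys_per_sec / 60
print(round(slow_minutes, 1), round(fast_minutes, 1))  # -> 5.6 1.4
```

Multiply that gap by every restart, deploy, and failover in a year and the difference stops being cosmetic.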

In-memory data fabric architecture with persistence, replication, and access layers — the type of design Dragonfly modernizes compared to Redis with its multithreaded approach and fork-free snapshots

Where sticking with Redis still makes sense

It is not all upside. There are three scenarios where I would still pick Redis, or now Valkey:

  1. The team already has deep Redis experience and internal tooling built around it. Changing engines for a marginal gain does not offset the training cost.
  2. The application depends on modules like RedisSearch, RedisTimeSeries, or RedisGraph in their official versions. Even when Dragonfly has equivalents, functional parity is not yet complete.
  3. Enterprise clients who pay for direct commercial support. Redis Ltd. and now Valkey through the Linux Foundation offer mature support channels with SLAs. Dragonfly has commercial support through Dragonfly Labs, but the partner ecosystem and community depth are not yet comparable.

The Valkey factor in the equation

Since March 2024, with Redis changing its license to RSAL and SSPL and Valkey being born under the Linux Foundation, the comparison is no longer just Dragonfly versus Redis. Valkey is a direct fork of Redis 7.2 with open governance and BSD license, and in 2025 the big public clouds have migrated their managed offerings toward Valkey.

My read today:

  • For greenfield deployments with no inertia: Dragonfly deserves a spot on the shortlist.
  • For existing deployments on open Redis: Valkey is the lowest-friction path.
  • For existing deployments on commercial Redis versions: the decision depends more on vendor relationships than on any technical datapoint.

My read

Dragonfly has matured into a defensible option rather than an experiment. Its multithreaded architecture fits dense workloads and modern hardware, and its fork-free snapshot algorithm solves a real problem that Redis has carried since its origin. But it is not a silver bullet: the Redis ecosystem, community, and commercial support remain real advantages that take time to rebuild.

The practical recommendation is to test it in staging with a copy of real traffic before making final decisions. The savings metrics tend to be flashy, but compatibility edges only surface under real testing. And in caches, the rare edges are the ones that break production at the worst moment.


Written by

CEO - Jacar Systems

Passionate about technology, cloud infrastructure and artificial intelligence. Writes about DevOps, AI, platforms and software from Madrid.