Dragonfly turns three as a Redis-protocol-compatible alternative, and in 2025 we can talk about it with data in hand rather than launch-day marketing. The conversation has shifted: two years ago the question was whether it would survive the weight of Redis, and today the question is whether it makes sense to deploy it by default for certain patterns. This post covers what has changed, where it fits, and where it is still prudent to stick with what works.
What makes Dragonfly different
Redis has been the reference for over a decade, and its single-threaded-per-node model is intentional: it avoids locks and simplifies reasoning about atomicity. Dragonfly starts from the same command model and the same network interface, but internally it is a multithreaded system with a shared-nothing architecture. Each thread manages its own slice of data and coordination between threads happens through message passing, not shared locks.
The practical effect is that a Dragonfly node with eight cores can use all eight, while a traditional Redis node uses one and leaves the rest of the machine idle. For light loads this does not matter: a properly sized Redis handles tens of thousands of requests per second on a single core. For dense loads, with peaks of hundreds of thousands of requests per second, Dragonfly gets by with fewer nodes, each better utilized, at the cost of a configuration slightly more sensitive to machine size.
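The shared-nothing idea can be sketched in a few lines. This is an illustrative toy, not Dragonfly's actual code: each worker thread owns one shard of the keyspace outright, and every SET/GET travels through that shard's message queue instead of grabbing a shared lock.

```python
# Toy shared-nothing store: one thread per shard, message passing,
# no locks on the data itself (illustrative, not Dragonfly's code).
import queue
import threading

NUM_SHARDS = 4

class Shard(threading.Thread):
    def __init__(self):
        super().__init__(daemon=True)
        self.inbox = queue.Queue()
        self.data = {}  # owned exclusively by this thread

    def run(self):
        while True:
            op, key, value, reply = self.inbox.get()
            if op == "SET":
                self.data[key] = value
                reply.put("OK")
            elif op == "GET":
                reply.put(self.data.get(key))

class Store:
    def __init__(self):
        self.shards = [Shard() for _ in range(NUM_SHARDS)]
        for s in self.shards:
            s.start()

    def _shard(self, key):
        # Deterministic key-to-shard routing, as in any sharded design.
        return self.shards[hash(key) % NUM_SHARDS]

    def set(self, key, value):
        reply = queue.Queue()
        self._shard(key).inbox.put(("SET", key, value, reply))
        return reply.get()

    def get(self, key):
        reply = queue.Queue()
        self._shard(key).inbox.put(("GET", key, None, reply))
        return reply.get()

store = Store()
store.set("user:1", "alice")
print(store.get("user:1"))  # alice
```

The point of the exercise: because only the owning thread ever touches a shard's dictionary, all cores can run commands concurrently without a single lock on the data, which is the property that lets Dragonfly scale with core count.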
The other major architectural change is how snapshots work. Redis clones the process via fork to dump memory to disk, which works fine on small instances but degrades latency when memory usage is large, because the operating system has to duplicate pages as they change (copy-on-write). Dragonfly implements its own snapshot algorithm that does not rely on fork, so latency stays flat while persistence runs. This sounds like a detail, but on workloads of dozens of gigabytes it is the difference between a visible latency spike every few minutes and a flat line.
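A simplified sketch of the fork-free idea, under my own assumptions rather than Dragonfly's real implementation: stamp every entry with a version, bump a snapshot epoch when a dump starts, and if a write is about to overwrite an entry the background scan has not reached yet, copy the old value out first. The dump stays a consistent point-in-time view with no process fork.

```python
# Illustrative fork-free point-in-time snapshot via per-entry versions.
# Not Dragonfly's actual algorithm, just the general shape of the idea.
class Entry:
    def __init__(self, value, version):
        self.value = value
        self.version = version

class Store:
    def __init__(self):
        self.data = {}
        self.epoch = 0    # incremented when a snapshot begins
        self.dump = None  # snapshot output (a plain dict here)

    def set(self, key, value):
        entry = self.data.get(key)
        # A write during a snapshot to a not-yet-dumped entry
        # serializes the old value first, keeping the dump consistent.
        if self.dump is not None and entry and entry.version < self.epoch:
            self.dump[key] = entry.value
        self.data[key] = Entry(value, self.epoch)

    def start_snapshot(self):
        self.epoch += 1
        self.dump = {}

    def scan_step(self, key):
        # The background scan dumps entries writes have not touched yet.
        entry = self.data.get(key)
        if entry and entry.version < self.epoch:
            self.dump[key] = entry.value
            entry.version = self.epoch

store = Store()
store.set("a", 1)
store.set("b", 2)
store.start_snapshot()
store.set("a", 99)    # write during snapshot: old value 1 goes to the dump
store.scan_step("b")  # scan reaches "b" later
print(store.dump)     # {'a': 1, 'b': 2}
```

The live dataset keeps moving (key "a" is already 99) while the dump reflects the moment the snapshot began, without the kernel duplicating memory pages behind the scenes.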
Real compatibility with Redis
Dragonfly is marketed as a drop-in replacement for the Redis protocol, and in 2025 that claim is much more solid than in 2022. The main commands are covered, the usual data structures work, and most official Redis clients connect without changes. I have tested BullMQ workers, Sidekiq queues, and PHP application caches against Dragonfly, and the switch was transparent in all three cases.
Where the rough edges show up is in less central features that are critical in certain environments. Replication with the native Redis protocol works, but with limitations in complex topologies. Redis Streams are supported, but some very specific consumers notice subtle differences. Extensions through dynamic modules, such as RediSearch or RedisJSON, do not load: Dragonfly implements part of that functionality natively, with its own search and JSON support, but with a slightly different API. If your application depends on a specific module, it is worth verifying carefully before migrating.
Cases where it pays off
After seeing several deployments, the pattern where Dragonfly clearly beats Redis is the dense cache. A single Dragonfly node with 128 gigabytes of memory and sixteen cores can replace a small Redis cluster of four or six nodes, simplifying operations and cutting infrastructure cost appreciably. For shops running Redis as a cache layer in front of a relational database, this consolidation is the main argument.
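The consolidation math is back-of-the-envelope territory. With hypothetical sizes of my own choosing (a 120 GB dataset and 32 GB of usable memory per small Redis node, not a benchmark), the node count difference looks like this:

```python
# Back-of-the-envelope consolidation math. All numbers are
# hypothetical assumptions for illustration, not measurements.
dataset_gb = 120

redis_node_gb = 32                              # usable memory per small Redis node
redis_nodes = -(-dataset_gb // redis_node_gb)   # ceiling division
dragonfly_nodes = 1                             # one 128 GB node covers the dataset

print(redis_nodes)      # 4
print(dragonfly_nodes)  # 1
```

Four nodes versus one is also four sets of monitoring, failover, and upgrade work versus one, which is usually the part that matters more than the hardware bill.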
The second case is unpredictable traffic spikes. With traditional Redis, a sudden write surge can saturate the single thread, and requests queue up behind it. With Dragonfly, the load spreads across threads and degradation is smoother. This matters especially in e-commerce with campaign peaks, in media platforms with unpredictable virality, or in public APIs with bursty traffic.
The third and less obvious case is persistence with large state. If your use case requires reloading millions of keys after a restart, the difference in snapshot and recovery speed can save minutes of downtime per deployment, which adds up in environments with frequent releases.
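To make the "minutes per deployment" claim concrete, here is a rough downtime estimate with load rates I am assuming for illustration (50 million keys, 300k vs 1.2M keys per second; real rates depend entirely on key sizes and disks):

```python
# Rough restart-recovery estimate. The key count and load rates
# below are hypothetical assumptions, not measured figures.
keys = 50_000_000        # keys to reload after a restart
slow_rate = 300_000      # keys/sec with a slower loader
fast_rate = 1_200_000    # keys/sec with a faster loader

slow_seconds = keys / slow_rate   # ~167 s
fast_seconds = keys / fast_rate   # ~42 s
saved_minutes = (slow_seconds - fast_seconds) / 60
print(round(saved_minutes, 1))    # 2.1
```

Two minutes saved per restart sounds modest until you multiply it by daily releases across a quarter.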
Where sticking with Redis still makes sense
It is not all upside. There are three scenarios where I would still pick Redis, or now Valkey, its Linux Foundation-led fork. The first is when the team already has deep Redis experience and internal tooling built around it. Changing engines for a marginal gain does not offset the retraining cost and the work of rewriting operational scripts.
The second is when the application depends on modules like RediSearch, RedisTimeSeries, or RedisGraph in their official versions. Even where Dragonfly has equivalents, functional parity is not yet complete and semantic differences can bite in production.
The third and most delicate scenario is enterprise clients who pay for direct commercial support. Redis Ltd. and now Valkey through the Linux Foundation offer mature support channels, with SLAs and accumulated expertise in complex deployments. Dragonfly has commercial support through the company behind the project, but the partner ecosystem and community depth are not yet comparable.
The Valkey factor in the equation
Since March 2024, when Redis changed its license to the dual RSALv2/SSPLv1 model and Valkey was born under the Linux Foundation, the comparison is no longer just Dragonfly versus Redis. Valkey is a direct fork of Redis 7.2 with open governance and a BSD license, and in 2025 the big public clouds have migrated their managed offerings toward Valkey. That changes the math: today the choice is usually Dragonfly, official Redis, or Valkey, and the answer depends more on the licensing model than on any specific performance number.
My read today is that, for greenfield deployments where there is no inertia, Dragonfly deserves a spot on the shortlist. For existing deployments on open Redis, Valkey is the lowest-friction path. For existing deployments on commercial Redis versions, the decision depends more on vendor relationships than on any technical datapoint.
My read
Dragonfly has matured into a defensible option rather than an experiment. Its multithreaded architecture fits dense workloads and modern hardware, and its fork-free snapshot algorithm solves a real problem that Redis has carried since its origin. But it is not a silver bullet: the Redis ecosystem, community, and commercial support remain real advantages that take time to rebuild.
The practical recommendation is to test it in staging with a copy of real traffic before making final decisions. The savings metrics tend to be flashy, but compatibility edges only surface under real testing. And in caches, the rare edges are the ones that break production at the worst moment.
For now, on jacar.es I keep Redis 8 on the main infrastructure out of inertia and the weight of its ecosystem, but I am eyeing an auxiliary service where cache density justifies testing Dragonfly seriously. If that test goes well, this conversation might shift in the coming months.