
Redis: Caching Strategies Every Backend Should Know


Updated: 2026-05-03

Redis[1] is the dominant in-memory cache in modern backends. But “put Redis in front” isn’t a strategy — it’s an ingredient. Teams that use Redis effectively have internalised specific patterns of database interaction, TTL design, and invalidation. This article reviews the most important ones.

Key takeaways

  • The four main patterns are cache-aside, read-through, write-through, and write-behind: each has distinct trade-offs in consistency, latency, and complexity.
  • TTL design is the most underestimated decision and one of the biggest drivers of hit rate.
  • Thundering herd is the most common operational problem with caches under load: jitter, request coalescing, and early refresh mitigate it.
  • Explicit invalidation is essential for data where staleness has real consequences (prices, inventory).
  • A hit rate below 50% signals that the cache adds complexity with no net benefit.

The Four Main Patterns

Cache-aside (lazy loading)

The most common pattern. The application queries the cache first; on miss, queries the database and stores the result.

```python
import json

# Assumes `redis` is a connected client (e.g. redis.Redis())
# and `db` is the application's database handle.

def get_user(user_id):
    cached = redis.get(f"user:{user_id}")
    if cached:
        return json.loads(cached)
    # Miss: load from the database and backfill the cache
    user = db.query("SELECT * FROM users WHERE id = %s", user_id)
    redis.setex(f"user:{user_id}", 3600, json.dumps(user))  # TTL: 1 hour
    return user
```

Pros: simple, resilient if Redis is down (app keeps working with latency penalty). Cons: first access always misses; risk of stale data between writes.

Read-through

The application always queries the cache, which takes care of backfilling from the database when missing. Requires a mediating library or proxy.

Pros: simpler application logic. Cons: coupling between cache and database, less flexible when schemas change.
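The mediating layer can be as small as a decorator. A minimal sketch: `read_through`, `DictCache`, and `load_user` are illustrative names, and `DictCache` is an in-memory stand-in for a Redis client so the example runs without a server.

```python
import json

class DictCache:
    """Minimal in-memory stand-in for a Redis client (get/setex only)."""
    def __init__(self):
        self.store = {}
    def get(self, key):
        return self.store.get(key)
    def setex(self, key, ttl, value):
        self.store[key] = value  # TTL ignored in the stand-in

def read_through(cache, key_fn, ttl):
    """Decorator: callers never touch the cache; the wrapper backfills on miss."""
    def decorator(loader):
        def wrapper(*args):
            key = key_fn(*args)
            hit = cache.get(key)
            if hit is not None:
                return json.loads(hit)
            value = loader(*args)
            cache.setex(key, ttl, json.dumps(value))
            return value
        return wrapper
    return decorator

cache = DictCache()

@read_through(cache, key_fn=lambda uid: f"user:{uid}", ttl=3600)
def load_user(uid):
    return {"id": uid, "name": "Ada"}  # stands in for the DB query
```

The coupling mentioned above is visible here: the key scheme and TTL live in the wrapper, not in the application code that calls `load_user`.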

Write-through

On write, the application updates cache and database simultaneously. The cache always reflects current state.

Pros: cache is never stale. Cons: write latency increases because every write touches two systems; if the cache write fails mid-operation, cache and database can diverge.
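A minimal sketch of the write path, with plain dicts standing in for the database and for Redis so it runs anywhere; the function names are illustrative:

```python
import json

db = {}     # stand-in for the database table
cache = {}  # stand-in for Redis (key -> serialized value)

def update_user_write_through(user_id, data):
    """Write-through: persist to the DB, then update the cache in the
    same code path, so reads never see a stale cached value."""
    db[user_id] = data
    cache[f"user:{user_id}"] = json.dumps(data)

def get_user(user_id):
    hit = cache.get(f"user:{user_id}")
    if hit is not None:
        return json.loads(hit)
    return db.get(user_id)
```

Writing the database first is a common ordering choice: if the cache update then fails, the worst case is a miss, not a cached value the database never saw.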

Write-behind (write-back)

The application writes only to the cache; an async process persists to the database later.

Pros: minimal write latency. Cons: data-loss risk if the cache fails before the flush. Only appropriate where durability isn’t critical (analytics counters, intermediate logs).
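The shape of the pattern, again with in-memory stand-ins (in a real deployment the pending-write queue might be a Redis LIST or STREAM, and `flush` would run in a background worker):

```python
import json
from collections import deque

cache = {}       # stand-in for Redis
dirty = deque()  # keys pending persistence
db = {}          # stand-in for the database

def write_behind(user_id, data):
    """Write only to the cache; enqueue the key for later persistence."""
    cache[f"user:{user_id}"] = json.dumps(data)
    dirty.append(user_id)

def flush():
    """Async worker body: drain pending writes into the database."""
    while dirty:
        user_id = dirty.popleft()
        db[user_id] = json.loads(cache[f"user:{user_id}"])
```

The data-loss window is explicit here: anything sitting in `dirty` when the cache dies never reaches `db`.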

TTL Design

A habitually underestimated decision. Too short a TTL generates many unnecessary misses; too long a TTL produces stale data. Four practical rules:

  • Data that changes predictably: TTL = interval between changes. Exchange rate updated every 5 min → TTL 5-10 min.
  • Very stable data: long TTL (hours or days). User profile that changes once a month.
  • Data critical for precision: short TTL + explicit invalidation on writers.
  • Prices, inventory, data where staleness has consequences: explicit invalidation, not just TTL.
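One way to keep these rules from drifting across a codebase is a single policy table. The categories and values below are illustrative, mirroring the rules above:

```python
# Illustrative TTL policy: one place to encode the rules above.
TTL_POLICY = {
    "exchange_rate": 5 * 60,       # changes every ~5 min -> TTL 5-10 min
    "user_profile": 24 * 60 * 60,  # very stable -> long TTL (here: 1 day)
    "inventory": 30,               # precision-critical -> short TTL,
                                   # combined with explicit invalidation
}

def ttl_for(kind):
    """Look up the TTL (in seconds) for a class of data."""
    return TTL_POLICY[kind]
```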

Thundering Herd

A classic problem under load: many concurrent requests hit the same cache miss simultaneously. All call the database at once, saturating it.

Three solutions that can be combined:

  • Lock in cache (request coalescing). The first miss takes a lock; subsequent requests wait and read from the cache once filled.
  • TTL with jitter. Instead of fixed TTL, TTL = base + random between 0 and N%. Avoids thousands of keys expiring simultaneously.
  • Early refresh. When a key is near expiry, one request refreshes it in the background while the rest keep serving the still-valid value.
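The first two mitigations can be sketched together. `ttl_with_jitter` is a pure helper; `DictRedis` is an in-memory stand-in supporting just enough of SET NX to show lock-based coalescing (with a real client the lock would be `redis.set(key, "1", nx=True, ex=30)`):

```python
import random

def ttl_with_jitter(base_seconds, jitter_pct=0.10):
    """Base TTL plus a random extra of up to jitter_pct, so keys written
    together do not all expire in the same instant."""
    return int(base_seconds * (1 + random.uniform(0, jitter_pct)))

class DictRedis:
    """Minimal stand-in: get, delete, and SET with NX semantics."""
    def __init__(self):
        self.store = {}
    def set(self, key, value, nx=False, ex=None):
        if nx and key in self.store:
            return None          # lock already held
        self.store[key] = value  # `ex` ignored in the stand-in
        return True
    def get(self, key):
        return self.store.get(key)
    def delete(self, key):
        self.store.pop(key, None)

def get_with_coalescing(r, key, loader, ttl=300):
    hit = r.get(key)
    if hit is not None:
        return hit
    if r.set(f"lock:{key}", "1", nx=True, ex=30):
        try:
            value = loader()     # only the lock holder hits the database
            r.set(key, value, ex=ttl_with_jitter(ttl))
            return value
        finally:
            r.delete(f"lock:{key}")
    # Lock held by another request: in production, wait briefly and re-read.
    return None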

Invalidation: The Hard Part

“There are only two hard things in Computer Science: cache invalidation and naming things.” — Phil Karlton

Three invalidation strategies:

TTL only

Let it expire. Simple, may be sufficient for non-critical data. Doesn’t work when data freshness matters.

Explicit invalidation in writers

Every code path that writes to the database also invalidates related cache entries.

```python
def update_user(user_id, data):
    db.update(...)  # persist the change first
    redis.delete(f"user:{user_id}")
    redis.delete("user_list:active")  # invalidate related queries too
```

Requires discipline and documentation: any writer that omits invalidation introduces inconsistencies.

Event-based invalidation (CDC)

A database change log (CDC) triggers events that invalidate the cache. More decoupled, but adds operational complexity — suitable for systems with multiple writers or high load.
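The consumer side often reduces to a mapping from change event to cache keys. A sketch assuming a Debezium-style event shape (`table`, `before`, `after`); the table and key names are illustrative:

```python
def keys_to_invalidate(event):
    """Map a CDC change event to the cache keys that must be dropped.
    Assumes a Debezium-style payload: deletes carry only 'before'."""
    table = event["table"]
    row = event["after"] or event["before"]
    if table == "users":
        return [f"user:{row['id']}", "user_list:active"]
    return []

# A consumer loop would then call redis.delete(*keys_to_invalidate(event))
# for each event read from the change stream.
```

Because the mapping lives in one consumer instead of in every writer, the discipline problem from the previous section disappears — at the cost of running the CDC pipeline.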

Redis-Specific Patterns

Four native features that make a difference in production:

  • SCAN instead of KEYS: KEYS blocks the entire server on large databases.
  • Pipelines for batch operations — 100 commands in a pipeline reduce 100 RTTs to 1.
  • Lua scripts for complex atomic operations (e.g., “increment this counter and return the value if < threshold”).
  • Right data structures: Sorted Set for leaderboards, HyperLogLog for approximate cardinality, Stream for event logs, GEO for geolocation.
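The pipeline saving is easy to see with a toy client that counts round trips. The `CountingClient` below is a stand-in, not redis-py; with redis-py the shape is the same (`pipe = r.pipeline()`, a series of `pipe.set(...)`, then `pipe.execute()`):

```python
class Pipeline:
    """Buffers commands client-side; execute() sends them in one round trip."""
    def __init__(self, client):
        self.client, self.buffer = client, []
    def set(self, key, value):
        self.buffer.append((key, value))
        return self
    def execute(self):
        self.client.round_trips += 1  # one trip for the whole batch
        for key, value in self.buffer:
            self.client.store[key] = value

class CountingClient:
    """Toy client that counts network round trips."""
    def __init__(self):
        self.store, self.round_trips = {}, 0
    def set(self, key, value):
        self.round_trips += 1  # each bare command costs a round trip
        self.store[key] = value
    def pipeline(self):
        return Pipeline(self)

r = CountingClient()
for i in range(100):
    r.set(f"a:{i}", i)     # 100 commands -> 100 round trips

pipe = r.pipeline()
for i in range(100):
    pipe.set(f"b:{i}", i)  # buffered, no network traffic yet
pipe.execute()             # all 100 writes in 1 round trip
```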

For related async messaging patterns, see RabbitMQ for message queues. In Rust backends, Redis clients integrate well with tokio and axum via redis-rs. The Grafana Stack for observability lets you monitor hit rate, latency, and Redis memory with prebuilt dashboards.

When Not to Use Cache

Three signals that cache isn’t the answer:

  • Low hit rate (<50%). The cache only adds latency and complexity.
  • Data that changes so fast that TTL would be <1s. Overhead exceeds value.
  • Unique data per request (complex searches with highly variable parameters). No reuse opportunity.

Conclusion

Redis is a powerful tool, but not a magical one. Teams that benefit have invested in conscious pattern design, domain-appropriate TTL, well-orchestrated invalidation, and thundering-herd defence. Without those foundations, “adding Redis” frequently worsens systems instead of improving them.

  1. Redis

Written by

CEO - Jacar Systems

Passionate about technology, cloud infrastructure and artificial intelligence. Writes about DevOps, AI, platforms and software from Madrid.