
Cache Consistency Strategies: TTL, Delayed Double Delete, Cache‑Aside, and Message‑Queue Approaches

This article examines cache consistency challenges in Redis-backed systems and compares several update strategies—including TTL, delayed double‑delete, cache‑aside, and message‑queue approaches—detailing their workflows, code examples, advantages, and drawbacks to guide backend developers toward reliable cache invalidation.

To accelerate system performance, a cache such as Redis is typically placed in front of the database. The typical read flow retrieves data from the cache, falling back to the database on a cache miss.

When data changes, there are three naive orders in which the database and cache can be updated:
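The read flow above can be sketched with in-memory maps standing in for Redis and the database (the `cache` and `db` names are illustrative placeholders, not a real client API):

```java
import java.util.HashMap;
import java.util.Map;

public class ReadFlow {
    // In-memory stand-ins for Redis and the database (illustrative only).
    static Map<String, String> cache = new HashMap<>();
    static Map<String, String> db = new HashMap<>();

    // Typical cache-backed read: try the cache first, fall back to the DB on a miss.
    static String read(String key) {
        String value = cache.get(key);
        if (value == null) {            // cache miss
            value = db.get(key);        // fall back to the database
            if (value != null) {
                cache.put(key, value);  // populate the cache for subsequent reads
            }
        }
        return value;
    }
}
```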

Update the database first, then update the cache.

Delete the cache first, then update the database.

Update the database first, then delete the cache.

2 Consistency Solutions

2.1 Cache TTL

A simple method is to set a TTL on cache entries for non‑critical data, allowing the cache to expire and refresh from the database automatically.
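As a minimal sketch of the TTL idea, the snippet below emulates Redis-style expiry in memory (a real system would use the Redis `SET key value EX seconds` option instead; `TtlCache` and its methods are hypothetical names for illustration):

```java
import java.util.HashMap;
import java.util.Map;

public class TtlCache {
    // A value plus its absolute expiry timestamp, emulating "SET key value EX ttl".
    static class Entry {
        final String value;
        final long expiresAt;
        Entry(String value, long expiresAt) { this.value = value; this.expiresAt = expiresAt; }
    }

    static Map<String, Entry> cache = new HashMap<>();

    static void setWithTtl(String key, String value, long ttlMillis) {
        cache.put(key, new Entry(value, System.currentTimeMillis() + ttlMillis));
    }

    // Returns null once the TTL has elapsed, forcing the caller to refresh from the DB.
    static String get(String key) {
        Entry e = cache.get(key);
        if (e == null || System.currentTimeMillis() > e.expiresAt) {
            cache.remove(key);
            return null;
        }
        return e.value;
    }
}
```

Because stale data survives only until expiry, this gives bounded (not immediate) consistency, which is why the text restricts it to non-critical data.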

2.2 Delayed Double‑Delete Strategy

This approach deletes the cache, updates the database, sleeps for a short period, and deletes the cache again to avoid stale data caused by network delays.

public void write(String key, Object value) throws InterruptedException {
    redis.delKey(key);      // first delete: drop the stale cache entry
    db.updateValue(value);  // update the database
    Thread.sleep(1000);     // wait for in-flight reads to complete
    redis.delKey(key);      // delete again, clearing any stale value re-cached by a racing read
}

The sleep duration should be tuned to the business logic latency, ensuring that read requests finish before the second delete occurs.

2.3 Asynchronous Second Delete

To avoid performance impact from the sleep, the second delete can be performed asynchronously in a separate thread.
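A minimal sketch of this variant, assuming the same in-memory stand-ins for Redis and the database, uses a `ScheduledExecutorService` so the writing thread returns immediately instead of sleeping:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class AsyncDoubleDelete {
    // In-memory stand-ins for Redis and the database (illustrative only).
    static Map<String, String> cache = new ConcurrentHashMap<>();
    static Map<String, String> db = new ConcurrentHashMap<>();
    static ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

    // Delete the cache, update the DB, then schedule the second delete on a
    // background thread instead of blocking the caller with Thread.sleep.
    static void write(String key, String value, long delayMillis) {
        cache.remove(key);   // first delete
        db.put(key, value);  // update the database
        scheduler.schedule(() -> cache.remove(key), delayMillis, TimeUnit.MILLISECONDS);
    }
}
```

The delay parameter plays the same role as the sleep in the synchronous version: it must outlast the slowest concurrent read that might re-cache stale data.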

2.4 Cache‑Aside Pattern (Update‑Then‑Invalidate)

Widely used at Facebook, this pattern consists of three steps:

Miss: Application reads from the cache; on a miss, it fetches from the database and populates the cache.

Hit: Application reads from the cache and returns the value.

Update: Application writes to the database first, then invalidates the cache.

If the cache invalidation fails after the database write, readers continue to see the stale cached value, so inconsistency may still occur.
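The three steps above can be sketched as follows, again with in-memory maps as hypothetical stand-ins for Redis and the database:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class CacheAside {
    // In-memory stand-ins for Redis and the database (illustrative only).
    static Map<String, String> cache = new ConcurrentHashMap<>();
    static Map<String, String> db = new ConcurrentHashMap<>();

    // Read path: a hit returns the cached value; a miss loads from the DB
    // and populates the cache.
    static String read(String key) {
        String value = cache.get(key);
        if (value != null) return value;  // hit
        value = db.get(key);              // miss: load from the database
        if (value != null) cache.put(key, value);
        return value;
    }

    // Update path: write the database first, then invalidate the cache.
    static void update(String key, String value) {
        db.put(key, value);
        cache.remove(key);  // delete rather than update the cached value
    }
}
```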

2.5 Message Queue Confirmation

Using a message queue’s consumption acknowledgment to trigger cache deletion ensures eventual consistency but introduces middleware overhead and potential short‑term inconsistency due to message delay.
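A minimal sketch of this flow, using a `BlockingQueue` as a hypothetical stand-in for a real broker (Kafka, RocketMQ, RabbitMQ), shows the producer publishing an invalidation message after the database write and the consumer acknowledging only when the delete succeeds:

```java
import java.util.Map;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;

public class MqInvalidation {
    // In-memory stand-ins for Redis and the database (illustrative only).
    static Map<String, String> cache = new ConcurrentHashMap<>();
    static Map<String, String> db = new ConcurrentHashMap<>();
    // Stand-in for a message broker: each message carries the key to invalidate.
    static BlockingQueue<String> queue = new ArrayBlockingQueue<>(100);

    // Producer side: update the database, then publish an invalidation message.
    static void update(String key, String value) throws InterruptedException {
        db.put(key, value);
        queue.put(key);
    }

    // Consumer side: delete the cache entry; a successful delete stands in for
    // the acknowledgment, and a failure requeues the message for retry.
    static void consumeOne() throws InterruptedException {
        String key = queue.take();
        try {
            cache.remove(key);  // delete succeeded -> message acknowledged
        } catch (RuntimeException e) {
            queue.put(key);     // delete failed -> redeliver
        }
    }
}
```

The window between the database write and the consumer processing the message is exactly the short-term inconsistency the text mentions.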

2.6 Dedicated Program + Binlog Subscription

A dedicated subscriber (e.g., Canal) monitors MySQL binlog events; the application receives these events and deletes the corresponding cache entries, providing a reliable delete‑on‑write mechanism.
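The mechanism can be sketched with a hypothetical event shape; a real subscriber such as Canal would parse the table name and primary key out of each binlog row change, but here the event list and `RowChange` type are illustrative stand-ins:

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;

public class BinlogInvalidation {
    // Hypothetical binlog row-change event: table name plus primary key.
    static class RowChange {
        final String table;
        final String primaryKey;
        RowChange(String table, String primaryKey) { this.table = table; this.primaryKey = primaryKey; }
    }

    // In-memory stand-in for Redis (illustrative only).
    static Map<String, String> cache = new ConcurrentHashMap<>();
    // Stand-in for the binlog stream the subscriber consumes.
    static List<RowChange> pendingEvents = new CopyOnWriteArrayList<>();

    // The DB layer (or binlog parser) appends one event per row change.
    static void onDatabaseWrite(String table, String pk) {
        pendingEvents.add(new RowChange(table, pk));
    }

    // Subscriber loop body: turn each row change into a cache delete.
    static void drainEvents() {
        for (RowChange rc : pendingEvents) {
            cache.remove(rc.table + ":" + rc.primaryKey);
        }
        pendingEvents.clear();
    }
}
```

Because the binlog records every committed write, this decouples cache deletion from application code: the application only writes the database, and the subscriber guarantees the delete eventually happens.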

3 Summary

Analysis shows that cache updates should be performed by deleting the cache rather than updating it directly. Three practical methods are:

Set a TTL for non‑sensitive data.

Delete the cache before updating the database, optionally using a delayed double‑delete to handle race conditions.

Update the database first, then delete the cache using binlog consumption and/or message‑queue mechanisms.


Written by Full-Stack Internet Architecture

Introducing full-stack Internet architecture technologies centered on Java