How to Ensure Cache‑Database Consistency: Strategies, Pitfalls & Best Practices

This article examines common cache‑and‑database consistency approaches, compares their drawbacks, and recommends a delayed double‑delete strategy with practical code examples to keep data reliable in read‑heavy, write‑light systems.


Introduction

A frequently asked interview question: how do we guarantee consistency between the cache and the database? Several practical solutions are outlined below, each with its own advantages and risks.

Scheme Analysis

Four typical update orders are considered:

Update cache first, then update the database.

Update the database first, then update the cache (dual‑write).

Delete the cache first, then update the database.

Update the database first, then delete the cache.

Each scheme is explained in detail.

Scheme 1: Update Cache → Update Database

This approach is unsafe because if the cache update succeeds but the database update fails, the data becomes inconsistent.

Scheme 2: Update Database → Update Cache

Known as dual‑write. In concurrent update scenarios, stale data can be written to the cache.

updateDB();
updateRedis();

If another request modifies the data between the two operations, the cache will hold outdated information.
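This interleaving can be replayed deterministically with plain maps standing in for the database and Redis (a sketch; the key name and values are illustrative):

```java
import java.util.concurrent.ConcurrentHashMap;

// Replay of the dual-write race (Scheme 2): request A and request B both
// update the same key, and A's cache write lands last with a stale value.
public class DualWriteRace {
    static ConcurrentHashMap<String, String> db = new ConcurrentHashMap<>();
    static ConcurrentHashMap<String, String> cache = new ConcurrentHashMap<>();

    public static void main(String[] args) {
        db.put("price", "100");      // A: updateDB() writes 100
        db.put("price", "200");      // B: updateDB() writes 200
        cache.put("price", "200");   // B: updateRedis() writes 200
        cache.put("price", "100");   // A: updateRedis() finally runs, writing stale 100
        System.out.println("db=" + db.get("price") + " cache=" + cache.get("price"));
        // db=200 cache=100 -> the cache serves outdated data until the next write
    }
}
```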

Scheme 3: Delete Cache → Update Database

After the cache is deleted but before the database is updated, a concurrent read request can miss the cache, fetch the stale value from the database, and write it back into the cache.

deleteRedis();
updateDB();

Example: a read request between the two operations stores old data in the cache.
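The same race can be sketched sequentially (maps again stand in for Redis and the database; key and values are illustrative):

```java
import java.util.concurrent.ConcurrentHashMap;

// Replay of the Scheme 3 race: a read slips in between the cache delete
// and the database update, repopulating the cache with the old value.
public class DeleteThenUpdateRace {
    static ConcurrentHashMap<String, String> db = new ConcurrentHashMap<>();
    static ConcurrentHashMap<String, String> cache = new ConcurrentHashMap<>();

    public static void main(String[] args) {
        db.put("price", "100");
        cache.put("price", "100");

        cache.remove("price");           // writer: deleteRedis()
        String stale = db.get("price");  // reader: cache miss, loads old value 100
        cache.put("price", stale);       // reader: writes 100 back into the cache
        db.put("price", "200");          // writer: updateDB() finally lands

        System.out.println("db=" + db.get("price") + " cache=" + cache.get("price"));
        // db=200 cache=100 -> stale data stays until the key expires
    }
}
```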

Scheme 4: Update Database → Delete Cache

This is the safest of the four orders, but a narrow race remains: if the cache key has already expired, a read can miss the cache and fetch the old value from the database; the writer then updates the database and deletes the (already empty) cache; finally the read writes the old value back into the cache, causing inconsistency.

updateDB();
deleteRedis();

Comparison of Schemes

Schemes 1 and 2 share the drawback of possible dirty data in concurrent write scenarios. Schemes 3 and 4 suffer from master‑slave replication delay and cache‑deletion failures, both of which can also produce stale data.

To mitigate these issues, a delayed double‑delete strategy with retry mechanisms is proposed.

Recommended Scheme: Delayed Double Delete

Apply a two‑phase cache deletion around the database update.

public void write(String key, Object data) throws InterruptedException {
  redis.del(key);       // first delete
  db.update(data);      // update the database
  Thread.sleep(1000);   // pause to let in-flight stale reads finish
  redis.del(key);       // second delete clears any stale repopulation
}

Delete the cache first.

Write to the database.

After a short pause, delete the cache again.

This ensures that a read which began before the update cannot leave stale data behind: even if it writes an old value back into the cache after the first delete, the second delete removes it.

Implementation tips:

Make the second deletion asynchronous (e.g., submit to a delayed task queue).

Handle deletion failures with a retry mechanism or by pushing failed keys to a message queue.

Common delay tools include Thread.sleep, JDK scheduled thread pools, Quartz, DelayQueue, Netty's HashedWheelTimer, and RabbitMQ delayed queues.
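As a sketch of the asynchronous tip above, the second delete can be handed to a JDK scheduled thread pool so the writer never blocks (in-memory maps stand in for Redis and the database; the 100 ms delay is shortened for the demo, and in production it is tuned to cover read latency):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Delayed double delete with an asynchronous second deletion.
public class DelayedDoubleDelete {
    static final ConcurrentHashMap<String, String> cache = new ConcurrentHashMap<>();
    static final ConcurrentHashMap<String, String> db = new ConcurrentHashMap<>();
    static final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    static void write(String key, String data) {
        cache.remove(key);                       // first delete
        db.put(key, data);                       // database update
        // second delete runs later without blocking the caller
        scheduler.schedule(() -> { cache.remove(key); }, 100, TimeUnit.MILLISECONDS);
    }

    public static void main(String[] args) throws InterruptedException {
        cache.put("sku:1", "old");
        write("sku:1", "new");
        cache.put("sku:1", "old");   // a stale read sneaks the old value back in
        Thread.sleep(300);           // wait past the delayed second delete
        System.out.println("db=" + db.get("sku:1") + " cache=" + cache.get("sku:1"));
        // db=new cache=null -> the stale repopulation was cleaned up
        scheduler.shutdown();
    }
}
```

Failed deletions would still need the retry or message-queue fallback described above; this sketch only covers the happy path.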

Real‑World Scenario

In a product‑center service with read‑heavy, write‑light traffic, strict cache‑DB consistency is required because write operations trigger downstream MQ notifications.

Write‑Cache Strategies

Set cache key expiration time.

Perform DB operation first, then invalidate the cache.

Mark writes to force primary DB usage (e.g., using Meituan middleware).

Listen to binlog changes via middleware and delete cache as a fallback.

Read‑Cache Strategies

Determine whether to query the primary DB.

If primary DB is required, use the middleware marker to read from it.

Otherwise, check the cache for data.

If cache hit, return cached data.

If cache miss, read from DB and populate the cache.
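The read path above can be sketched as follows (maps stand in for Redis, the primary database, and the replica; the forcePrimary flag models the middleware marker, and all names are illustrative):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Cache-aside read path: marked requests go straight to the primary,
// everything else tries the cache and falls back to the replica.
public class ReadPath {
    static final Map<String, String> cache = new ConcurrentHashMap<>();
    static final Map<String, String> primaryDb = new ConcurrentHashMap<>();
    static final Map<String, String> replicaDb = new ConcurrentHashMap<>();

    static String read(String key, boolean forcePrimary) {
        if (forcePrimary) {
            return primaryDb.get(key);        // middleware marker: read the primary
        }
        String cached = cache.get(key);
        if (cached != null) {
            return cached;                    // cache hit
        }
        String fromDb = replicaDb.get(key);   // cache miss: read the replica
        if (fromDb != null) {
            cache.put(key, fromDb);           // repopulate (set a TTL in real Redis)
        }
        return fromDb;
    }

    public static void main(String[] args) {
        primaryDb.put("sku:1", "v2");
        replicaDb.put("sku:1", "v1");         // replica lags behind the primary
        System.out.println(read("sku:1", true));   // v2 from the primary
        System.out.println(read("sku:1", false));  // v1 from the replica, now cached
    }
}
```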

Important Note on Cache Expiration

If a cache key has an expiration time, any inconsistency is temporary. Without expiration, stale data persists until the next update, so always set an appropriate TTL.

Conclusion

For read‑heavy, write‑light services, the delayed double‑delete method offers a reliable way to keep cache and database in sync while minimizing performance impact.

Tags: Cache Consistency, Delayed Double Delete, Read‑Write Strategy
Written by NiuNiu MaTe

Joined Tencent (nicknamed "Goose Factory") through campus recruitment at a second‑tier university. Career path: Tencent → foreign firm → ByteDance → Tencent. Started as an interviewer at the foreign firm and hopes to help others.
