How to Keep Database and Redis Cache Consistent Under High Concurrency

This article examines the common data‑consistency challenges when writing to both a database and a Redis cache, evaluates four write‑order strategies, and presents the most reliable approach—writing to the database first then deleting the cache—along with retry mechanisms using scheduled jobs, message queues, and binlog listeners.

Java Tech Enthusiast

Database and cache (e.g., Redis) double‑write consistency is a language‑agnostic problem that becomes critical in high‑concurrency scenarios. The article first outlines the typical cache‑first read flow and points out that if a record is updated in the database after being cached, the cache can become stale.
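The read flow described above is commonly called cache-aside. A minimal sketch, using in-memory maps as illustrative stand-ins for Redis and the database (a real service would use a Redis client and a DAO):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class CacheAsideRead {
    // Stand-ins for Redis and MySQL; names and types are illustrative only.
    static final Map<String, String> cache = new ConcurrentHashMap<>();
    static final Map<String, String> database = new ConcurrentHashMap<>();

    public static String read(String key) {
        String value = cache.get(key);   // 1. try the cache first
        if (value != null) {
            return value;                // cache hit
        }
        value = database.get(key);       // 2. cache miss: load from the database
        if (value != null) {
            cache.put(key, value);       // 3. backfill the cache for later reads
        }
        return value;
    }
}
```

The staleness problem arises exactly at step 3: once a value is backfilled, a later database update leaves the cached copy out of date unless some write strategy evicts or refreshes it.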

Common Solutions

Write cache first, then write database

Write database first, then write cache

Delete cache first, then write database

Write database first, then delete cache

1. Write Cache First, Then Database

This approach can produce dirty data when the cache write succeeds but the database write fails (e.g., network outage). The cache would contain a value that does not exist in the database, leading to severe consistency errors.

2. Write Database First, Then Cache

While avoiding the "dirty cache" issue, this method introduces two major problems in high‑concurrency environments:

Cache‑write failure: If the cache update fails after the database commit, the cache holds old data while the DB holds new data.

Resource waste: Every write forces an immediate cache update, which can be costly if the cached value requires heavy computation.

Under concurrent writes, race conditions can cause the newer DB value to be overwritten by an older cache write, resulting in inconsistency.

3. Delete Cache First, Then Write Database

Deleting the cache before the DB write can still lead to inconsistency when a read request occurs between the delete and the DB write, causing the stale DB value to be cached again.

Cache Double Delete

To mitigate the race, the cache is deleted twice: once before the DB write and once after it, with a short delay (e.g., 500 ms) before the second delete so that any stale value backfilled by an interleaved read is also evicted.
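A sketch of the delayed double delete, again with in-memory maps standing in for Redis and the database. The 500 ms figure matches the one quoted above; in practice the delay should exceed one read round-trip in your system. All names here are illustrative.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class DoubleDelete {
    static final Map<String, String> cache = new ConcurrentHashMap<>();    // stands in for Redis
    static final Map<String, String> database = new ConcurrentHashMap<>(); // stands in for MySQL
    // Daemon thread so the scheduler does not keep the JVM alive.
    static final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor(r -> {
                Thread t = new Thread(r);
                t.setDaemon(true);
                return t;
            });

    public static void update(String key, String value) {
        cache.remove(key);            // 1. first delete, before the DB write
        database.put(key, value);     // 2. write the database
        // 3. second delete after a short delay, to evict any stale value that a
        //    concurrent read backfilled between steps 1 and 2
        scheduler.schedule(() -> cache.remove(key), 500, TimeUnit.MILLISECONDS);
    }
}
```

The second delete can also be dispatched asynchronously via a message queue instead of a local scheduler, which survives process restarts.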

4. Write Database First, Then Delete Cache (Recommended)

This strategy minimizes the probability of inconsistency. The DB is updated first, then the cache entry is removed. If a read occurs before the cache is deleted, it may return stale data, but the subsequent delete cleans it up. The remaining edge case, where the cache entry expires exactly as a read-write race occurs, is extremely rare.

It is recommended to use the "write DB then delete cache" approach; although it cannot guarantee 100% consistency, its failure probability is the lowest among the alternatives.
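The recommended order can be sketched as follows; the maps are in-memory stand-ins for MySQL and Redis, and a real implementation would also trigger one of the retry mechanisms described below if the eviction fails.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class WriteThenEvict {
    static final Map<String, String> cache = new ConcurrentHashMap<>();    // stands in for Redis
    static final Map<String, String> database = new ConcurrentHashMap<>(); // stands in for MySQL

    public static void update(String key, String value) {
        database.put(key, value);    // 1. write the database first
        cache.remove(key);           // 2. then delete (not update) the cache entry;
                                     //    the next read lazily reloads the new value
    }
}
```

Deleting rather than rewriting the cache also avoids the wasted work noted in section 2: the expensive cached value is only recomputed when someone actually reads it.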

Handling Cache‑Delete Failures

If cache deletion fails, a retry mechanism is required:

Synchronous retry (up to 3 times) – may affect performance under high load.

Asynchronous retry – preferred for high‑throughput services.

Asynchronous retry options include:

Spawning a dedicated thread per retry (risk of OOM).

Using a thread pool (risk of data loss on restart).

Writing to a retry table and using elastic‑job for scheduled retries.

Sending a retry message to a message queue (MQ) and handling it in a consumer.

Subscribing to MySQL binlog and deleting the cache when an update event is observed.

5. Scheduled‑Task Retry

When a cache delete fails, the record is inserted into a retry table. A scheduled task reads the table, attempts up to five deletions, increments a retry counter, and marks the record as failed after exhausting attempts. elastic‑job is suggested for its sharding capabilities.
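A sketch of the retry-table mechanic. `RetryRecord` stands in for a row in the retry table, and the list and map stand in for the table and Redis; in production, `runOnce` would be the body of an elastic-job shard. All names are illustrative.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class CacheDeleteRetryJob {
    static class RetryRecord {
        final String key;
        int attempts = 0;
        boolean failed = false;
        RetryRecord(String key) { this.key = key; }
    }

    static final Map<String, String> cache = new ConcurrentHashMap<>(); // stands in for Redis
    static final List<RetryRecord> retryTable = new ArrayList<>();      // stands in for the retry table
    static final int MAX_ATTEMPTS = 5;

    // Called by the scheduled job: retry each pending delete, marking the
    // record failed once it has exhausted its five attempts.
    public static void runOnce() {
        for (RetryRecord r : retryTable) {
            if (r.failed || !cache.containsKey(r.key)) continue; // done or given up
            r.attempts++;
            boolean deleted = tryDelete(r.key);
            if (!deleted && r.attempts >= MAX_ATTEMPTS) {
                r.failed = true; // surface for manual handling / alerting
            }
        }
    }

    static boolean tryDelete(String key) {
        cache.remove(key);
        return true; // a real Redis client reports whether the DEL succeeded
    }
}
```

Elastic-job's sharding lets several instances split the retry table between them, so a large backlog of failed deletes is drained in parallel.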

6. MQ‑Based Retry

After a failed cache delete, an MQ message is produced. The consumer retries the deletion up to five times; on persistent failure, the message is moved to a dead-letter queue. RocketMQ is recommended for its built-in retry and DLQ support.
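The producer/consumer flow can be sketched with `BlockingQueue`s standing in for a RocketMQ topic and its dead-letter queue; in real RocketMQ the retry count comes from the broker's redelivery mechanism rather than a local loop. Names are illustrative.

```java
import java.util.Map;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.LinkedBlockingQueue;

public class MqCacheDeleteRetry {
    static final Map<String, String> cache = new ConcurrentHashMap<>();              // stands in for Redis
    static final BlockingQueue<String> retryTopic = new LinkedBlockingQueue<>();     // stands in for the MQ topic
    static final BlockingQueue<String> deadLetterQueue = new LinkedBlockingQueue<>();// stands in for the DLQ
    static final int MAX_RETRIES = 5;

    // Producer side: called when the in-line cache delete fails.
    public static void publishRetry(String key) {
        retryTopic.offer(key);
    }

    // Consumer side: retry the delete up to five times, then dead-letter it.
    public static void consume(String key) {
        for (int attempt = 1; attempt <= MAX_RETRIES; attempt++) {
            if (tryDelete(key)) return;   // success: ack and stop
        }
        deadLetterQueue.offer(key);       // exhausted: park for manual inspection
    }

    static boolean tryDelete(String key) {
        cache.remove(key);
        return true; // a real Redis client reports whether the DEL succeeded
    }
}
```

Unlike a local thread pool, the queue persists the pending delete across restarts, which is why the article prefers MQ over in-process retries.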

7. Binlog Listener Approach

Instead of embedding retry logic in business code, a binlog subscriber (e.g., using canal) watches MySQL binlog events. After a DB update, the subscriber deletes the cache. If the delete fails, the same retry mechanisms (scheduled task or MQ) can be applied.
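In this design the business code never touches the cache on writes; eviction happens in a callback driven by binlog events. Below, `onRowUpdated` is a hypothetical callback that a binlog subscriber (e.g. a canal client) would invoke per UPDATE row event; the `table:id` key scheme is an assumption for illustration.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class BinlogCacheEvictor {
    static final Map<String, String> cache = new ConcurrentHashMap<>(); // stands in for Redis

    // Invoked by the binlog subscriber after MySQL commits an update.
    public static void onRowUpdated(String table, String primaryKey) {
        String cacheKey = table + ":" + primaryKey; // illustrative key scheme
        if (!tryDelete(cacheKey)) {
            // On failure, reuse the retry paths above: insert into the
            // retry table, or publish a retry message to the MQ.
        }
    }

    static boolean tryDelete(String key) {
        cache.remove(key);
        return true; // a real Redis client reports whether the DEL succeeded
    }
}
```

Because the eviction is driven by the committed binlog, it cannot run ahead of the database write, which removes the ordering races of the in-line approaches.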

Overall, the article provides a comprehensive comparison of double‑write strategies, highlights pitfalls in high‑concurrency environments, and offers practical solutions—double delete with delay, asynchronous retries, scheduled jobs, MQ, and binlog listeners—to achieve robust DB‑cache consistency.

Republication Notice

This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact admin@besthub.dev and we will review it promptly.

Tags: Redis, High Concurrency, binlog, RocketMQ, Cache Consistency, retry mechanism, Elastic-Job, database write order
Written by Java Tech Enthusiast

Sharing computer programming language knowledge, focusing on Java fundamentals, data structures, related tools, Spring Cloud, IntelliJ IDEA... Book giveaways, red‑packet rewards and other perks await!
