
Ensuring Data Consistency Between MySQL and Redis in High‑Concurrency Scenarios

The article explains why data inconsistency occurs between MySQL and Redis under high concurrency, analyzes cache‑delete timing issues, and presents two solutions—delayed double‑delete and asynchronous cache updates via MySQL binlog—detailing implementation steps, advantages, drawbacks, and practical considerations.


1. Causes of Data Inconsistency

In high‑concurrency scenarios, massive requests hitting MySQL directly can cause performance problems, so Redis is used as a cache to reduce database load. However, MySQL and Redis are two independent storage systems, so keeping the data they hold consistent becomes a critical problem.

1.1 Cache Delete Order Issues

Whether the cache is deleted before or after the database write, inconsistencies can arise.

1.2 Deleting Cache First

If Redis cache is deleted before MySQL is updated, another thread may read stale data and repopulate the cache with dirty data.

Subsequent database updates then cause MySQL and Redis to diverge.
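The race above can be replayed deterministically. In this minimal sketch, two HashMaps stand in for MySQL and Redis, the key "user:1" is hypothetical, and the reader's steps are interleaved between the writer's cache delete and database update:

```java
import java.util.HashMap;
import java.util.Map;

// Deterministic replay of the "delete cache first" race.
public class DeleteFirstRace {
    static Map<String, String> mysql = new HashMap<>();
    static Map<String, String> redis = new HashMap<>();

    static void replay() {
        mysql.put("user:1", "old");
        redis.put("user:1", "old");

        redis.remove("user:1");                    // writer: delete cache first

        // reader interleaves: cache miss -> reads the still-stale row -> repopulates cache
        redis.put("user:1", mysql.get("user:1"));

        mysql.put("user:1", "new");                // writer: update database last
    }

    public static void main(String[] args) {
        replay();
        // cache and database now diverge
        System.out.println("redis=" + redis.get("user:1") + " mysql=" + mysql.get("user:1"));
    }
}
```

Running the replay prints `redis=old mysql=new`: the cache holds dirty data until it is evicted or expires.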

1.3 Deleting Cache After Write

If the cache delete fails after the database has been updated, the stale cache remains and continues serving old data.

Concurrent reads and writes cannot guarantee order, causing mismatched data.

2. Solutions

2.1 Delayed Double‑Delete Strategy

2.1.1 Basic Idea

Perform redis.del(key) both before and after the database write, with a reasonable delay between the two deletions.

Pseudo‑code:

public void write(String key, Object data) {
    redis.delKey(key);          // first delete: invalidate before the write
    db.updateData(data);        // write the new value to MySQL
    try {
        Thread.sleep(500);      // wait for in-flight reads to finish repopulating
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    }
    redis.delKey(key);          // second delete: evict any stale repopulation
}

2.1.2 Specific Steps

Delete the cache.

Write to the database.

Sleep for a configurable number of milliseconds (e.g., 500 ms or 1 s) based on business latency.

Delete the cache again.

2.1.3 Setting Cache Expiration

Assigning an expiration time to cache entries ensures eventual consistency: the cache will be refreshed from the database after expiration.
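With a real Redis client this is a TTL on the key (for example, SETEX). The sketch below simulates the mechanism with an in-memory map so the eventual-consistency behavior is visible; the entry layout, key name, and 60-second TTL are illustrative assumptions:

```java
import java.util.HashMap;
import java.util.Map;

// Read-through cache with expiration, using a map in place of Redis.
public class TtlCache {
    static class Entry { String value; long expiresAt; }
    static Map<String, Entry> cache = new HashMap<>();

    static void put(String key, String value, long ttlMillis, long now) {
        Entry e = new Entry();
        e.value = value;
        e.expiresAt = now + ttlMillis;
        cache.put(key, e);
    }

    // Returns the cached value, or null on miss/expiry so the caller
    // falls through to MySQL and repopulates the cache with fresh data.
    static String get(String key, long now) {
        Entry e = cache.get(key);
        if (e == null || now >= e.expiresAt) {
            cache.remove(key);
            return null;
        }
        return e.value;
    }

    public static void main(String[] args) {
        put("user:1", "stale", 60_000, 0);
        System.out.println(get("user:1", 30_000));  // within TTL: returns "stale"
        System.out.println(get("user:1", 61_000));  // expired: null -> reload from MySQL
    }
}
```

Even if a dirty value slips into the cache, it can survive at most one TTL window before a reload from MySQL corrects it.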

2.1.4 Drawbacks

The worst case is temporary inconsistency during the sleep window, and the sleep itself adds latency to every write (this is often mitigated by performing the second delete asynchronously on a separate thread).

2.1.5 Additional Considerations

If cache deletion fails, a retry mechanism via a message queue (e.g., Kafka, RabbitMQ) can be employed, forming a “best‑effort notification” pattern to achieve eventual consistency.
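The retry pattern can be sketched as follows. An ArrayDeque stands in for the message queue, the Redis delete is simulated (here it fails on the first two attempts), and the retry budget of 5 is an illustrative assumption:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// A failed cache delete is published to a queue and retried until it
// succeeds or the retry budget is exhausted.
public class DeleteRetry {
    static Deque<String> retryQueue = new ArrayDeque<>();
    static int deleteAttempts = 0;

    // Simulated Redis delete that fails on the first two attempts.
    static boolean tryDelete(String key) {
        deleteAttempts++;
        return deleteAttempts >= 3;
    }

    static void deleteWithRetry(String key, int maxRetries) {
        if (tryDelete(key)) return;
        retryQueue.add(key);                       // publish the failure to the queue
        for (int i = 0; i < maxRetries && !retryQueue.isEmpty(); i++) {
            String k = retryQueue.poll();          // consumer picks up the message
            if (!tryDelete(k)) retryQueue.add(k);  // still failing: requeue it
        }
    }

    public static void main(String[] args) {
        deleteWithRetry("user:1", 5);
        System.out.println("attempts=" + deleteAttempts + " pending=" + retryQueue.size());
    }
}
```

In production the queue would be durable (Kafka, RabbitMQ) so the delete survives process crashes, which is what makes the notification “best effort” rather than fire-and-forget.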

2.2 Asynchronous Cache Update via MySQL Binlog

2.2.1 Overall Idea

Capture data‑changing operations using MySQL binlog.

Publish binlog events to a message queue.

Consume the events and update Redis accordingly.

Read operations always hit Redis (hot data). Write operations modify MySQL, and the binlog‑driven updates keep Redis in sync.

2.2.2 Redis Update Process

Updates can be full (bulk load) or incremental (real‑time). Incremental updates listen to binlog events (INSERT, UPDATE, DELETE) and push changes to Redis via the message queue.
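The consumer side of the incremental path can be sketched as an event dispatcher. The binlog parsing itself (done by a listener such as Canal or Debezium) is not shown; the flat (type, key, value) event shape and the key name are simplifying assumptions, a HashMap stands in for Redis:

```java
import java.util.HashMap;
import java.util.Map;

// Applies binlog change events, delivered via the message queue, to Redis.
public class BinlogConsumer {
    static Map<String, String> redis = new HashMap<>();

    static void apply(String type, String key, String value) {
        switch (type) {
            case "INSERT":
            case "UPDATE":
                redis.put(key, value);   // refresh the cache with the new row
                break;
            case "DELETE":
                redis.remove(key);       // drop the cached copy of the deleted row
                break;
        }
    }

    public static void main(String[] args) {
        apply("INSERT", "user:1", "alice");
        apply("UPDATE", "user:1", "alice-v2");
        apply("DELETE", "user:1", null);
        System.out.println(redis.containsKey("user:1")); // false
    }
}
```

Because events for one row arrive from the binlog in commit order, replaying them in sequence keeps Redis converging toward the MySQL state without the application code touching the cache on writes.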

3. Summary

In high‑concurrency applications where strong data consistency is required, identify the root causes of MySQL‑Redis inconsistency and apply either the delayed double‑delete strategy or asynchronous binlog‑driven cache updates, possibly combined with message‑queue‑based retry mechanisms to achieve eventual consistency.

Tags: Redis, high concurrency, MySQL, binlog, cache consistency, async update, double-delete
Written by

Selected Java Interview Questions

A professional Java tech channel sharing common knowledge to help developers fill gaps. Follow us!
