
Ensuring Data Consistency Between MySQL and Redis: Strategies for Single‑Threaded and Multi‑Threaded Scenarios

This article explains what data consistency means for MySQL and Redis, analyzes inconsistency cases in both single‑threaded and concurrent environments, and proposes practical strategies—including read‑only and read‑write cache handling, message‑queue retries, binlog subscription, delayed double‑delete, and distributed locking—to achieve eventual or strong consistency.

High Availability Architecture

Data consistency generally means that the value stored in the cache equals the value stored in the database; it can be achieved either when the cache already contains the data and matches the DB, or when the cache is empty and the DB holds the latest value.

Inconsistency occurs when the cache value differs from the DB value, or when stale values exist in either layer, causing other threads to read outdated data.

Caches can be classified as read‑only (only read operations, using an "update DB + delete cache" strategy) or read‑write (supporting CRUD operations, using an "update DB + update cache" strategy).

(1) Read‑only cache strategy (update DB + delete cache)

When adding new data, write directly to the DB; the cache will miss and later be populated from the DB, keeping consistency. When updating or deleting data, the order of updating the DB and deleting the cache can cause inconsistency:

Delete cache first, then update DB – between the cache delete and the DB write, a concurrent read misses the cache, loads the old value from the DB, and writes it back into the cache, leaving stale data there even after the DB update completes.

Update DB first, then delete cache – a concurrent read can still serve the old cached value during the short window between the DB write and the cache delete; because that window is brief, this order is generally preferred.
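The preferred "update DB first, then delete cache" order, plus the read path that repopulates the cache on a miss, can be sketched with in-memory maps standing in for MySQL and Redis (the db/cache maps and method names here are illustrative, not a real client API):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class CacheOrdering {
    // In-memory stand-ins for the database and the cache.
    static final Map<String, String> db = new ConcurrentHashMap<>();
    static final Map<String, String> cache = new ConcurrentHashMap<>();

    // Preferred order: persist to the DB first, then invalidate the cache.
    // A cache miss afterwards repopulates from the already-updated DB row.
    static void update(String key, String value) {
        db.put(key, value);   // 1. update DB
        cache.remove(key);    // 2. delete cache
    }

    static String read(String key) {
        String v = cache.get(key);
        if (v == null) {                        // cache miss: fall back to DB
            v = db.get(key);
            if (v != null) cache.put(key, v);   // repopulate the cache
        }
        return v;
    }
}
```

A read issued after `update` either hits the freshly repopulated cache or misses and loads the new DB value, so steady-state reads converge on the latest data.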

To mitigate these issues, the article suggests repeating the cache delete (delayed double delete) and, when a delete fails, retrying it asynchronously via a message queue:

    redis.delKey(X)    // 1. delete the cache
    db.update(X)       // 2. update the database
    Thread.sleep(N)    // 3. wait for in-flight reads to finish
    redis.delKey(X)    // 4. delete the cache again

The steps are: put the delete or update request in a queue (e.g., Kafka), attempt the operation, and on failure retry until it succeeds or a maximum retry count is reached; on success remove the message from the queue, otherwise raise an alert for manual intervention.
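The retry loop of the queue consumer can be sketched as follows; `deleteFromRedis` is a placeholder for the real cache-delete call, passed in as a predicate that returns false on failure:

```java
import java.util.function.Predicate;

public class RetryConsumer {
    static final int MAX_RETRIES = 3;

    // Returns true if the cache operation eventually succeeded (message can be
    // acked/removed from the queue), false if retries were exhausted
    // (dead-letter the message / raise an alert).
    static boolean processWithRetry(String key, Predicate<String> deleteFromRedis) {
        for (int attempt = 1; attempt <= MAX_RETRIES; attempt++) {
            if (deleteFromRedis.test(key)) {
                return true;                       // success
            }
            if (attempt < MAX_RETRIES) {
                try {
                    Thread.sleep(100L * attempt);  // simple linear backoff
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return false;
                }
            }
        }
        return false;                              // exhausted retries
    }
}
```

In a real deployment the loop would run inside the Kafka consumer and the ack/dead-letter decision would map onto the queue's own delivery semantics.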

Another approach is to subscribe to MySQL binlog changes (using tools like Canal), push the changes to a queue, and asynchronously update or delete the Redis cache.
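The handler on the receiving end of that pipeline can be sketched as below. The `BinlogEvent` record and `applyToCache` are hypothetical; in a real deployment the events would come from Canal through the queue:

```java
public class BinlogCacheSync {
    // Simplified change event as it might arrive from the binlog pipeline.
    record BinlogEvent(String table, String key, String type) {} // type: INSERT/UPDATE/DELETE

    // Decide the cache action for a given binlog event.
    static String applyToCache(BinlogEvent e) {
        switch (e.type()) {
            case "INSERT":
                return "NONE";              // cache fills lazily on first read
            case "UPDATE":
            case "DELETE":
                return "DEL " + e.key();    // invalidate the cached entry
            default:
                return "NONE";
        }
    }
}
```

Because the events are derived from the committed binlog, the cache is only ever invalidated for changes that actually reached the DB, which is what makes this approach eventually consistent.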

(2) Read‑write cache strategy (update DB + update cache)

Two main write‑back methods are discussed:

Synchronous write‑through: use a transaction to update both the DB and the cache atomically, retrying on failure.

Asynchronous write‑back: write only to the cache and defer DB persistence until cache eviction, which risks data loss if the cache crashes.

For strong consistency, the article recommends synchronous write‑through combined with distributed locking to serialize updates.

High‑concurrency scenarios

When many threads operate concurrently, the following techniques are recommended:

Set cache expiration time and use delayed double‑delete to reduce the window of inconsistency.

Introduce a short sleep after the DB update so that in‑flight reads—which may have loaded the old value and written it back—finish repopulating the cache before the second delete clears it.

Use delayed messages instead of sleep when precise timing is hard.

Apply distributed locks (e.g., Redisson read/write locks) around the "update DB + delete cache" or "update DB + update cache" sequence to ensure only one thread modifies a given key at a time.
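The delayed second delete can be scheduled on a timer instead of blocking the writer thread with Thread.sleep; a minimal sketch, with an in-memory map standing in for Redis:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

public class DelayedDoubleDelete {
    static final Map<String, String> cache = new ConcurrentHashMap<>();
    static final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    // Delayed double delete: first delete, DB update, then a second delete
    // scheduled after delayMs to clear any stale value that a concurrent
    // read repopulated in the meantime. Returns the future of the second
    // delete so callers/tests can wait on it.
    static ScheduledFuture<?> updateWithDoubleDelete(String key, Runnable dbUpdate, long delayMs) {
        cache.remove(key);                 // 1. first delete
        dbUpdate.run();                    // 2. update DB
        return scheduler.schedule(() -> {  // 3. delayed second delete
            cache.remove(key);
        }, delayMs, TimeUnit.MILLISECONDS);
    }
}
```

The same structure maps directly onto a delayed message in a queue when, as the article notes, choosing a precise in-process sleep is hard.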

Sample Java code for lock‑based operations:

    public void write() {
        Lock writeLock = redis.getWriteLock(lockKey);
        writeLock.lock();
        try {
            redis.delete(key);       // evict the cached value
            db.update(record);       // persist the new value
        } finally {
            writeLock.unlock();
        }
    }

    public void read() {
        if (redis.get(key) != null) { // cache hit: nothing to do
            return;
        }
        Lock readLock = redis.getReadLock(lockKey);
        readLock.lock();
        try {
            record = db.get();        // load the latest value from the DB
        } finally {
            readLock.unlock();
        }
        redis.set(key, record);       // repopulate the cache
    }

Strong consistency alternatives

Protocols such as 2PC, 3PC, Paxos, or Raft can provide strong consistency but incur high latency and complexity. Practical alternatives include:

Temporarily store concurrent read requests during a DB update, then serve them after the cache is refreshed.

Serialize all read/write operations through a single‑threaded worker queue.

Bind requests to specific service or DB connections using consistent hashing to ensure ordering.

Use Redis distributed read/write locks to make cache eviction and DB update mutually exclusive.
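The single-threaded worker-queue alternative can be sketched with a one-thread executor; because every read and write is submitted to the same worker, operations are totally ordered and cannot interleave (maps again stand in for MySQL and Redis):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class SerializedStore {
    static final Map<String, String> db = new ConcurrentHashMap<>();
    static final Map<String, String> cache = new ConcurrentHashMap<>();
    // Single worker thread: all operations execute one at a time, in order.
    static final ExecutorService worker = Executors.newSingleThreadExecutor();

    static Future<?> write(String key, String value) {
        return worker.submit(() -> {
            db.put(key, value);   // update DB
            cache.remove(key);    // delete cache — no other op can interleave
        });
    }

    static Future<String> read(String key) {
        return worker.submit(() -> {
            String v = cache.get(key);
            if (v == null) {
                v = db.get(key);
                if (v != null) cache.put(key, v);
            }
            return v;
        });
    }
}
```

Callers block on the returned Future only when they need the result, so the serialization cost is paid per key-space, not per caller; in practice one worker per shard (chosen by consistent hashing, as above) keeps throughput acceptable.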

Additional considerations

Key‑value size should stay below about 1 KB so a response fits within a single MTU‑sized packet for optimal performance. Hot keys should be sharded (e.g., using hashtags) or replicated across nodes. Guard against cache penetration, cache breakdown (thundering herd), and cache avalanche with empty‑result caching, Bloom filters, and staggered expiration policies.
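Empty-result caching, the simplest of those defenses, can be sketched as follows: DB misses are cached under a sentinel value (with a short TTL in a real Redis setup) so repeated lookups for nonexistent keys stop reaching the database. The maps and the `dbHits` counter are illustrative stand-ins:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class NullCaching {
    static final String NULL_SENTINEL = "__NULL__";
    static final Map<String, String> cache = new ConcurrentHashMap<>();
    static final Map<String, String> db = new ConcurrentHashMap<>();
    static int dbHits = 0; // counts how often a lookup falls through to the DB

    static String read(String key) {
        String v = cache.get(key);
        if (v != null) {
            // Sentinel means "known missing": answer without touching the DB.
            return NULL_SENTINEL.equals(v) ? null : v;
        }
        dbHits++;
        v = db.get(key);
        cache.put(key, v == null ? NULL_SENTINEL : v); // cache even the miss
        return v;
    }
}
```

A Bloom filter in front of the cache serves the same purpose probabilistically without storing one sentinel per missing key, at the cost of occasional false positives.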

Conclusion

For read‑write caches, prefer synchronous write‑through (DB + cache) with distributed locks; for read‑only caches, use "update DB + delete cache" and favor the "update DB first" order, combined with message‑queue retries and optional binlog subscription. Adjust the strategy based on consistency requirements, traffic patterns, and hot‑key characteristics.

Author Bio

Xu Xin – Backend Development Engineer at Tencent CSIG Smart Retail R&D Center, MSc graduate of Central South University, responsible for backend development of Tencent Smart Retail services.

Tags: Concurrency, Redis, Caching, Data Consistency, MySQL, Message Queue, Distributed Locks
Written by the High Availability Architecture official account.