
Ensuring Redis‑MySQL Consistency: Strategies and Best Practices

To maintain data consistency between Redis caches and MySQL databases, this article examines common pitfalls and presents three robust solutions—deleting cache before DB writes, updating the DB then removing cache, and implementing delete‑retry mechanisms—plus optional locking for strong consistency.


In our projects we cache frequently accessed (hot) data in Redis to improve system throughput.

When data is modified, inconsistencies can arise between the database and Redis, making consistency crucial. Below are three approaches to guarantee double‑write consistency.

1. Delete Cache Before Operating on the Database

Updating data directly in Redis is not recommended; instead, delete the cached entry: deletion is cheaper than rewriting the value, and the next read repopulates the cache with fresh data from the database. The flow is: delete the Redis key, then write to MySQL.
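As a minimal sketch, with in-memory dicts standing in for Redis and MySQL (the key `user:1` and helper names are illustrative):

```python
# In-memory stand-ins for Redis and MySQL; a real system would use a
# Redis client and a database connection instead.
cache = {}                  # "Redis"
db = {"user:1": "old"}      # "MySQL"

def write_delete_cache_first(key, value):
    """Evict the cached entry first, then update the database."""
    cache.pop(key, None)    # step 1: delete the Redis key
    db[key] = value         # step 2: write the new value to MySQL

def read(key):
    """Cache-aside read: on a miss, load from the DB and repopulate the cache."""
    if key in cache:
        return cache[key]
    value = db[key]
    cache[key] = value
    return value

write_delete_cache_first("user:1", "new")
print(read("user:1"))  # prints "new": the read misses and reloads fresh data
```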

This method can still cause inconsistency under concurrent access:

Scenario: Thread 1 deletes the Redis entry, then stalls before updating the DB; Thread 2 reads from the DB and repopulates Redis with stale data; later Thread 1 updates the DB, leaving Redis with old data.
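The interleaving above can be replayed deterministically with the same in-memory stand-ins (key and values are illustrative):

```python
cache = {}
db = {"user:1": "old"}

# Replay of the race, step by step:
cache.pop("user:1", None)    # Thread 1: deletes the cache entry, then stalls
stale = db["user:1"]         # Thread 2: reads the old value from the DB
cache["user:1"] = stale      # Thread 2: repopulates the cache with stale data
db["user:1"] = "new"         # Thread 1: finally updates the DB

print(cache["user:1"], db["user:1"])  # prints "old new": cache and DB disagree
```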

To mitigate this, we adopt a delayed double-delete strategy: delete the cache, update the database, wait X ms, then delete the cache again.

With delayed double-delete, data read within the X ms window may still be stale, but reads after the second deletion are fresh. X is determined by business requirements; it should be slightly longer than one read-and-repopulate cycle.
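A sketch of delayed double-delete, using `threading.Timer` to schedule the second deletion (the 0.5 s delay and key name are illustrative):

```python
import threading

cache = {"user:1": "old"}
db = {"user:1": "old"}

DELAY_SECONDS = 0.5  # "X": tune to slightly exceed one read-and-repopulate cycle

def delayed_double_delete(key, value):
    cache.pop(key, None)    # first delete
    db[key] = value         # update the database
    # second delete after X ms evicts any stale value that a concurrent
    # reader may have repopulated in the meantime
    timer = threading.Timer(DELAY_SECONDS, lambda: cache.pop(key, None))
    timer.start()
    return timer

t = delayed_double_delete("user:1", "new")
cache["user:1"] = "old"     # simulate a concurrent reader repopulating stale data
t.join()                    # wait for the delayed second delete to fire
print(cache.get("user:1"))  # prints None: the stale entry has been evicted
```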

Note: This approach guarantees eventual consistency but not strong consistency. To achieve strong consistency, a lock must be introduced so that reads and writes to the cache and database are serialized.

Locking reduces throughput, so its use should be weighed against consistency requirements.
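A sketch of the locking approach. A process-local `threading.Lock` is used here for illustration; across multiple services this would be a distributed lock (for example, one built on Redis `SET` with the `NX` option):

```python
import threading

lock = threading.Lock()
cache = {}
db = {"user:1": "old"}

def write(key, value):
    with lock:              # writers hold the lock for delete + DB update
        cache.pop(key, None)
        db[key] = value

def read(key):
    with lock:              # readers cannot repopulate the cache mid-write
        if key in cache:
            return cache[key]
        value = db[key]
        cache[key] = value
        return value

write("user:1", "new")
print(read("user:1"))  # prints "new"
```

Because every read also takes the lock, stale repopulation is impossible, which is exactly the throughput cost the text describes.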

2. Operate on the Database First, Then Delete Cache

In this scheme, the DB update succeeds before the Redis entry is removed. Other threads may read stale data until the cache is cleared and refreshed, providing eventual consistency. In extreme cases, inconsistency can still occur:
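A sketch of this ordering, again with in-memory stand-ins (names are illustrative):

```python
cache = {"user:1": "old"}
db = {"user:1": "old"}

def write_db_first(key, value):
    db[key] = value         # step 1: commit the new value to the database
    cache.pop(key, None)    # step 2: evict the cached copy; if this step fails,
                            # readers see stale data until the key expires, so a
                            # TTL on cached entries is a useful safety net

write_db_first("user:1", "new")
print("user:1" in cache)  # prints False: the stale entry is gone
```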

Scenario: Thread 1 updates the DB but fails to delete the Redis entry; Thread 2 reads the stale cache and returns old data until the cache expires.

3. Delete‑Retry Mechanism

Both previous methods can suffer from cache‑deletion failures. A retry mechanism addresses this issue.

Using Canal to subscribe to the MySQL binlog, a client attempts to delete the corresponding Redis entry whenever a row changes; if the deletion fails, the key is published to a message queue (MQ) and retried until it succeeds. This ensures eventual consistency at the cost of added complexity.
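A sketch of the retry loop, with a `queue.Queue` standing in for the MQ and a hypothetical `on_binlog_change` hook standing in for the Canal client callback (the simulated one-time failure is illustrative):

```python
import queue

cache = {"user:1": "old"}
retry_queue = queue.Queue()   # stand-in for the MQ

fail_once = {"user:1"}        # simulate the first deletion attempt failing

def delete_from_cache(key):
    if key in fail_once:      # simulated transient failure
        fail_once.discard(key)
        return False
    cache.pop(key, None)
    return True

def on_binlog_change(key):
    """Hypothetical hook: called when Canal reports a row change for `key`."""
    if not delete_from_cache(key):
        retry_queue.put(key)  # publish to the MQ for a later retry

def retry_worker():
    while not retry_queue.empty():
        key = retry_queue.get()
        if not delete_from_cache(key):
            retry_queue.put(key)  # keep retrying until the delete succeeds

on_binlog_change("user:1")
retry_worker()
print("user:1" in cache)  # prints False: the retry eventually evicted the entry
```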

Conclusion

In practice, updating the database first and then deleting the cache is recommended: it is simple and keeps the window of inconsistency small.

Regardless of the order, a delete‑retry mechanism should be implemented to handle deletion failures.

For strong consistency between Redis and MySQL, consider adding a locking layer.

Tags: Backend, Redis, MySQL, Cache Consistency, Data Synchronization
Written by

Lobster Programming

Sharing insights on technical analysis and exchange, making life better through technology.
