Cache Consistency Between MySQL and Redis: Design Patterns and Best Practices
This article explains the relationship between MySQL and Redis, discusses why cache consistency is challenging, and details four cache update design patterns—delete‑then‑update, update‑then‑invalidate, read/write‑through, and write‑behind—along with their advantages, drawbacks, and typical execution flows.
MySQL is a relational database used for persisting data and ensuring reliability, while Redis serves as an in‑memory cache to improve data access performance.
Ensuring data consistency between MySQL and Redis—known as the cache consistency problem—is a classic challenge because keeping the cache and the database perfectly synchronized in real time is difficult.
In practice, systems aim to keep the data consistent most of the time and to converge toward consistency eventually, rather than guaranteeing instantaneous strong consistency.
How Cache Inconsistency Occurs
If data never changes, cache inconsistency does not arise. Inconsistency typically appears when data is modified: an update must be applied to both the database and the cache, but they are separate systems and cannot be updated atomically, so there is always a time gap between the two writes. Concurrent reads and writes during that gap make the problem worse.
Cache Update Design Patterns
Four common cache update designs are:
Delete the cache first, then update the database (prone to long‑lasting stale data under concurrency, not recommended).
Update the database first, then delete the cache (Cache‑Aside Pattern).
Update only the cache; the cache synchronously updates the database (Read/Write‑Through Pattern).
Update only the cache; the cache asynchronously updates the database (Write‑Behind Cache Pattern).
Below are detailed explanations of each method.
Delete Cache First, Then Update Database
This approach can cause cache inconsistency under concurrent read/write scenarios.
Typical execution flow:
Client 1 triggers an update for data A.
Client 2 triggers a query for data A.
Client 1 deletes data A from the cache.
Client 2 queries the cache for data A and misses.
Client 2 reads data A from the database and writes it to the cache.
Client 1 updates data A in the database.
As a result, the cache holds stale data A while the database contains the updated value, leading to inconsistency; therefore this method is generally discouraged.
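The race above can be reproduced in a few lines. The sketch below is illustrative only: plain Python dicts stand in for Redis (the cache) and MySQL (the database), and the interleaving is written out by hand in the problematic order.

```python
# Simulated stores; in a real system these would be Redis and MySQL.
db = {"A": "old"}
cache = {"A": "old"}

# Client 1, step 1: delete data A from the cache.
cache.pop("A", None)

# Client 2: cache miss, so it reads the not-yet-updated database value
# and repopulates the cache with the stale value.
value = cache.get("A")
if value is None:
    value = db["A"]          # still "old"
    cache["A"] = value

# Client 1, step 2: update the database.
db["A"] = "new"

print(db["A"], cache["A"])   # new old -> stale cache until eviction
```

Once this interleaving happens, every subsequent read hits the stale cache entry, which is why the inconsistency can be long-lasting.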
Update Database First, Then Invalidate Cache (Cache‑Aside)
This method may still cause a brief period of inconsistency under concurrency.
Typical execution flow:
Client 1 updates data A in the database.
Client 2 queries the cache for data A and receives the old value (cache hit).
Client 1 invalidates the cache entry for data A.
Client 3 later queries the cache, misses, reads from the database, and repopulates the cache.
Eventually the cache and database become consistent; the window of inconsistency is short and acceptable for many applications.
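The read and write paths of Cache-Aside can be sketched as two small helpers. This is a minimal illustration using dicts in place of Redis and MySQL; the function names and the key "A" are invented for the example.

```python
db = {"A": "v1"}   # stands in for MySQL
cache = {}         # stands in for Redis

def read(key):
    """Read path: try the cache first; on a miss, load from the DB and repopulate."""
    value = cache.get(key)
    if value is None:
        value = db[key]
        cache[key] = value
    return value

def write(key, value):
    """Write path: update the database first, then invalidate the cache entry."""
    db[key] = value
    cache.pop(key, None)   # delete rather than update, to narrow race windows

read("A")            # miss -> cache now holds "v1"
write("A", "v2")     # DB updated, cache entry invalidated
print(read("A"))     # next read repopulates the cache with the fresh value
```

Invalidating instead of rewriting the cache on the write path is the key design choice: a deleted entry is simply refetched, whereas two concurrent cache rewrites can land in the wrong order.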
Update Only Cache, Synchronously Sync to Database (Read/Write‑Through)
In this pattern, the business updates the cache, and the cache itself writes the change to the database before returning a result.
Typical execution flow:
Client 1 updates data A in the cache; the cache synchronously updates the database and returns.
Client 2 queries the cache and receives the fresh value.
Read‑Through and Write‑Through behave similarly for reads: if the cache entry is missing or evicted, the cache fetches the data from the database, stores it, and returns it. This approach makes inconsistency very unlikely, but it requires a cache layer that can read from and write to the database itself, which is logic that must be built on top of a plain cache such as Redis.
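A read/write-through cache can be sketched as a single object that the application talks to, which loads from and writes to the backing store synchronously. The class below is an illustrative stand-in, with a dict playing the role of the database.

```python
class WriteThroughCache:
    """Illustrative read/write-through cache over a dict-backed 'database'."""

    def __init__(self, store):
        self.store = store   # backing database (here: a plain dict)
        self.cache = {}

    def get(self, key):
        # Read-through: on a miss, load from the store and cache it.
        if key not in self.cache:
            self.cache[key] = self.store[key]
        return self.cache[key]

    def put(self, key, value):
        # Write-through: update both cache and store before returning.
        self.cache[key] = value
        self.store[key] = value

db = {"A": "old"}
c = WriteThroughCache(db)
c.put("A", "new")            # both cache and db are updated synchronously
print(c.get("A"), db["A"])   # new new
```

Because the write does not return until the store update completes, a subsequent read (Client 2 in the flow above) always sees the fresh value.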
Update Only Cache, Asynchronously Sync to Database (Write‑Behind)
Here the business updates only the cache, and the cache later propagates the change to the database asynchronously.
Typical execution flow:
Client 1 updates data A in the cache and returns immediately.
Client 2 queries the cache and receives the updated value.
The cache asynchronously writes data A to the database.
This pattern offers excellent read/write performance because the client receives a response after the in‑memory operation, but it sacrifices strong consistency; data loss can occur if the cache crashes before persisting to the database.
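The write-behind flow can be sketched with an in-memory write plus a background worker that persists later. This is a simplified illustration: the queue-and-thread setup stands in for a real flusher, which would batch, coalesce, and retry writes, and the data-loss risk is exactly the window where an entry sits only in `cache` and `pending`.

```python
import queue
import threading

db = {}                   # stands in for MySQL
cache = {}                # stands in for Redis
pending = queue.Queue()   # writes awaiting asynchronous persistence

def flusher():
    """Background worker: drain the queue and persist entries to the db."""
    while True:
        item = pending.get()
        if item is None:          # shutdown sentinel
            break
        key, value = item
        db[key] = value           # asynchronous persistence

worker = threading.Thread(target=flusher, daemon=True)
worker.start()

def write(key, value):
    cache[key] = value            # fast in-memory write, returns immediately
    pending.put((key, value))     # persisted later; lost if the cache crashes first

write("A", "v1")
print(cache["A"])                 # v1 is visible to readers right away
pending.put(None)
worker.join()                     # after the flush completes, db catches up
print(db["A"])                    # v1
```

The `join()` here is only to make the example deterministic; in production the flush happens on its own schedule, which is precisely why a crash before the flush loses data.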
Summary
The presented cache update designs are collective wisdom from prior practitioners, each with its own trade‑offs and no perfect solution. When designing systems, aim for a balanced approach that fits your specific business scenario rather than chasing an unattainable ideal.
Source: developer.jdcloud.com/article/2776