Cache Consistency Strategies and Best Practices
Caching reads in Redis can greatly improve read performance, but it introduces consistency challenges whenever the cache and the underlying database (MySQL) diverge. This article compares four cache-aside write strategies and recommends the most reliable one, "update database then delete cache", combined with short expiration times and asynchronous message-queue invalidation to keep data fresh.
Cache Basics
Caching places data in faster storage (memory) so that slower database reads can be avoided. Local in-memory caches (e.g., in-process L1/L2 caches) and remote caches (Redis) both reduce latency, but they also introduce multiple copies of the same data.
Consistency Problem under Cache‑Aside
When data is updated in MySQL, there is no transaction that guarantees the corresponding Redis entry is updated at the same time. This creates a time window where the cache holds stale data. The article shows a diagram of this window and explains that eliminating it completely would require costly distributed transactions, which defeats the purpose of caching.
Even if we aim for eventual consistency, the window should be as short as possible (ideally <1 ms).
Typical Cache‑Aside Read Logic
data = queryDataRedis(key);
if (data == null) {
    data = queryDataMySQL(key);    // cache miss, read from DB
    if (data != null) {
        updateRedis(key, data);    // populate cache
    }
}

This logic is correct for reads; consistency issues mainly appear during writes.
Four Write Strategies
Update DB then update cache
Update cache then update DB
Delete cache then update DB
Update DB then delete cache (the most reliable)
Each strategy is examined with concrete thread‑interleaving examples that show how stale data can appear. For instance, when two threads update the same record, the order of DB updates and cache updates may differ, leading to inconsistent values (e.g., DB = 98, cache = 99).
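The DB = 98, cache = 99 outcome can be reproduced with a scripted interleaving. The sketch below uses two plain maps standing in for MySQL and Redis; the class and key names are illustrative, not from the article. Thread B commits to the DB last, but thread A's cache write arrives last:

```java
import java.util.HashMap;
import java.util.Map;

// Scripted write-write race under "update DB then update cache":
// the DB ends at 98 while the cache ends at 99.
public class WriteWriteRace {
    static Map<String, Integer> db = new HashMap<>();
    static Map<String, Integer> cache = new HashMap<>();

    public static void main(String[] args) {
        // Thread A wants to write 99, thread B wants to write 98.
        db.put("stock", 99);      // A: update DB -> 99
        db.put("stock", 98);      // B: update DB -> 98 (B commits last)
        cache.put("stock", 98);   // B: update cache -> 98
        cache.put("stock", 99);   // A: update cache -> 99 (A's cache write is delayed)

        // The DB holds 98 but the cache serves 99 until it is refreshed.
        System.out.println("db=" + db.get("stock") + " cache=" + cache.get("stock"));
    }
}
```

A distributed lock around both writes would force each thread's DB and cache updates to happen atomically with respect to the other thread, at the cost of extra coordination.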
Strategy Details
Update DB → Update Cache : In write‑write concurrency the cache may be updated out of order; a distributed lock can mitigate the problem but adds overhead.
Update Cache → Update DB : If the DB update fails after the cache has been changed, the cache becomes permanently dirty.
Delete Cache → Update DB : Deleting the cache before the DB write can cause a read thread to repopulate the cache with stale data.
Update DB → Delete Cache : This approach limits the inconsistency window to the short period between DB commit and cache deletion. In most cases the window is negligible (≈1 ms) and can be ignored.
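A minimal sketch of the recommended write path, again with maps simulating MySQL and Redis (the method names are assumptions for illustration). The write commits to the DB first, then invalidates the cache, so the next read repopulates it with fresh data:

```java
import java.util.concurrent.ConcurrentHashMap;

// "Update DB then delete cache": writes invalidate, reads repopulate.
public class WriteThenDelete {
    static ConcurrentHashMap<String, String> db = new ConcurrentHashMap<>();
    static ConcurrentHashMap<String, String> cache = new ConcurrentHashMap<>();

    static void write(String key, String value) {
        db.put(key, value);   // 1. commit the new value to the database
        cache.remove(key);    // 2. delete the cache entry; the stale copy is gone
    }

    static String read(String key) {
        String v = cache.get(key);
        if (v == null) {                       // cache miss
            v = db.get(key);                   // fall back to the database
            if (v != null) cache.put(key, v);  // repopulate the cache
        }
        return v;
    }

    public static void main(String[] args) {
        write("user:1", "alice");
        System.out.println(read("user:1")); // prints "alice", now cached
        write("user:1", "bob");             // DB updated, stale entry deleted
        System.out.println(read("user:1")); // prints "bob"
    }
}
```

The only inconsistency window is between the `db.put` and the `cache.remove`, which is exactly the short period the article describes.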
Final Consistency Guarantees
To bound the inconsistency period, set an expiration time on cache entries (e.g., 1 minute). Even if a cache update fails, the entry will eventually expire and be refreshed from the DB.
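The TTL bound can be sketched as follows. With Redis this would be a `SET key value EX 60`; here a stored deadline simulates expiry (the class and field names are assumptions), and expired entries are treated as misses so that even a dirty entry self-heals:

```java
import java.util.concurrent.ConcurrentHashMap;

// Expiration-bounded cache: reads treat expired entries as misses,
// so a dirty entry can survive at most one TTL.
public class TtlCache {
    static class Entry {
        final String value;
        final long expiresAt;
        Entry(String value, long ttlMillis) {
            this.value = value;
            this.expiresAt = System.currentTimeMillis() + ttlMillis;
        }
    }

    static ConcurrentHashMap<String, Entry> cache = new ConcurrentHashMap<>();

    static void put(String key, String value, long ttlMillis) {
        cache.put(key, new Entry(value, ttlMillis));
    }

    static String get(String key) {
        Entry e = cache.get(key);
        if (e == null || System.currentTimeMillis() >= e.expiresAt) {
            cache.remove(key);  // expired: count as a miss
            return null;        // caller falls back to the DB and repopulates
        }
        return e.value;
    }

    public static void main(String[] args) throws InterruptedException {
        put("stock", "99", 50);            // 50 ms TTL for the demo (1 minute in practice)
        System.out.println(get("stock"));  // prints "99"
        Thread.sleep(80);                  // wait past the deadline
        System.out.println(get("stock"));  // prints "null": the stale entry expired
    }
}
```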
For more robust guarantees, use a reliable message queue (MQ) with at‑least‑once delivery to asynchronously delete or update cache keys. Transactional MQ (e.g., RocketMQ) or a “message table” pattern can ensure the cache‑invalidating message is persisted together with the DB transaction.
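The at-least-once deletion loop can be sketched with an in-process `BlockingQueue` standing in for a real MQ such as RocketMQ (the simulated Redis failure and the method names are assumptions for illustration). A failed delete is re-enqueued instead of acknowledged, so the cache is eventually cleaned:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.LinkedBlockingQueue;

// Async cache invalidation with at-least-once delivery:
// a failed DELETE is retried until it succeeds.
public class AsyncInvalidation {
    static ConcurrentHashMap<String, String> cache = new ConcurrentHashMap<>();
    static BlockingQueue<String> mq = new LinkedBlockingQueue<>();
    static int failuresLeft = 1; // simulate one transient Redis failure

    static void deleteFromCache(String key) {
        if (failuresLeft-- > 0) throw new RuntimeException("redis timeout");
        cache.remove(key);
    }

    public static void main(String[] args) throws InterruptedException {
        cache.put("user:1", "stale");
        mq.put("user:1"); // persisted alongside the DB transaction (message table / transactional MQ)

        // Consumer loop: ack only after the delete succeeds, else re-enqueue.
        while (!mq.isEmpty()) {
            String key = mq.take();
            try {
                deleteFromCache(key);
            } catch (RuntimeException e) {
                mq.put(key); // at-least-once: retry until the cache is clean
            }
        }
        System.out.println(cache.containsKey("user:1")); // prints "false"
    }
}
```

Because deleting a key is idempotent, duplicate deliveries from the at-least-once MQ are harmless.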
Handling Complex Multi‑Cache Scenarios
When a single DB record affects multiple cache keys (e.g., user profile, leaderboard, daily stats), the article recommends publishing an MQ event after the DB update and letting dedicated services subscribe to maintain each key. Alternatively, subscribe to MySQL binlog (using tools like Canal) to detect changes and trigger cache updates centrally.
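The fan-out pattern can be sketched as one change event dispatched to several subscribers, each owning one cache key derived from the same record. The subscriber list stands in for MQ consumers or a binlog consumer (Canal); the key names mirror the article's examples, while the handler wiring is an assumption:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;
import java.util.TreeSet;
import java.util.function.Consumer;

// One DB-change event fans out to every cache key derived from the record.
public class EventFanOut {
    static Set<String> invalidated = new TreeSet<>();
    static List<Consumer<Long>> subscribers = new ArrayList<>();

    public static void main(String[] args) {
        // Each subscriber maintains one cache key for the changed user.
        subscribers.add(id -> invalidated.add("profile:" + id));
        subscribers.add(id -> invalidated.add("leaderboard:" + id));
        subscribers.add(id -> invalidated.add("stats:daily:" + id));

        long changedUserId = 42L;                          // one row changed in MySQL
        subscribers.forEach(s -> s.accept(changedUserId)); // publish the event

        System.out.println(invalidated);
        // prints [leaderboard:42, profile:42, stats:daily:42]
    }
}
```

Centralizing the dispatch this way means new derived caches only need a new subscriber, not changes to every write path.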
Conclusion
For read‑heavy workloads, the "update DB then delete cache" strategy offers the best trade‑off between performance and consistency. For write‑heavy or read‑write balanced workloads, "update DB then update cache" may be preferable, provided proper locking or MQ mechanisms are in place.
Tencent Cloud Developer
Official Tencent Cloud community account that brings together developers, shares practical tech insights, and fosters an influential tech exchange community.