
Ensuring Data Consistency Between Database and Redis Cache in High-Concurrency Scenarios

This article analyzes data consistency challenges between databases and Redis caches in high‑traffic applications, examines write order pitfalls and concurrency issues, and presents the Cache‑Aside pattern with retry and expiration strategies to achieve eventual consistency.

IT Architects Alliance

In many real projects, caches are introduced to alleviate database query pressure in high‑QPS scenarios, trading memory space for query efficiency. However, once a cache is added, keeping data consistent between the cache and the database becomes a critical problem.

Initially, when user numbers and traffic are low, the service can read and write directly to the database, which seems sufficient. As traffic grows, database latency becomes a bottleneck. Adding more database instances can distribute load, but this approach raises hardware and operational costs and does not scale indefinitely.

Introducing a cache (e.g., Redis) allows the service to read from memory first, dramatically speeding up access. Yet this creates a new consistency challenge: should the database be updated before the cache, or vice versa?

Write database then cache: If the database write succeeds but the cache update fails, the cache holds stale data, leading to incorrect query results.

Write cache then database: If the cache update succeeds but the database write fails, the cache contains the latest value while the database remains outdated; once the cache expires, stale data will be read from the database.

In high‑concurrency environments, the problem worsens. Suppose two threads update a product’s inventory: Thread 1 writes the database (value 1), but before it can update the cache, Thread 2 writes the database (value 3) and updates the cache. Thread 1’s delayed cache write then lands last, leaving the cache holding the older value (1) while the database holds the newer value (3), causing inconsistency.
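The race above can be reproduced deterministically by hand‑interleaving the two writes. Here is a minimal sketch using plain dicts as stand‑ins for the database and the Redis cache (the key name and the structures are hypothetical, not from the article):

```python
# Stand-ins for the real database and Redis cache (hypothetical).
db = {}
cache = {}

# Hand-interleaved to mimic the unlucky scheduling described in the text:
db["inventory"] = 1       # Thread 1 writes the database (value 1)
db["inventory"] = 3       # Thread 2 writes the database (value 3)
cache["inventory"] = 3    # Thread 2 updates the cache first
cache["inventory"] = 1    # Thread 1's delayed cache write lands last

# The database now holds 3 while the cache holds the stale value 1.
```

No locking can be inferred from the writes alone, which is why write ordering by itself cannot guarantee consistency.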

To address these issues, the classic Cache‑Aside Pattern is recommended:

Read from the cache first; if the data exists, return it.

If a cache miss occurs, fetch the data from the database and populate the cache.

When updating data, update the database first, then delete the corresponding cache entry.

This approach ensures that stale cache entries are removed, forcing the next read to retrieve fresh data from the database and repopulate the cache, effectively achieving eventual consistency.
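The three steps above can be sketched as follows, again using dicts in place of a real database and Redis client (the key name and helper functions are assumptions for illustration only):

```python
import json

db = {"product:42": {"stock": 7}}   # stand-in for the database
cache = {}                           # stand-in for Redis

def read(key):
    # 1. Read from the cache first; on a hit, return it immediately.
    if key in cache:
        return json.loads(cache[key])
    # 2. On a miss, fetch from the database and populate the cache.
    value = db[key]
    cache[key] = json.dumps(value)
    return value

def update(key, value):
    # 3. Update the database first, then delete the cache entry,
    #    forcing the next read to repopulate with fresh data.
    db[key] = value
    cache.pop(key, None)
```

Deleting rather than updating the cache on writes is the key design choice: it sidesteps the write‑ordering race, because the next read always rebuilds the entry from the database.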

Ensuring the second step (cache deletion) succeeds can be handled by implementing retry mechanisms. Simple retries may still fail, so an asynchronous retry using a message queue is advisable, allowing a consumer to retry the operation without blocking client requests.
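One way to sketch the asynchronous retry is with an in‑process queue standing in for the message broker, and a `ConnectionError` simulating a transient cache failure (all names here are hypothetical):

```python
import queue

cache = {"product:42": "stale"}
retry_queue = queue.Queue()  # stand-in for a message queue (e.g., Kafka/RabbitMQ)
_fail_once = [True]          # simulate one transient cache outage

def delete_cache(key):
    if _fail_once[0]:
        _fail_once[0] = False
        raise ConnectionError("cache temporarily unreachable")
    cache.pop(key, None)

def delete_with_retry(key):
    # On failure, enqueue the key for asynchronous retry instead of
    # blocking the client request.
    try:
        delete_cache(key)
    except ConnectionError:
        retry_queue.put(key)

def retry_consumer():
    # A consumer drains the queue and retries each deletion; a production
    # version would re-enqueue or dead-letter on repeated failure.
    while not retry_queue.empty():
        delete_cache(retry_queue.get())

delete_with_retry("product:42")  # first attempt fails; key is queued
retry_consumer()                 # consumer retries; stale entry removed
```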

Additionally, setting appropriate cache expiration times helps mitigate inconsistency: data with low access frequency can expire naturally, freeing memory and providing a fallback to the database for fresh data.
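Expiration can be sketched with a TTL‑tagged dict standing in for Redis's `SETEX`/`EXPIRE` behavior (a toy model of key expiry, not a real client):

```python
import time

cache = {}  # key -> (value, expiry deadline); stand-in for Redis with a TTL

def set_with_ttl(key, value, ttl_seconds):
    cache[key] = (value, time.monotonic() + ttl_seconds)

def get(key):
    entry = cache.get(key)
    if entry is None:
        return None
    value, expires_at = entry
    if time.monotonic() >= expires_at:
        del cache[key]  # expired: the caller falls back to the database
        return None
    return value

set_with_ttl("product:42", "7", 0.01)  # very short TTL for the demo
time.sleep(0.02)
assert get("product:42") is None       # expired; next read hits the database
```

Even if a deletion is missed, a bounded TTL caps how long a stale entry can survive, which is what makes expiration an effective fallback.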

While strict strong consistency is costly and may degrade performance, tolerating occasional inconsistency with robust retry, expiration, and cache‑aside strategies balances performance and data correctness.

In summary, the article examines database‑Redis consistency problems, especially under high concurrency, and proposes the Cache‑Aside pattern combined with retries and expiration to achieve eventual consistency.

Tags: Database, Redis, High Concurrency, Cache Consistency, Cache-Aside, Eventual Consistency
Written by

IT Architects Alliance

Discussion and exchange on system, internet, large‑scale distributed, high‑availability, and high‑performance architectures, as well as big data, machine learning, AI, and architecture adjustments with internet technologies. Includes real‑world large‑scale architecture case studies. Open to architects who have ideas and enjoy sharing.
