
Cache Consistency Strategies: Cache‑Aside Pattern, Deleting vs. Updating Cache, and Queue‑Based Solutions for High Concurrency

The article explains how distributed cache‑aside patterns work, why deleting stale cache entries is often preferable to updating them, analyzes basic and complex cache‑database inconsistency scenarios, and proposes a JVM‑queue‑driven, single‑threaded update mechanism with practical considerations for high‑concurrency environments.

Top Architect

Cache Aside Pattern

The classic cache‑plus‑database read/write model is the Cache Aside Pattern: on a read, the cache is checked first; if missing, the database is queried, the result is stored in the cache, and the response is returned. On an update, the database is written first and then the cache entry is deleted.
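The two paths above can be sketched as follows. This is a minimal illustration, not a production client: a `ConcurrentHashMap` stands in for the cache (e.g., Redis), a second map stands in for the database, and the class and method names (`CacheAside`, `read`, `write`) are illustrative.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal cache-aside sketch: one map stands in for the cache,
// another for the database.
public class CacheAside {
    private final Map<String, String> cache = new ConcurrentHashMap<>();
    private final Map<String, String> database = new ConcurrentHashMap<>();

    // Read path: check the cache first; on a miss, load from the
    // database, populate the cache, and return the value.
    public String read(String key) {
        String value = cache.get(key);
        if (value == null) {
            value = database.get(key);      // cache miss: go to the DB
            if (value != null) {
                cache.put(key, value);      // repopulate the cache
            }
        }
        return value;
    }

    // Write path (classic form): update the database first,
    // then delete the cache entry rather than updating it.
    public void write(String key, String value) {
        database.put(key, value);
        cache.remove(key);
    }
}
```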

Why Delete the Cache Instead of Updating It?

In many complex scenarios the cached value is not a direct copy of a single table column but the result of calculations involving multiple tables; recomputing the cache on every write can be expensive, especially when the cache is rarely read.

Deleting the cache implements a lazy‑computation approach: the cache is rebuilt only when a subsequent read actually needs the data, reducing unnecessary work.
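The lazy-computation effect can be seen in a small sketch. Here the cached value represents an expensive multi-table aggregate (faked by a dummy computation); a write only invalidates it, and the recomputation runs only if the key is actually read again. The `rebuilds` counter and all names are illustrative, not from the article.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Lazy recomputation: writes delete the cached aggregate; it is
// rebuilt only on the next read that needs it.
public class LazyAggregateCache {
    private final Map<String, Integer> cache = new ConcurrentHashMap<>();
    final AtomicInteger rebuilds = new AtomicInteger(); // counts recomputations

    // Stand-in for an expensive multi-table aggregation query.
    private int expensiveAggregate(String key) {
        rebuilds.incrementAndGet();
        return key.length() * 100; // dummy computation
    }

    public int read(String key) {
        return cache.computeIfAbsent(key, this::expensiveAggregate);
    }

    // A write invalidates the aggregate instead of recomputing it; if
    // the key is never read again, the recomputation never happens.
    public void invalidate(String key) {
        cache.remove(key);
    }
}
```

Repeated writes to a rarely read key cost only cheap deletions, which is exactly the trade-off the delete-instead-of-update choice is making.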

Basic Cache‑Inconsistency Problem and Solution

If the cache is deleted after the database update and the delete fails, the cache holds stale data, causing inconsistency.

Solution: delete the cache *before* updating the database. If the database update then fails, the database still holds the old value; subsequent reads miss the cache, read that old value, and repopulate the cache, so cache and database remain consistent.
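The reordering amounts to swapping two lines in the write path, sketched below with the same in-memory stand-ins as before; the `simulateDbFailure` flag is an illustrative device for forcing the failure case.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Delete-cache-first write path: if the database update fails, the cache
// is merely empty, and the next read repopulates it with the old DB value.
public class DeleteFirstWriter {
    final Map<String, String> cache = new ConcurrentHashMap<>();
    final Map<String, String> database = new ConcurrentHashMap<>();

    public void write(String key, String value, boolean simulateDbFailure) {
        cache.remove(key);                    // step 1: delete the cache entry
        if (simulateDbFailure) {
            throw new RuntimeException("db update failed");
        }
        database.put(key, value);             // step 2: update the database
    }

    public String read(String key) {
        // On a miss, fall back to the database and repopulate the cache.
        return cache.computeIfAbsent(key, database::get);
    }
}
```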

Analysis of More Complex Inconsistency Scenarios

If a write deletes the cache and a concurrent read arrives before the database update finishes, the read fetches the old value from the database and repopulates the cache with it; once the write finally commits, the cache holds stale data again.

Under high‑traffic, concurrent read‑write workloads, such race conditions become likely.

Proposed Solution

When updating data, route the operation (identified by a unique key) to an internal JVM queue. On a read, if the cache is missing, enqueue a cache‑refresh request for the same key.

Each queue is serviced by a single worker thread that processes operations sequentially: delete the cache, then update the database, then later read the latest value and write it back to the cache.

The queue can filter duplicate cache‑refresh requests to avoid redundant work.

If a read request waits too long, it can either keep polling for the refreshed value or fall back to reading the current (possibly stale) database value after a timeout.
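Under the assumptions above, the scheme can be sketched in one class: keys are hashed to a fixed set of in-JVM queues, each drained by a single worker thread, so the delete-cache / update-DB / refresh-cache steps for any one key are serialized; duplicate refresh requests are filtered with a pending-key set, and a missing read polls the cache until a timeout, then falls back to the database. All names (`QueuedCacheUpdater`, `pendingRefresh`, etc.) and the in-memory stand-ins for Redis and the database are illustrative.

```java
import java.util.Map;
import java.util.Set;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.LinkedBlockingQueue;

// Queue-based serialization: all operations for a given key land on the
// same queue, and each queue has exactly one worker thread.
public class QueuedCacheUpdater {
    private final Map<String, String> cache = new ConcurrentHashMap<>();
    private final Map<String, String> database = new ConcurrentHashMap<>();
    private final BlockingQueue<Runnable>[] queues;
    private final Set<String> pendingRefresh = ConcurrentHashMap.newKeySet();

    @SuppressWarnings("unchecked")
    public QueuedCacheUpdater(int numQueues) {
        queues = new BlockingQueue[numQueues];
        for (int i = 0; i < numQueues; i++) {
            BlockingQueue<Runnable> q = new LinkedBlockingQueue<>();
            queues[i] = q;
            Thread worker = new Thread(() -> {      // single worker per queue
                try {
                    while (true) q.take().run();
                } catch (InterruptedException e) { /* shut down */ }
            });
            worker.setDaemon(true);
            worker.start();
        }
    }

    // Route every operation for a key to the same queue.
    private BlockingQueue<Runnable> queueFor(String key) {
        return queues[Math.floorMod(key.hashCode(), queues.length)];
    }

    // Update path: delete cache, update DB, then write back the latest
    // value, all as one sequentially executed task.
    public void update(String key, String value) {
        queueFor(key).add(() -> {
            cache.remove(key);
            database.put(key, value);
            cache.put(key, database.get(key));
        });
    }

    // Read path: on a miss, enqueue a deduplicated refresh request, then
    // poll the cache until the value appears or the timeout expires.
    public String read(String key, long timeoutMillis) throws InterruptedException {
        String value = cache.get(key);
        if (value != null) return value;
        if (pendingRefresh.add(key)) {              // filter duplicate refreshes
            queueFor(key).add(() -> {
                String latest = database.get(key);
                if (latest != null) cache.put(key, latest);
                pendingRefresh.remove(key);
            });
        }
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            value = cache.get(key);
            if (value != null) return value;        // refreshed value arrived
            Thread.sleep(5);                        // keep polling
        }
        return database.get(key);                   // timeout: fall back to the DB
    }
}
```

The single worker per queue is what removes the race: a read that arrives after an update for the same key cannot repopulate the cache from the database until the update task, ahead of it in the queue, has finished.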

High‑Concurrency Considerations

1. Read request blocking: ensure read timeouts are respected; excessive queue buildup can cause many reads to time out and hit the database directly.

2. Read request volume: perform load testing to verify the system can handle read spikes without excessive latency.

3. Routing to the same service instance: for a given data item, all update and cache‑refresh operations should be routed to the same instance (e.g., via Nginx hash routing) to preserve ordering.

4. Hot‑item skew: hot keys may overload a single queue or instance; consider sharding queues or scaling out instances to distribute load.

A rough capacity estimate suggests that a single machine handling a few hundred writes per second with modest queue depths can keep read latency under 200 ms; actual figures should be confirmed by load testing.

Overall, the queue‑based lazy‑refresh approach provides a practical way to maintain cache‑database consistency while minimizing unnecessary cache updates in high‑throughput backend systems.

Tags: Backend, distributed systems, Cache, high concurrency, consistency, cache aside, Queue
Written by

Top Architect

Top Architect focuses on sharing practical architecture knowledge, covering enterprise, system, website, large‑scale distributed, and high‑availability architectures, plus architecture adjustments using internet technologies. We welcome idea‑driven, sharing‑oriented architects to exchange and learn together.
