Choosing the Right Cache Update Pattern: Cache‑Aside, Write‑Back, Read‑Through

This article systematically compares three cache update patterns (cache-aside, asynchronous write-back, and read/write-through), detailing their implementation steps, suitable scenarios, advantages, and drawbacks, along with practical tips such as delayed double deletion and proactive cache refreshing to balance performance and consistency.

Java Baker

Hello, I am Java Baker. How to update the cache and the database while balancing performance and consistency is a common design topic; below I systematically summarize the main cache update patterns.

1. Cache‑aside

Implementation

Query: check the cache first; on a miss, query the DB and write the result to the cache with an appropriate TTL.

Update: update the DB first, then delete the cache; for extreme cases, introduce delayed double-delete.

We do not delete the cache before updating the DB because a concurrent query could read the old DB value and write it back to the cache. We do not update the cache immediately after the DB because the two writes cannot be made atomic, and concurrent write ordering may leave old data in the cache.

Delayed double-delete guards against an extreme case in which a read thread writes an old DB value into the cache. Three things must line up: the cache entry has expired, a read thread reads the old value from the DB, and then, after the write thread has updated the DB and deleted the cache, the read thread writes that stale value into the cache.

Therefore, after the first cache deletion, waiting a short period before deleting again ensures eventual consistency between the cache and the DB. The following diagram shows the cache-aside architecture with delayed double-delete.

Cache‑aside query scenario:

Cache‑aside update scenario:
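The query and update flows above can be sketched roughly as follows. This is a minimal, hypothetical illustration: plain `ConcurrentHashMap`s stand in for Redis and the DB, and the 500 ms delay is an arbitrary placeholder for the double-delete interval.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class CacheAside {
    // Stand-ins for Redis and the database (hypothetical; real code would use clients).
    static final Map<String, String> cache = new ConcurrentHashMap<>();
    static final Map<String, String> db = new ConcurrentHashMap<>();
    static final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor(r -> {
                Thread t = new Thread(r);
                t.setDaemon(true);
                return t;
            });

    // Query: check the cache first; on a miss, read the DB and backfill the cache.
    static String get(String key) {
        String value = cache.get(key);
        if (value == null) {
            value = db.get(key);
            if (value != null) {
                cache.put(key, value); // real code would also set a TTL here
            }
        }
        return value;
    }

    // Update: write the DB first, delete the cache, then delete again after a
    // short delay to evict any stale value a concurrent reader may have backfilled.
    static void update(String key, String value) {
        db.put(key, value);
        cache.remove(key);
        scheduler.schedule(() -> cache.remove(key), 500, TimeUnit.MILLISECONDS);
    }
}
```

The delay should be a little longer than a typical read-plus-backfill round trip, so the second delete lands after any in-flight stale write.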

Applicable scenarios

Most scenarios

Advantages

When the data volume is large, data can be loaded into the cache on demand.

Disadvantages

If a hotspot key expires, many requests bypass the cache and hit the DB directly, causing CPU spikes.

Cache‑aside optimization: proactive pre‑refresh

To solve hotspot expiration, set a relatively long TTL and proactively refresh hotspot keys before they expire.

Based on data size

If the data set is large, maintain a whitelist of hotspot keys; better still, auto-discover hotspots and update the whitelist automatically.

If the data set is small, consider loading everything into the cache permanently, e.g., global configuration data.

Based on refresh trigger

Scheduled pull: the program periodically reads the whitelisted keys from the DB and updates the cache.

Heterogeneous data: listen to MySQL change events (e.g., via the binlog) and trigger a cache update whenever the DB changes.

The heterogeneous-data approach is preferred because cache updates are timely and the mechanism can be made generic, so individual businesses need no extra development.
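The simpler scheduled-pull trigger can be sketched like this. It is a hypothetical illustration: a `Map` stands in for the cache, and the `dbLoader` function stands in for a real DB query.

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.function.Function;

public class HotspotRefresher {
    final Map<String, String> cache = new ConcurrentHashMap<>();
    final List<String> whitelist;                 // hotspot keys to keep warm
    final Function<String, String> dbLoader;      // stands in for a DB query

    HotspotRefresher(List<String> whitelist, Function<String, String> dbLoader) {
        this.whitelist = whitelist;
        this.dbLoader = dbLoader;
    }

    // One refresh pass: overwrite every whitelisted key before its TTL expires.
    void refreshOnce() {
        for (String key : whitelist) {
            cache.put(key, dbLoader.apply(key));
        }
    }

    // Background loop driving the refresh at a fixed period.
    void start(long periodSeconds) {
        ScheduledExecutorService ses = Executors.newSingleThreadScheduledExecutor(r -> {
            Thread t = new Thread(r);
            t.setDaemon(true);
            return t;
        });
        ses.scheduleAtFixedRate(this::refreshOnce, 0, periodSeconds, TimeUnit.SECONDS);
    }
}
```

The refresh period must be shorter than the TTL, so a hotspot key is always rewritten before it can expire.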

2. Asynchronous write‑back (write‑back)

Implementation

Query: read only from the cache.

Update: write to the cache first, then send a message or use a scheduled task to asynchronously write to the DB.
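A minimal write-back counter might look like the sketch below. It is hypothetical: an `AtomicLong` in a map stands in for a Redis `INCR`, a second map stands in for the DB, and `drain()` stands in for the consumer of a message queue or the fallback scheduled task.

```java
import java.util.Map;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.atomic.AtomicLong;

public class WriteBackCounter {
    final Map<String, AtomicLong> cache = new ConcurrentHashMap<>(); // stand-in for Redis
    final Map<String, Long> db = new ConcurrentHashMap<>();          // stand-in for the DB
    final BlockingQueue<String> dirtyKeys = new LinkedBlockingQueue<>();

    // Update: write the cache only, and mark the key dirty for later persistence.
    long increment(String key) {
        long v = cache.computeIfAbsent(key, k -> new AtomicLong()).incrementAndGet();
        dirtyKeys.offer(key);
        return v;
    }

    // Query: read from the cache only.
    long get(String key) {
        AtomicLong v = cache.get(key);
        return v == null ? 0 : v.get();
    }

    // Asynchronous flush: persist the latest cached value of each dirty key.
    void drain() {
        String key;
        while ((key = dirtyKeys.poll()) != null) {
            db.put(key, cache.get(key).get());
        }
    }
}
```

Between an `increment` and the next `drain`, the DB lags the cache; that window is exactly the temporary inconsistency this pattern accepts.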

Applicable scenarios

High QPS, extremely hot data, prioritize performance.

Examples:

Counting statistics: pages that continuously refresh visit counts.

Hot product inventory deduction: Redis decrements stock, then asynchronously persists to DB.

Advantages

Supports high QPS and hotspot scenarios.

Disadvantages

Temporary inconsistency between cache and DB; requires a message-driven trigger plus a fallback scheduled task to guarantee persistence.

3. Read/Write‑through

Implementation

Both cache‑aside and write‑back require the application to control cache and DB reads/writes. Read/Write‑through delegates this control to the underlying storage service, which maintains cache and persistence, so the application need not be aware. However, it heavily depends on the reliability of the storage service and is less common in practice.
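The key idea, the storage layer owning the cache so the application never touches it, can be sketched with a simple wrapper. This is a hypothetical stand-in for what read-through libraries such as Caffeine's `LoadingCache` do internally; the `backingLoader` function stands in for the DB.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

public class ReadThroughStore {
    final Map<String, String> cache = new ConcurrentHashMap<>();
    final Function<String, String> backingLoader; // stands in for the DB

    ReadThroughStore(Function<String, String> backingLoader) {
        this.backingLoader = backingLoader;
    }

    // On a miss the store itself loads from the backing source and caches the
    // result; the caller cannot tell a hit from a miss.
    String get(String key) {
        return cache.computeIfAbsent(key, backingLoader);
    }
}
```

The application only ever calls `get`; whether the value came from the cache or the backing store is the store's concern, which is precisely the delegation this pattern describes.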

4. Continuous optimization

Think of it as building with Lego; optimize according to actual conditions.

Multi‑level cache

Add a local cache such as Caffeine.

Add heterogeneous data sources beyond the DB, e.g., HBase or Elasticsearch, and query them on a cache miss before falling back to the DB.
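A multi-level lookup can be sketched as below. It is a hypothetical illustration: plain maps stand in for a local cache such as Caffeine and a remote cache such as Redis, and `dbLoader` stands in for the DB (or a heterogeneous source in front of it).

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

public class MultiLevelCache {
    final Map<String, String> local = new ConcurrentHashMap<>();  // e.g., Caffeine
    final Map<String, String> remote = new ConcurrentHashMap<>(); // e.g., Redis
    final Function<String, String> dbLoader;                      // last-resort source

    MultiLevelCache(Function<String, String> dbLoader) {
        this.dbLoader = dbLoader;
    }

    // Check local, then remote, then the DB, backfilling each level that missed.
    String get(String key) {
        String v = local.get(key);
        if (v != null) return v;
        v = remote.get(key);
        if (v == null) {
            v = dbLoader.apply(key);
            if (v != null) remote.put(key, v); // backfill the remote level
        }
        if (v != null) local.put(key, v);      // backfill the local level
        return v;
    }
}
```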

Logical expiration

Design the architecture to fit the business; e.g., use logical expiration to avoid a cache stampede: record the business start/end times, and set a TTL slightly longer than the business window plus a random offset.
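The random-offset part can be sketched in a couple of lines. This is a hypothetical helper; the base TTL is assumed to already cover the business start/end window.

```java
import java.util.concurrent.ThreadLocalRandom;

public class TtlJitter {
    // Returns a TTL in [baseTtlSeconds, baseTtlSeconds + maxJitterSeconds], so
    // keys written together do not all expire (and stampede the DB) together.
    static long ttlWithJitter(long baseTtlSeconds, long maxJitterSeconds) {
        return baseTtlSeconds + ThreadLocalRandom.current().nextLong(maxJitterSeconds + 1);
    }
}
```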

Strong consistency scenarios

When strong consistency is required, query DB directly and ignore cache, e.g., checking price during order placement.

Consider RocksDB instead of Redis

RocksDB is a persistent key-value store with a built-in block cache; it deserves a dedicated article.

Conclusion

For most cases, use cache-aside; additionally, refresh hotspot keys proactively via DB change listeners or scheduled refresh.

For high QPS and hot keys, use asynchronous write‑back, accepting short‑term inconsistency.

Continuously optimize: add multi‑level cache, heterogeneous data, logical expiration, and use DB‑only reads for strong consistency.

Further reading: "Architecture Essentials: Local Cache Principles and Applications" and "Spring Cache Source Code Analysis".

Republication Notice

This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact admin@besthub.dev and we will review it promptly.

Tags: optimization, caching, consistency, cache-aside, read-through, write-back
Written by Java Baker

Java architect and Raspberry Pi enthusiast, dedicated to writing high-quality technical articles; the same name is used across major platforms.
