
Cache Read/Write Strategies: Cache Aside, Read/Write Through, and Write Back

This article explains common cache read/write strategies—including Cache Aside, Read/Write Through, and Write Back—detailing their mechanisms, advantages, drawbacks, and suitable scenarios to help developers choose the appropriate approach for different backend workloads.

Code Ape Tech Column

Hello everyone, I'm Chen.

Cache read/write may seem simple—read from cache first, fall back to the database on miss, then write back to cache—but different business scenarios require different strategies.

When selecting a strategy, consider factors such as dirty data, performance, and cache hit rate.

Using a standard "cache + database" setup, we will analyze classic cache strategies and their applicable scenarios.

Free books at the end of the article.

Cache Aside Strategy

Consider a simple e‑commerce user table with two columns, ID and Age. The cache stores age by ID. To change user 1's age from 19 to 20, a naive approach updates the database and then the cache.

This can cause inconsistency: concurrent updates may write different values to the database and cache, leading to mismatched data.

Directly updating the cache also risks lost updates when multiple requests read‑modify‑write the same cached value.

A common solution is to delete the cache entry after updating the database; on the next read, the cache is repopulated from the database.

The Cache Aside strategy treats the database as the source of truth and loads data into the cache on demand. Its read steps are:

Read from cache.

If hit, return the data.

If miss, query the database.

Write the result to the cache and return it.

The write steps are:

Update the database record.

Delete the corresponding cache entry.
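The read and write steps above can be sketched in a few lines. This is a minimal illustration: `db` and `cache` are plain dicts standing in for a real database and a cache such as Redis.

```python
db = {1: 19}      # source of truth: user id -> age
cache = {}        # cache: user id -> age

def read_age(user_id):
    """Cache Aside read: try the cache, fall back to the DB, then populate the cache."""
    if user_id in cache:
        return cache[user_id]          # cache hit: return directly
    value = db[user_id]                # cache miss: query the database
    cache[user_id] = value             # write the result to the cache
    return value

def write_age(user_id, age):
    """Cache Aside write: update the database, then delete the cache entry."""
    db[user_id] = age                  # 1. update the database record
    cache.pop(user_id, None)           # 2. delete the corresponding cache entry
```

The next read after a write misses the cache and repopulates it from the database, which is exactly how Cache Aside keeps the database as the source of truth.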

Deleting the cache before updating the database can also cause inconsistency: a concurrent read that misses the cache after the delete can repopulate it with the old database value before the update lands, leaving stale data in the cache.

Although Cache Aside is widely used, it may suffer from reduced hit rates under heavy write workloads. To mitigate this, you can either update the cache together with the database while holding a distributed lock, or set a short TTL on the cache entry so stale data expires quickly.
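The short-TTL mitigation can be sketched by storing an expiry timestamp alongside each cached value; an expired entry is treated as a miss so it gets reloaded from the database. The names and the TTL value here are illustrative.

```python
import time

cache = {}
TTL_SECONDS = 0.05   # short TTL so stale entries expire quickly (illustrative value)

def cache_put(key, value):
    """Store the value together with its expiry time."""
    cache[key] = (value, time.monotonic() + TTL_SECONDS)

def cache_get(key):
    """Return the cached value, or None if absent or expired."""
    entry = cache.get(key)
    if entry is None:
        return None
    value, expires_at = entry
    if time.monotonic() >= expires_at:
        del cache[key]     # expired: treat as a miss so the caller reloads from the DB
        return None
    return value
```

A real cache such as Redis provides this natively via per-key expiry, so an application would set a TTL on write rather than track timestamps itself.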

Read/Write Through Strategy

This strategy makes the cache the sole interface for applications; the cache synchronizes reads and writes with the database.

Write‑Through works as follows: if the key exists in the cache, update it and let the cache propagate the change to the database; if the key is missing (Write Miss), either allocate space in the cache (Write Allocate) or write directly to the database (No‑write Allocate). Typically No‑write Allocate is chosen to avoid an extra cache write.

Read‑Through simply checks the cache first; on a miss, the cache loads the data from the database and returns it.
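Both halves of the strategy can be sketched as a single cache layer that the application talks to exclusively; the layer keeps itself and the backing store in sync. Class and method names are illustrative, and a dict again stands in for the database.

```python
class CacheLayer:
    """Read/Write Through sketch: the application's sole data interface."""

    def __init__(self, db):
        self.db = db       # backing store (a dict stands in for the database)
        self.data = {}     # the cache itself

    def get(self, key):
        """Read-Through: on a miss, the cache loads from the DB itself."""
        if key not in self.data:
            self.data[key] = self.db[key]
        return self.data[key]

    def put(self, key, value):
        """Write-Through with No-write Allocate: update the cache only on a hit,
        and always write the database synchronously."""
        if key in self.data:
            self.data[key] = value   # key is cached: update it and propagate
        self.db[key] = value         # write-through to the database
```

Note that a `put` for an uncached key skips the cache entirely (No-write Allocate), avoiding the extra cache write the article mentions.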

Read/Write Through is less common with distributed caches like Redis or Memcached because they do not automatically sync with databases, but it is useful for local caches such as Guava's LoadingCache.

Because Write‑Through writes to the database synchronously, it can impact performance; an asynchronous alternative is the Write‑Back strategy.

Write Back Strategy

Write‑Back writes only to the cache and marks the cache line as dirty; the dirty data is written back to the backing store when the line is evicted or explicitly flushed.

In Write‑Miss cases, Write Allocate is used: the block is first loaded into the cache, and the write is then applied to the cache only, marking the entry dirty; storage is not touched until the entry is flushed.

Read operations check the cache; on a miss, they locate a cache block. If the block is dirty, it is flushed to storage before loading fresh data. After loading, the block is marked clean and the data is returned.
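The dirty-tracking and flush-on-evict behavior described above can be sketched with a tiny fixed-capacity cache. Writes touch only the cache; dirty entries reach the backing store when evicted or explicitly flushed. All names are illustrative, and the eviction policy is deliberately naive.

```python
class WriteBackCache:
    """Write-Back sketch: writes go only to the cache and are flushed lazily."""

    def __init__(self, db, capacity=2):
        self.db = db              # backing store (dict stands in for disk/DB)
        self.capacity = capacity
        self.data = {}            # cached entries
        self.dirty = set()        # keys whose cached value is newer than storage

    def _evict_one(self):
        key = next(iter(self.data))          # evict an arbitrary entry (naive policy)
        if key in self.dirty:                # dirty: flush to storage before discarding
            self.db[key] = self.data[key]
            self.dirty.discard(key)
        del self.data[key]

    def write(self, key, value):
        if key not in self.data and len(self.data) >= self.capacity:
            self._evict_one()
        self.data[key] = value               # write only to the cache...
        self.dirty.add(key)                  # ...and mark the entry dirty

    def read(self, key):
        if key not in self.data:             # miss: make room, then load from storage
            if len(self.data) >= self.capacity:
                self._evict_one()
            self.data[key] = self.db[key]    # a freshly loaded entry starts clean
        return self.data[key]

    def flush(self):
        """Explicitly persist all dirty entries."""
        for key in list(self.dirty):
            self.db[key] = self.data[key]
        self.dirty.clear()
```

The sketch also shows the risk the article warns about: data written but not yet flushed exists only in memory and is lost if the process dies.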

Write‑Back is more common in operating system page caches, log buffering, and message‑queue persistence, where the performance benefit outweighs the risk of data loss on power failure.

When using Write‑Back, consider buffering data in memory for a short period before flushing to slower storage, such as aggregating request latency metrics before writing logs.
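The latency-aggregation idea can be sketched as a small in-memory buffer that flushes a summary to a slower sink once enough samples accumulate. The class, threshold, and sink here are hypothetical stand-ins for a real metrics pipeline.

```python
class LatencyBuffer:
    """Buffer latency samples in memory; flush an aggregate to a slow sink."""

    def __init__(self, flush_every, sink):
        self.flush_every = flush_every   # flush after this many samples
        self.sink = sink                 # callable that persists one summary record
        self.samples = []

    def record(self, latency_ms):
        self.samples.append(latency_ms)
        if len(self.samples) >= self.flush_every:
            self.flush()

    def flush(self):
        if not self.samples:
            return
        avg = sum(self.samples) / len(self.samples)
        self.sink({"count": len(self.samples), "avg_ms": avg})
        self.samples.clear()
```

One write to the sink per batch replaces one write per request, which is the same trade as Write-Back: better throughput in exchange for buffered data being lost on a crash.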

Summary

This article covered several cache strategies and their suitable scenarios:

Cache Aside – the most common for distributed caches; update DB then delete cache.

Read/Write Through and Write Back – require cache components that support synchronization, useful for custom local caches.

Write Back – a classic computer‑architecture technique; write‑only to cache and asynchronously persist.

End‑of‑Article Giveaway

Two copies of "Large‑Scale Website Architecture in Practice" are offered for free. To enter, comment on the article; two random commenters will receive a book. Adding the author’s WeChat also qualifies for an extra giveaway.

Deadline: 2022/03/31 08:00.

Tags: Backend, Performance, Caching, cache aside, read-through, write-back
Written by Code Ape Tech Column

Former Ant Group P8 engineer and pure technologist, sharing full‑stack Java content plus interview and career advice through this column. Site: java-family.cn
