Mastering Cache Strategies: When to Use LRU, LFU, and Consistency Techniques
This article explains why caching is essential for high‑performance data retrieval, compares LRU and LFU eviction policies, presents three Redis‑based cache implementations, and discusses consistency challenges and solutions such as eviction ordering, consistent hashing, and delayed eviction in distributed systems.
Cache Strategies Overview
When data volume grows, retrieval performance degrades and hot‑cold data distribution becomes uneven. Introducing a cache such as Redis can alleviate latency, but choosing the right eviction policy and handling consistency are critical.
01 Cache Policies
LRU (Least Recently Used): evicts the entries that have gone longest without being accessed. It works well for hot data, but a batch operation such as a bulk scan can flush hot entries and cause a sudden drop in hit rate.
LFU (Least Frequently Used): evicts the entries with the lowest access frequency. It keeps a counter per key, increments it on every hit, and periodically removes low-frequency entries. A minimal sketch of both policies follows.
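To make the two policies concrete, here is a minimal in-process sketch (illustrative only, not the article's production code): LRU orders keys by recency with an ordered map, while LFU keeps a per-key hit counter. The class names and capacity handling are assumptions for demonstration.

```python
from collections import Counter, OrderedDict

class LRUCache:
    """Minimal LRU: evict the key that was touched longest ago."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)         # mark as most recently used
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # evict the least recently used key

class LFUCache:
    """Minimal LFU: evict the key with the lowest hit counter."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = {}
        self.hits = Counter()

    def get(self, key):
        if key not in self.data:
            return None
        self.hits[key] += 1                # increment frequency on every hit
        return self.data[key]

    def put(self, key, value):
        if key not in self.data and len(self.data) >= self.capacity:
            coldest = min(self.data, key=self.hits.__getitem__)
            del self.data[coldest]         # evict the least frequently used key
            del self.hits[coldest]
        self.data[key] = value
        self.hits[key] += 1
```

Redis itself exposes both behaviors through the maxmemory-policy setting (allkeys-lru, and allkeys-lfu since Redis 4.0), so an in-process implementation like this is only needed when the cache lives inside the application.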
Choosing a policy depends on the workload's access pattern; the sections below illustrate three Redis-based implementations (A, B, C).
Solution A
Read‑through: on a cache miss, read from the database, write the result to the cache with an expiration time, and serve it. This approach may suffer a “first‑access” penalty but yields high hit rates for stable hot data.
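A sketch of the read path, assuming a redis-py client and a placeholder load_from_db function standing in for the actual database query (both names and the 300-second TTL are illustrative):

```python
import json

import redis

r = redis.Redis(decode_responses=True)
CACHE_TTL = 300  # expiration in seconds; tune to the workload

def read_through(key, load_from_db):
    """Solution A: serve from cache; on a miss, load from the database,
    repopulate the cache with a TTL, and return the value."""
    cached = r.get(key)
    if cached is not None:
        return json.loads(cached)                    # cache hit
    value = load_from_db(key)                        # first access pays the DB round trip
    if value is not None:
        r.set(key, json.dumps(value), ex=CACHE_TTL)  # repopulate with expiration
    return value
```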
Solution B
Instead of updating the cache on write, evict the cached entry and let the next read repopulate it. An asynchronous update path can also be used.
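The corresponding write path, continuing the previous sketch (r and save_to_db are the same kind of placeholders). Note that the entry is evicted before the database write, which is the ordering recommended in the consistency section below:

```python
def write_invalidate(key, value, save_to_db):
    """Solution B: evict the cached entry on write and let the next
    read_through call repopulate it from the database."""
    r.delete(key)           # evict first: a failure after this point leaves
    save_to_db(key, value)  # only a cache miss, never a stale cached value
```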
Solution C
Introduce an asynchronous module: on write, evict the old cache entry; on a cache miss, send a message to a queue, and let a worker write the data into the cache.
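A sketch of the asynchronous module, reusing the client and TTL from the Solution A sketch. A Redis list stands in here for whatever message queue the deployment actually uses; the queue name and worker loop are illustrative:

```python
QUEUE = "cache:repopulate"  # illustrative queue name

def read_async(key):
    """Solution C read path: on a miss, enqueue a repopulation task
    instead of querying the database inline."""
    cached = r.get(key)
    if cached is not None:
        return json.loads(cached)
    r.lpush(QUEUE, key)  # hand the miss to the worker; this call returns a miss
    return None

def repopulate_worker(load_from_db):
    """Queue consumer: loads missed keys from the database into the cache."""
    while True:
        _, key = r.brpop(QUEUE)  # blocks until a missed key is enqueued
        value = load_from_db(key)
        if value is not None:
            r.set(key, json.dumps(value), ex=CACHE_TTL)
```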
02 Cache Consistency Issues
Non‑atomic operations between the cache and the database can lead to stale or dirty data. To guarantee eventual consistency, the recommended order is to evict the cache first, then write to the database: if the process fails between the two steps, the worst outcome is a cache miss that the next read repairs, rather than a cached value that contradicts the database.
In distributed environments, concurrent reads and writes can still cause inconsistency. One approach is consistent hashing: route all operations for a given key to the same node, so that reads and writes of that key are serialized locally.
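A minimal consistent-hash ring, fully self-contained; the node names and virtual-node count are illustrative:

```python
import bisect
import hashlib

class HashRing:
    """Consistent-hash ring: requests for the same key always land on the
    same node, so reads and writes of that key can be serialized there."""
    def __init__(self, nodes, replicas=100):
        self.ring = sorted(
            (self._hash(f"{node}#{i}"), node)
            for node in nodes
            for i in range(replicas)   # virtual nodes smooth the distribution
        )
        self.hashes = [h for h, _ in self.ring]

    @staticmethod
    def _hash(s):
        return int(hashlib.md5(s.encode()).hexdigest(), 16)

    def node_for(self, key):
        idx = bisect.bisect(self.hashes, self._hash(key)) % len(self.hashes)
        return self.ring[idx][1]

ring = HashRing(["cache-1", "cache-2", "cache-3"])
print(ring.node_for("user:42"))  # always resolves to the same node
```

Because every request for a given key resolves to the same node, that node can apply the key's reads and writes in a single order, which is the local serialization the technique relies on.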
Another approach is “delayed eviction”: set a TTL on the cache entry (e.g., 5 seconds) instead of immediate eviction, allowing pending database writes to complete before the stale entry expires.
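A sketch of delayed eviction on the write path, again reusing r and the placeholder save_to_db; the 5-second delay mirrors the example above and should be tuned to how long database writes actually take:

```python
EVICTION_DELAY = 5  # seconds; long enough for in-flight DB writes to finish

def write_with_delayed_eviction(key, value, save_to_db):
    """Shorten the entry's TTL instead of deleting it outright, so the
    possibly-stale value expires shortly after concurrent writes settle."""
    r.expire(key, EVICTION_DELAY)  # readers may briefly see slightly stale data
    save_to_db(key, value)
```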
Both techniques address cache‑database inconsistency in distributed deployments without significant cost.
Baidu Maps Tech Team
Want to see the Baidu Maps team's technical insights, learn how top engineers tackle tough problems, or join the team? Follow the Baidu Maps Tech Team to get the answers you need.
