Local and Distributed Caching: Concepts and Implementations
Caching in high-traffic e-commerce systems ranges from simple in-JVM HashMap caches to Guava, Caffeine, and distributed Redis stores. These caches reduce latency by applying eviction policies such as FIFO, LRU, LFU, or W-TinyLFU, and employ consistency strategies like expiration, write-through, and cache-aside to mitigate breakdown, avalanche, and penetration problems.
In high‑traffic e‑commerce systems, response time is critical, so caching is widely used to reduce database access and improve latency. However, cache design introduces complexity in read/write strategies and eviction policies.
Cache eviction algorithms
Common eviction policies include:
FIFO (First‑In‑First‑Out): removes the oldest entry.
LFU (Least Frequently Used): removes entries with the lowest hit count.
LRU (Least Recently Used): removes the least recently accessed entry.
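The LRU policy in particular can be sketched in a few lines on top of LinkedHashMap's access-order mode, which moves an entry to the tail on every access; the class name LruCache below is illustrative, not from any library:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal LRU cache sketch: LinkedHashMap in access-order mode keeps the
// least recently accessed entry at the head, and removeEldestEntry evicts
// it once the capacity is exceeded.
class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    LruCache(int capacity) {
        super(16, 0.75f, true); // true = order by access, not insertion
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity; // evict the least recently used entry
    }
}
```

With capacity 2, putting "a" and "b", reading "a", then putting "c" evicts "b", since "b" becomes the least recently accessed entry.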
Local cache
Implemented inside the JVM (e.g., using a HashMap), a local cache offers fast access but limited capacity and no sharing across processes.
private static Map<String, Object> CACHE_MAP;

public static Object getCacheMapValue(String key) {
    return getCacheMap().get(key);
}

public static void putCache(String key, Object value) {
    getCacheMap().put(key, value);
}

public static void removeCache(String key) {
    getCacheMap().remove(key);
}

public static Map<String, Object> getCacheMap() {
    if (CACHE_MAP == null) {
        CACHE_MAP = new HashMap<>();
    }
    return CACHE_MAP;
}

public static void main(String[] args) {
    putCache("1", "test1");
    putCache("2", "test2");
    putCache("3", "test3");
    System.out.println(getCacheMapValue("1")); // test1
    removeCache("1");
    System.out.println(getCacheMapValue("1")); // null
}

Guava Cache
Guava provides a builder‑style cache with optional expiration, maximum size, and concurrency settings. It uses ConcurrentHashMap internally and supports three eviction strategies (size‑based, time‑based, reference‑based).
Cache<String, String> cache = CacheBuilder.newBuilder().build();
cache.put("word", "Hello World");
System.out.println(cache.getIfPresent("word")); // Hello World

Cache<String, String> cache1 = CacheBuilder.newBuilder()
        .concurrencyLevel(8)
        .expireAfterWrite(10, TimeUnit.SECONDS)
        .initialCapacity(10)
        .maximumSize(15)
        .build();

When the maximum size is exceeded, Guava evicts entries using an LRU policy.
Caffeine
Caffeine is a Java‑8 rewrite of Guava’s cache with higher performance and W‑TinyLFU eviction (a hybrid of LFU and LRU). It offers both Cache and LoadingCache APIs.
// automatic loading
LoadingCache<String, String> cache = Caffeine.newBuilder()
        .build(key -> load(key)); // load(...) stands for your data-loading method
System.out.println(cache.get("key1")); // value from loader

// manual loading
Cache<String, Integer> cache2 = Caffeine.newBuilder().build();
Integer age = cache2.get("ZhangSan", k -> 18);
System.out.println(age); // 18

// async loading
AsyncCache<String, Integer> async = Caffeine.newBuilder().buildAsync();
CompletableFuture<Integer> future = async.get("ZhangSan", k -> 18);
System.out.println(future.get()); // 18

Distributed cache (Redis)
Redis is an in-memory key-value store whose command execution is single-threaded; it can run standalone or be clustered for high availability. It supports persistence via RDB snapshots and AOF logs.
redisTemplate.opsForValue(); // string ops
redisTemplate.opsForHash(); // hash ops
redisTemplate.opsForList(); // list ops
redisTemplate.opsForSet(); // set ops
redisTemplate.opsForZSet(); // sorted set ops

Typical Redis service implementation in Spring:
@Service
public class RedisServiceImpl implements CacheService {

    @Resource
    private RedisTemplate<String, String> redisTemplate;

    @Override
    public String getFromString(String key) {
        return redisTemplate.opsForValue().get(key);
    }

    @Override
    public void setString(String key, String value, Long timeout) {
        redisTemplate.opsForValue().set(key, value, timeout, TimeUnit.SECONDS);
    }

    @Override
    public boolean delString(String key) {
        // delete(...) returns a Boolean that may be null; unwrap it safely
        return Boolean.TRUE.equals(redisTemplate.delete(key));
    }

    // hash operations, lock utilities, etc.
}

Cache consistency
Setting an expiration time on cache entries is a simple way to achieve eventual consistency between the DB and the cache. When expiration is not used, common patterns include write-through, write-behind, and cache-aside (update the database first, then delete the cached entry so the next read re-loads the fresh value).
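The cache-aside pattern described above can be sketched as follows; all class and method names here are illustrative, with plain maps standing in for the real cache and database:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Cache-aside sketch: the cache sits beside the data store, and the
// application code manages both paths explicitly.
class CacheAside<K, V> {
    private final Map<K, V> cache = new ConcurrentHashMap<>();
    private final Map<K, V> db; // stands in for the real database

    CacheAside(Map<K, V> db) {
        this.db = db;
    }

    // Read path: a cache hit returns immediately; a miss loads from the
    // DB and populates the cache for subsequent reads.
    V read(K key) {
        return cache.computeIfAbsent(key, db::get);
    }

    // Write path: update the DB first, then invalidate the cached copy
    // so the next read re-loads the fresh value.
    void write(K key, V value) {
        db.put(key, value);
        cache.remove(key);
    }
}
```

Deleting (rather than updating) the cached entry on writes keeps the write path simple and avoids caching a value that is never read again.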
Common cache problems
Cache breakdown: a hot key expires, and the flood of concurrent misses hits the DB. Mitigation: never expire hot data, or serialize the rebuild with a distributed lock.
Cache avalanche: many keys expire simultaneously, overwhelming the DB. Mitigation: stagger expirations with random offsets, use clustering, or multi-level caches.
Cache penetration: requests for non-existent keys hit the DB repeatedly. Mitigation: input validation, Bloom filters, or caching a null placeholder.
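The null-placeholder mitigation for penetration can be sketched as below; the class name and counter are illustrative, and a real implementation would give the placeholder a short TTL, which is omitted here:

```java
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;

// Null-placeholder sketch: a key that misses the DB is cached as an
// empty Optional, so repeated lookups for the same non-existent key
// never reach the DB again.
class NullCachingStore {
    private final Map<String, Optional<String>> cache = new ConcurrentHashMap<>();
    private final Map<String, String> db; // stands in for the real database
    int dbHits = 0;                       // counts simulated DB round trips

    NullCachingStore(Map<String, String> db) {
        this.db = db;
    }

    String get(String key) {
        return cache.computeIfAbsent(key, k -> {
            dbHits++;                              // simulate a DB query
            return Optional.ofNullable(db.get(k)); // empty = cached "null"
        }).orElse(null);
    }
}
```

The second lookup of a missing key is served entirely from the cached placeholder, so the DB is queried only once per key.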
Conclusion
Cache strategies must be chosen based on specific business scenarios; there is no one‑size‑fits‑all solution.
DeWu Technology