Understanding In-Memory Caching with Guava LoadingCache and LRU Implementation in Java
This article explains the fundamentals of in‑memory caching and how it differs from buffering, introduces Guava's LoadingCache configuration and operations, discusses eviction strategies, illustrates common cache algorithms (FIFO, LRU, LFU), walks through a simple LRU implementation built on LinkedHashMap, and offers practical guidelines for when and how to apply caching to improve backend performance.
In the introductory section the author contrasts buffering with caching, noting that caching is a widely used optimization technique that bridges components with vastly different speeds, such as CPU caches and Redis‑style distributed caches.
The article then focuses on in‑process (heap) caches in Java, mentioning popular options such as Ehcache, Caffeine, Guava, and the JCache (JSR‑107) API. Guava's LoadingCache is highlighted as a convenient heap‑cache solution.
Guava LoadingCache configuration
Typical Maven dependency:
<dependency>
<groupId>com.google.guava</groupId>
<artifactId>guava</artifactId>
<version>31.1-jre</version>
</dependency>

Key parameters include maximumSize (capacity limit), initialCapacity, and concurrencyLevel. Two ways to populate the cache are demonstrated:
Manual put of objects retrieved from a database.
Automatic loading via a CacheLoader implementation, which lazily loads values on first access.
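The manual approach can be sketched as follows (loadFromDatabase is a hypothetical helper standing in for a real database query; a plain Cache without a CacheLoader leaves population to the caller):

```java
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;

public class ManualCacheDemo {

    public static void main(String[] args) {
        // Plain Cache (no CacheLoader): callers populate it explicitly.
        Cache<String, String> cache = CacheBuilder.newBuilder()
                .maximumSize(100)
                .build();

        String key = "user:42";
        String value = cache.getIfPresent(key);
        if (value == null) {
            // Cache miss: fetch from the backing store and put it in.
            value = loadFromDatabase(key);
            cache.put(key, value);
        }
        System.out.println(value);
    }

    // Stand-in for a real database query.
    static String loadFromDatabase(String key) {
        return key + ".fromDb";
    }
}
```

The drawback of this pattern is that every call site repeats the miss-then-load logic, which is exactly what a CacheLoader centralizes.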
Example of a loading cache with lazy loading:
public static void main(String[] args) {
    LoadingCache<String, String> lc = CacheBuilder
            .newBuilder()
            .build(new CacheLoader<String, String>() {
                @Override
                public String load(String key) throws Exception {
                    return slowMethod(key);
                }
            });
}

static String slowMethod(String key) throws Exception {
    Thread.sleep(1000);
    return key + ".result";
}

Removal listeners can be attached with .removalListener(notification -> System.out.println(notification)) to monitor evictions.
Eviction strategies
Capacity‑based eviction using LRU (least‑recently‑used) when the cache is full.
Time‑based eviction via expireAfterWrite or expireAfterAccess.
JVM‑GC‑based eviction using weak/soft references (weakKeys, weakValues, softValues).
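A minimal sketch combining the three strategies on one builder (the 50 ms expiry is chosen only to make the effect observable in a short demo):

```java
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
import java.util.concurrent.TimeUnit;

public class EvictionDemo {

    public static void main(String[] args) throws InterruptedException {
        Cache<String, String> cache = CacheBuilder.newBuilder()
                .maximumSize(1000)                           // capacity-based eviction
                .expireAfterWrite(50, TimeUnit.MILLISECONDS) // time-based eviction
                .softValues()                                // GC-based: values reclaimable under memory pressure
                .build();

        cache.put("k", "v");
        System.out.println(cache.getIfPresent("k")); // "v" while the entry is fresh
        Thread.sleep(100);
        System.out.println(cache.getIfPresent("k")); // null after the write-expiry elapses
    }
}
```

Note that Guava checks expiry lazily on access rather than with a background sweeper thread, so expired entries may linger until the next read or write.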
The article warns about memory pitfalls when cache entries are large, describing how excessive heap caching can trigger frequent GC cycles and degrade performance.
Cache algorithms
Three classic algorithms are described:
FIFO – removes the oldest inserted entry.
LRU – removes the least recently used entry (most common).
LFU – removes the least frequently used entry, breaking ties with recency.
For a hands‑on LRU example, the author shows a minimal implementation using LinkedHashMap :
public class LRU<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    public LRU(int capacity) {
        super(16, 0.75f, true); // accessOrder = true: iteration order follows access order
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity;
    }
}

Although not thread‑safe, this snippet illustrates the core idea of LRU via accessOrder=true and overriding removeEldestEntry.
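A quick self-contained usage sketch (pure JDK) makes the access-order behavior concrete: reading "a" promotes it, so "b" becomes the eldest entry and is evicted when a third key arrives.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LruDemo {

    // Minimal LRU map, same idea as the article's LinkedHashMap-based snippet.
    static class LRU<K, V> extends LinkedHashMap<K, V> {
        private final int capacity;

        LRU(int capacity) {
            super(16, 0.75f, true); // accessOrder = true
            this.capacity = capacity;
        }

        @Override
        protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
            return size() > capacity;
        }
    }

    public static void main(String[] args) {
        LRU<String, Integer> lru = new LRU<>(2);
        lru.put("a", 1);
        lru.put("b", 2);
        lru.get("a");    // touch "a": "b" is now the least recently used
        lru.put("c", 3); // capacity exceeded -> "b" is evicted
        System.out.println(lru.keySet()); // [a, c]
    }
}
```

For production use, the same idea would need external synchronization (e.g. Collections.synchronizedMap) or a purpose-built concurrent cache such as Guava's or Caffeine's.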
The discussion then shifts to operating‑system level caching (the "cached" column in free ), readahead prefetching, and how file‑system caches accelerate I/O.
When to use a cache
Data exhibits hot‑spot access patterns.
Read‑heavy workloads dominate writes.
Downstream services have limited capacity.
Introducing a cache does not compromise correctness.
Key metrics such as hit rate are emphasized; a hit rate above 50% is generally considered worthwhile, while one below 10% suggests the cache may be unnecessary.
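Hit rate can be measured directly with Guava's built-in statistics, as a sketch: stats collection must be enabled explicitly with recordStats(), after which cache.stats() exposes counters such as hitCount, missCount, and hitRate.

```java
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.CacheStats;
import com.google.common.cache.LoadingCache;

public class StatsDemo {

    public static void main(String[] args) throws Exception {
        LoadingCache<String, String> cache = CacheBuilder.newBuilder()
                .recordStats() // without this, stats() reports all zeros
                .build(new CacheLoader<String, String>() {
                    @Override
                    public String load(String key) {
                        return key + ".result";
                    }
                });

        cache.get("k"); // first access: miss, triggers load
        cache.get("k"); // second access: hit
        CacheStats stats = cache.stats();
        System.out.printf("hits=%d misses=%d hitRate=%.2f%n",
                stats.hitCount(), stats.missCount(), stats.hitRate());
        // hits=1 misses=1 hitRate=0.50
    }
}
```

Sampling these counters periodically (rather than once) is what makes the 50%/10% rules of thumb above actionable in a running service.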
Finally, the article summarizes that Guava LoadingCache, cache eviction policies, and simple LRU implementations are frequent interview topics, and that proper monitoring (e.g., recordStats ) and sizing are essential for effective caching.
Top Architect
Top Architect focuses on sharing practical architecture knowledge, covering enterprise, system, website, large‑scale distributed, and high‑availability architectures, plus architecture adjustments using internet technologies. We welcome idea‑driven, sharing‑oriented architects to exchange and learn together.