Inside Guava Cache: Segments, Locks, LRU and Why It Lags Behind Caffeine
This article dissects Guava Cache's source code, explaining its segmented lock architecture, data structures, put/get implementations, cleanup and eviction mechanisms, and compares its performance and design choices with the more modern Caffeine cache library.
Guava Cache Architecture Overview
Guava Cache uses a segmented design: the cache is divided into multiple Segment objects, each protected by a ReentrantLock. Each segment stores entries in an AtomicReferenceArray of buckets, with entries chained into a singly-linked list through a final next field. Each segment also maintains three queues: accessQueue tracks access order, writeQueue tracks write order, and recencyQueue buffers recent reads before they are drained into accessQueue. Together these queues drive LRU eviction.
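To make the segmented-lock idea concrete, here is a toy sketch (not Guava's actual code; class and field names are ours): the key's hash selects one of a power-of-two number of segments, and a write takes only that segment's lock, so writes to different segments do not contend. For brevity each segment wraps a plain HashMap instead of a bucket array, and reads are left unsynchronized, which is a simplification of what Guava actually does.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantLock;

class ToySegmentedCache<K, V> {
    static final class Segment<K, V> {
        final ReentrantLock lock = new ReentrantLock();
        final Map<K, V> table = new HashMap<>(); // stands in for the bucket array

        V put(K key, V value) {
            lock.lock(); // only this segment is locked for the write
            try {
                return table.put(key, value);
            } finally {
                lock.unlock();
            }
        }

        V get(K key) {
            return table.get(key); // simplification: Guava's lock-free read is far more careful
        }
    }

    final Segment<K, V>[] segments;
    final int mask;

    @SuppressWarnings("unchecked")
    ToySegmentedCache(int segmentCount) { // segmentCount must be a power of two
        segments = (Segment<K, V>[]) new Segment[segmentCount];
        for (int i = 0; i < segmentCount; i++) segments[i] = new Segment<>();
        mask = segmentCount - 1;
    }

    Segment<K, V> segmentFor(int hash) {
        return segments[(hash >>> 16) & mask]; // route on high bits of the hash
    }

    V put(K key, V value) {
        return segmentFor(key.hashCode()).put(key, value);
    }

    V get(K key) {
        return segmentFor(key.hashCode()).get(key);
    }
}
```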
Key Data Structures
Segment: holds the bucket array, lock, counters, and queues.
AtomicReferenceArray<ReferenceEntry<K,V>>: the bucket table.
ReferenceEntry implementations (StrongEntry, StrongAccessWriteEntry) store the key, hash, value reference, and timestamps.
Queues implement LRU eviction based on access/write timestamps.
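Guava wires up its access and write queues by hand, but the policy they encode is classic LRU eviction. As an illustration only (this is not how Guava implements it), the same policy can be expressed with the JDK's LinkedHashMap in access-order mode:

```java
import java.util.LinkedHashMap;
import java.util.Map;

class ToyLruCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    ToyLruCache(int maxEntries) {
        super(16, 0.75f, true); // accessOrder = true: iteration order is least- to most-recently used
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxEntries; // evict the LRU entry once over capacity
    }
}
```

Reading an entry moves it to the tail of the order, so the head is always the least-recently-used candidate, which is exactly the role of Guava's accessQueue.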
Put Operation
The put method hashes the key, routes to the appropriate segment via segmentFor, locks the segment, performs pre-write cleanup, expands the table if needed, and either updates an existing entry or inserts a new one using head insertion. It records the write, updates the LRU queues, and triggers eviction when the segment exceeds its weight limit.
public V put(K key, V value) {
  checkNotNull(key);
  checkNotNull(value);
  int hash = hash(key);
  return segmentFor(hash).put(key, hash, value, false);
}
V Segment.put(K key, int hash, V value, boolean onlyIfAbsent) {
  lock();
  try {
    long now = map.ticker.read();
    preWriteCleanup(now);
    // expand if needed, locate bucket, insert or replace entry
    // update queues and possibly evict
  } finally {
    unlock();
    postWriteCleanup();
  }
}

Get Operation
The get method also routes to a segment but reads without acquiring the segment lock. It checks for expiration, records the read in recencyQueue, and returns the value if present. If the entry is missing or still loading, it creates a LoadingValueReference, loads the value synchronously, and stores it.
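The shape of this read path, a lock-free fast path with a synchronous load on miss, can be sketched as follows. This is a simplification, not Guava's code: real Segment.get also checks expiration and records the read, and LoadingValueReference lets concurrent readers wait on a single in-flight load rather than blocking on one coarse monitor.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

class ToyLoadingCache<K, V> {
    private final Map<K, V> map = new ConcurrentHashMap<>();
    private final Function<K, V> loader;

    ToyLoadingCache(Function<K, V> loader) {
        this.loader = loader;
    }

    V get(K key) {
        V value = map.get(key);           // fast path: no lock taken
        if (value != null) return value;
        synchronized (this) {             // slow path: miss, load under a lock
            value = map.get(key);         // re-check in case another thread loaded it
            if (value == null) {
                value = loader.apply(key); // synchronous load, analogous to LoadingValueReference
                map.put(key, value);
            }
            return value;
        }
    }
}
```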
public V get(K key) throws ExecutionException {
  int hash = hash(checkNotNull(key));
  return segmentFor(hash).get(key, hash, defaultLoader);
}

V Segment.get(K key, int hash, CacheLoader<? super K, V> loader) {
  // try the fast path without the lock; on a miss, load synchronously under the lock
}

Lifecycle Management Methods
preWriteCleanup: runs under the lock, drains reference queues, expires entries, and resets readCount.
expireEntries: removes entries whose access or write timestamps exceed the configured expiration.
expand: doubles the bucket array size, rehashes entries, and copies or deep-copies nodes.
evictEntries: uses accessQueue to evict the least-recently-used entries when the weight limit is exceeded.
postWriteCleanup: processes pending removal notifications.
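The timestamp test behind expireEntries amounts to a simple comparison against the configured thresholds. A sketch (field and method names are ours, not Guava's): an entry is expired when its last access or last write is older than the corresponding timeout.

```java
import java.util.concurrent.TimeUnit;

class ExpirySketch {
    final long expireAfterAccessNanos;
    final long expireAfterWriteNanos;

    ExpirySketch(long accessTimeout, long writeTimeout, TimeUnit unit) {
        this.expireAfterAccessNanos = unit.toNanos(accessTimeout);
        this.expireAfterWriteNanos = unit.toNanos(writeTimeout);
    }

    // An entry is expired if EITHER timeout has elapsed since the relevant timestamp.
    boolean isExpired(long lastAccessNanos, long lastWriteNanos, long nowNanos) {
        return (nowNanos - lastAccessNanos >= expireAfterAccessNanos)
            || (nowNanos - lastWriteNanos >= expireAfterWriteNanos);
    }
}
```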
Comparison with Caffeine
Guava's segmented-lock design mirrors the pre-JDK-8 ConcurrentHashMap and incurs higher contention than Caffeine, which builds on the modern ConcurrentHashMap and employs CAS, lightweight synchronization, and a TinyLFU eviction policy. Caffeine also provides a timer wheel for expiration and avoids cache-line false sharing, offering superior performance for high-concurrency workloads.
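The core idea behind TinyLFU admission can be shown in a very small sketch (illustrative only; Caffeine actually uses a compact CountMin-style frequency sketch with 4-bit counters and periodic aging, not a HashMap): keep approximate access counts, and when the cache is full, admit a new candidate only if it has been seen more often than the entry it would displace.

```java
import java.util.HashMap;
import java.util.Map;

class TinyLfuSketch<K> {
    // Exact counts for illustration; the real sketch trades accuracy for constant memory.
    private final Map<K, Integer> counts = new HashMap<>();

    void recordAccess(K key) {
        counts.merge(key, 1, Integer::sum);
    }

    int frequency(K key) {
        return counts.getOrDefault(key, 0);
    }

    // Admission policy: replace the eviction victim only if the candidate is "hotter".
    boolean admit(K candidate, K victim) {
        return frequency(candidate) > frequency(victim);
    }
}
```

This is what lets a frequency-based policy protect hot entries from being flushed out by one-off scans, a workload where plain LRU performs poorly.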
When to Choose Guava
For applications with modest concurrency requirements or where adding an extra dependency is undesirable, Guava Cache remains a viable option because it is already bundled with many projects.
JD Tech Talk
Official JD Tech public account delivering best practices and technology innovation.