Unveiling Guava Cache Internals: Why It Lags Behind Caffeine
This article dissects Guava Cache's source code, explaining its segment‑based locking, data structures, put/get implementations, cleanup and eviction mechanisms, and then contrasts its performance and design choices with the more modern Caffeine cache, highlighting why Guava falls short.
In this article we analyze Guava Cache's source code to understand its implementation principles and explain why its performance is inferior to Caffeine Cache.
Guava Cache uses a segmented design in which each Segment extends ReentrantLock and guards its own portion of the cache. The cache is divided into multiple segments (for example, four segments for a cache with maximum size 1000), each managing its own entries with an AtomicReferenceArray of buckets plus three queues: accessQueue and writeQueue, which maintain LRU order by access time and write time respectively, and recencyQueue, a buffer that records reads and is later replayed into accessQueue under the segment lock.
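Before looking at the Segment class itself, the routing from a key's hash to a segment can be sketched roughly as follows. This is a simplified illustration, not Guava's actual code: the class and method names are invented, and the bit-mixing function merely stands in for Guava's hash-smearing step. It assumes the segment count is a power of two, as in Guava, so the top bits of the smeared hash select the segment.

```java
// Illustrative sketch of Guava-style segment routing (names are hypothetical).
public class SegmentRouting {

    // Mix the bits of a possibly poor-quality hashCode, in the spirit of
    // Guava's smearing step (not the same constants or algorithm).
    static int smear(int h) {
        h ^= (h >>> 16);
        h *= 0x85ebca6b;
        h ^= (h >>> 13);
        return h;
    }

    // With segmentCount a power of two, use the top bits of the smeared
    // hash as the segment index, analogous to (hash >>> segmentShift) & segmentMask.
    static int segmentFor(int hash, int segmentCount) {
        int segmentShift = 32 - Integer.numberOfTrailingZeros(segmentCount);
        return (hash >>> segmentShift) & (segmentCount - 1);
    }
}
```

Because the index is derived purely from the key's hash, every operation on the same key lands on the same segment, so two writes to different segments never contend for the same lock.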
```java
static class Segment<K, V> extends ReentrantLock { ... }
```

Each entry is stored in a singly linked list within a bucket; new entries are inserted at the head (head-insertion). The next field of an entry is final, so bucket chains can be traversed without locking; removing an entry from the middle of a chain is done by copying the entries ahead of it onto the surviving tail rather than mutating links.
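The head-insertion chain with an immutable next pointer can be sketched as below. This is a minimal illustration, not Guava's entry classes: the Chain and Entry names are invented, and it shows only the structural idea that removal copies the preceding entries (in reverse order) onto the shared tail.

```java
// Minimal sketch of a bucket chain whose `next` field is final (hypothetical names).
public class Chain {

    static final class Entry {
        final String key;
        final Entry next; // final: the link can never be re-pointed after construction
        Entry(String key, Entry next) { this.key = key; this.next = next; }
    }

    // Head-insertion: the new entry becomes the head of the bucket.
    static Entry put(Entry head, String key) {
        return new Entry(key, head);
    }

    // Because `next` is final, removal copies the entries ahead of the removed
    // one onto the surviving tail; entries after it are shared unchanged.
    static Entry remove(Entry head, String key) {
        for (Entry e = head; e != null; e = e.next) {
            if (e.key.equals(key)) {
                Entry tail = e.next;
                for (Entry c = head; c != e; c = c.next) {
                    tail = new Entry(c.key, tail); // copied entries end up reversed
                }
                return tail;
            }
        }
        return head; // key not present: chain unchanged
    }

    // Helper: concatenate keys in chain order, for inspection.
    static String keys(Entry head) {
        StringBuilder sb = new StringBuilder();
        for (Entry e = head; e != null; e = e.next) sb.append(e.key);
        return sb.toString();
    }
}
```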
```java
static class StrongEntry<K, V> extends AbstractReferenceEntry<K, V> {
    final K key;
    final int hash;
    final ReferenceEntry<K, V> next;
    ...
}
```

The put operation first locks the target segment, performs pre-write cleanup, expands the segment's table if necessary, then either updates an existing entry or creates a new one. It records the write timestamp, updates the LRU queues, and triggers eviction if the segment exceeds its weight limit.
```java
public V put(K key, V value) {
    lock();
    try {
        long now = map.ticker.read();
        preWriteCleanup(now);
        ...
        setValue(newEntry, key, value, now);
        evictEntries(newEntry);
    } finally {
        unlock();
        postWriteCleanup();
    }
}
```

Pre-write cleanup (preWriteCleanup) drains the reference queues and expires entries based on their access or write timestamps. It also resets the readCount counter, which is used to trigger cleanup after many reads occur without a write.
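The timestamp comparison behind expiration can be illustrated with a small sketch. This is not Guava's isExpired implementation; the class name and the hard-coded one-second TTL are assumptions for the example, and only expire-after-write is shown (expire-after-access works the same way against the access timestamp).

```java
// Hedged sketch of timestamp-based expiration (hypothetical names and TTL).
public class ExpirySketch {

    // Assumed configuration: entries expire one second after they are written.
    static final long EXPIRE_AFTER_WRITE_NANOS = 1_000_000_000L;

    // An entry is expired once "now" is at or past its write time plus the TTL.
    static boolean isExpired(long writeTimeNanos, long nowNanos) {
        return nowNanos - writeTimeNanos >= EXPIRE_AFTER_WRITE_NANOS;
    }
}
```

Working in nanoseconds from a Ticker, as Guava does, keeps the check a single subtraction and comparison, so it is cheap enough to run on every read and during every pre-write cleanup.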
```java
void preWriteCleanup(long now) {
    runLockedCleanup(now);
}
```

When a segment grows, the expand method doubles the bucket array size and rehashes existing entries into the new table, reusing trailing runs of entries that land in the same new bucket and copying the remainder with head-insertion.
```java
void expand() {
    AtomicReferenceArray<ReferenceEntry<K, V>> oldTable = table;
    ...
    AtomicReferenceArray<ReferenceEntry<K, V>> newTable = newEntryArray(oldCapacity << 1);
    ...
}
```

Eviction (evictEntries) is driven by the LRU queues. If the segment's total weight exceeds its maximum, entries are removed from the head of accessQueue, the least recently accessed end, until the weight constraint is satisfied.
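The weight-bounded eviction loop can be approximated with a small self-contained sketch. This is not Guava's code: the class is hypothetical, and a LinkedHashMap in access order stands in for the intrusive accessQueue, with no segment locking shown.

```java
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of weight-bounded LRU eviction, analogous in spirit to evictEntries
// draining the head of accessQueue until totalWeight fits (hypothetical names).
public class WeightLru<K> {
    private final long maxWeight;
    private long totalWeight;
    // accessOrder=true: iteration order is least recently accessed first,
    // playing the role of accessQueue.
    private final LinkedHashMap<K, Long> weights = new LinkedHashMap<>(16, 0.75f, true);

    WeightLru(long maxWeight) { this.maxWeight = maxWeight; }

    void put(K key, long weight) {
        Long old = weights.put(key, weight);
        totalWeight += weight - (old == null ? 0 : old);
        evict();
    }

    void touch(K key) {
        weights.get(key); // in access order, a get moves the entry to the tail
    }

    // Remove least recently accessed entries until the weight bound holds.
    private void evict() {
        Iterator<Map.Entry<K, Long>> it = weights.entrySet().iterator();
        while (totalWeight > maxWeight && it.hasNext()) {
            totalWeight -= it.next().getValue();
            it.remove();
        }
    }

    boolean contains(K key) { return weights.containsKey(key); }
}
```

Note that in real Guava each segment enforces only its own maxSegmentWeight, so a cache whose keys hash unevenly can evict before the global maximum is reached.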
```java
void evictEntries(ReferenceEntry<K, V> newest) {
    while (totalWeight > maxSegmentWeight) {
        ReferenceEntry<K, V> e = getNextEvictable();
        removeEntry(e, e.getHash(), RemovalCause.SIZE);
    }
}
```

Post-write cleanup mainly processes removal notifications; in this example no removal listener is configured, so the method does essentially nothing.
```java
void postWriteCleanup() {
    runUnlockedCleanup();
}
```

The get path is lock-free for reads. It routes the key to the appropriate segment, checks the entry for expiration, records the read in recencyQueue, and updates hit statistics. If the entry is missing or expired, it may trigger a cleanup based on the readCount threshold.
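The interplay between the lock-free read path and the LRU order can be sketched like this. It is an illustration only: the class and method names are invented, an ArrayDeque stands in for the intrusive accessQueue (with O(n) reordering that Guava's linked queue avoids), and the locking that would protect drainRecency in real code is omitted.

```java
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

// Sketch of a recency buffer: reads enqueue into a concurrent queue without
// taking the segment lock; the buffer is replayed into LRU order later,
// while a writer already holds the lock (hypothetical names).
public class RecencyBuffer<K> {
    private final Queue<K> recency = new ConcurrentLinkedQueue<>();
    private final ArrayDeque<K> accessOrder = new ArrayDeque<>(); // stand-in for accessQueue

    // Read path: no lock, just an enqueue.
    void recordRead(K key) {
        recency.offer(key);
    }

    // Called while holding the segment lock: replay buffered reads into LRU order.
    void drainRecency() {
        K key;
        while ((key = recency.poll()) != null) {
            accessOrder.remove(key);  // O(n) here; Guava uses an intrusive linked queue
            accessOrder.addLast(key); // most recently used entries sit at the tail
        }
    }

    K leastRecentlyUsed() {
        return accessOrder.peekFirst();
    }
}
```

Deferring the reorder to drain time is what keeps get from contending with writers, at the cost of the LRU order being slightly stale between drains.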
```java
V get(K key) {
    int hash = hash(key);
    return segmentFor(hash).get(key, hash, loader);
}
```

Compared with Caffeine, Guava's segment-lock design (inherited from the JDK 1.7 ConcurrentHashMap) is less efficient than Caffeine's approach, which builds on the JDK 1.8+ ConcurrentHashMap and relies on CAS operations, minimal synchronization, and more advanced policies such as Window TinyLFU for eviction and a hierarchical timing wheel for expiration. Caffeine also pads its internal ring buffers to avoid cache-line false sharing and provides richer features, making it the better choice for high-performance caching, while Guava Cache remains a viable option for simpler, low-throughput scenarios.
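The idea behind TinyLFU admission can be conveyed with a toy frequency sketch. This is a deliberate simplification of Caffeine's design, not its implementation: the class is hypothetical, the per-row hashing is ad hoc, and real TinyLFU uses 4-bit counters with periodic aging plus an admission window.

```java
// Toy count-min sketch illustrating TinyLFU-style admission (hypothetical names).
public class TinyLfuSketch {
    // 4 hash rows of 64 small counters; collisions make estimates overcount, never undercount.
    private final int[][] counters = new int[4][64];

    // Cheap per-row hash, purely illustrative.
    private int index(int row, Object key) {
        int h = key.hashCode() * (31 + row * 2);
        return (h ^ (h >>> 16)) & 63;
    }

    void recordAccess(Object key) {
        for (int row = 0; row < 4; row++) {
            counters[row][index(row, key)]++;
        }
    }

    // Estimated frequency = minimum counter across rows.
    int frequency(Object key) {
        int min = Integer.MAX_VALUE;
        for (int row = 0; row < 4; row++) {
            min = Math.min(min, counters[row][index(row, key)]);
        }
        return min;
    }

    // Admission policy: a candidate displaces the eviction victim only if it
    // has been accessed more often, so one-hit scan traffic cannot flush
    // frequently used entries out of the cache.
    boolean admit(Object candidate, Object victim) {
        return frequency(candidate) > frequency(victim);
    }
}
```

This admission step is a key reason Caffeine's hit rates beat plain LRU: Guava's accessQueue always admits new entries and evicts by recency alone, whereas a frequency filter protects the hot working set.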
JD Cloud Developers
JD Cloud Developers is JD Technology Group's platform for technical sharing and communication among AI, cloud computing, IoT, and related developers. It publishes JD product technical information, industry content, and tech event news.
