Deep Dive into Caffeine Cache: getIfPresent, ReadBuffer, and Maintenance Mechanisms
This article explains the inner workings of Caffeine's cache, covering the getIfPresent method, the design of ReadBuffer and StripedBuffer, eviction policies, the maintenance cycle, and the climb algorithm that dynamically adjusts window and protected partitions for optimal performance.
The article continues the exploration of Caffeine's cache after the earlier look at the put method, focusing now on the getIfPresent operation and the read-side infrastructure that supports it.
Example usage of getIfPresent is shown with a simple JUnit test:
public class TestReadSourceCode {

    @Test
    public void doRead() {
        // read: constructor
        Cache<String, String> cache = Caffeine.newBuilder()
                .maximumSize(10_000)
                .build();

        // read: put
        cache.put("key", "value");

        // read: get
        cache.getIfPresent("key");
    }
}

The underlying implementation resides in BoundedLocalCache.getIfPresent, which directly accesses the internal ConcurrentHashMap, updates statistics, and schedules buffer drains when necessary.
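To make the read path concrete before diving into Caffeine's source, the sketch below models the same flow with plain JDK types: a direct map lookup, a buffered record of the read, and a drain once the buffer fills. The class name ReadPathSketch and the DRAIN_THRESHOLD constant are invented for illustration; Caffeine's real read buffer is the lossy ring buffer examined next.

```java
import java.util.Map;
import java.util.Queue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;

// Simplified sketch of the read path: look up the backing map, record the
// read in a buffer, and drain once the buffer gets large enough. This is an
// illustration, not Caffeine's actual implementation.
public class ReadPathSketch<K, V> {
    static final int DRAIN_THRESHOLD = 4;

    final Map<K, V> data = new ConcurrentHashMap<>();
    final Queue<K> readBuffer = new ConcurrentLinkedQueue<>();
    public int drains;

    public void put(K key, V value) {
        data.put(key, value);
    }

    public V getIfPresent(K key) {
        V value = data.get(key);       // direct ConcurrentHashMap access
        if (value != null) {
            readBuffer.offer(key);     // record the read for later policy work
            if (readBuffer.size() >= DRAIN_THRESHOLD) {
                drain();               // schedule maintenance once the buffer fills
            }
        }
        return value;
    }

    void drain() {
        K key;
        while ((key = readBuffer.poll()) != null) {
            // a real cache would bump the frequency sketch / reorder LRU here
        }
        drains++;
    }
}
```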
ReadBuffer is a multiple‑producer/single‑consumer (MPSC) buffer that stores read‑side tasks. Its abstract definition and a disabled implementation are presented:
enum DisabledBuffer implements Buffer<Object> {
    INSTANCE;

    @Override
    public int offer(Object e) { return Buffer.SUCCESS; }

    @Override
    public void drainTo(Consumer<Object> consumer) {}

    @Override
    public long size() { return 0; }

    @Override
    public long reads() { return 0; }

    @Override
    public long writes() { return 0; }
}

When conditions are met, the actual buffer type becomes BoundedBuffer. Each of its ring buffers is a fixed-size array written with CAS to ensure thread safety; it is the striped table holding those rings that grows by powers of two.
final class BoundedBuffer<E> extends StripedBuffer<E> {
    static final int BUFFER_SIZE = 16;
    static final int MASK = BUFFER_SIZE - 1;

    static final class RingBuffer<E> extends BBHeader.ReadAndWriteCounterRef implements Buffer<E> {
        static final VarHandle BUFFER = MethodHandles.arrayElementVarHandle(Object[].class);

        final Object[] buffer;

        public RingBuffer(E e) {
            buffer = new Object[BUFFER_SIZE];
            BUFFER.set(buffer, 0, e);
            WRITE.set(this, 1);
        }

        @Override
        public int offer(E e) {
            long head = readCounter;
            long tail = writeCounterOpaque();
            long size = (tail - head);
            if (size >= BUFFER_SIZE) {
                return Buffer.FULL;
            }
            if (casWriteCounter(tail, tail + 1)) {
                int index = (int) (tail & MASK);
                BUFFER.setRelease(buffer, index, e);
                return Buffer.SUCCESS;
            }
            return Buffer.FAILED;
        }

        @Override
        public void drainTo(Consumer<E> consumer) {
            long head = readCounter;
            long tail = writeCounterOpaque();
            long size = (tail - head);
            if (size == 0) {
                return;
            }
            do {
                int index = (int) (head & MASK);
                @SuppressWarnings("unchecked")
                E e = (E) BUFFER.getAcquire(buffer, index);
                if (e == null) {
                    // not published yet
                    break;
                }
                BUFFER.setRelease(buffer, index, null);
                consumer.accept(e);
                head++;
            } while (head != tail);
            setReadCounterOpaque(head);
        }
    }
}

The StripedBuffer implements a segmented design to reduce contention. Its offer method hashes the current thread's probe value to select a segment and attempts a CAS write; on failure it calls expandOrRetry to grow the table or retry.
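The striping idea itself can be shown independently of Caffeine's code. The sketch below (StripedCounter is an invented name) hashes the calling thread to one of several slots so that concurrent writers rarely touch the same memory location; Caffeine applies the same trick to an array of ring buffers rather than counters.

```java
import java.util.concurrent.atomic.AtomicLongArray;

// Illustrative sketch of the striping idea behind StripedBuffer: hash the
// calling thread to one of several slots so concurrent writers rarely
// contend on the same CAS target.
public class StripedCounter {
    final AtomicLongArray cells;
    final int mask;

    public StripedCounter(int stripes) {
        // stripes must be a power of two so the mask selects a valid slot
        cells = new AtomicLongArray(stripes);
        mask = stripes - 1;
    }

    public void increment() {
        // The thread id stands in for the per-thread probe value that
        // Caffeine reuses from java.util.concurrent's Striped64 machinery.
        int index = (int) Thread.currentThread().getId() & mask;
        cells.incrementAndGet(index);
    }

    public long sum() {
        long total = 0;
        for (int i = 0; i < cells.length(); i++) {
            total += cells.get(i);
        }
        return total;
    }
}
```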
final int expandOrRetry(E e, int h, int increment, boolean wasUncontended) {
    int result = Buffer.FAILED;
    boolean collide = false;
    for (int attempt = 0; attempt < ATTEMPTS; attempt++) {
        Buffer<E>[] buffers;
        Buffer<E> buffer;
        int n;
        if (((buffers = table) != null) && ((n = buffers.length) > 0)) {
            if ((buffer = buffers[(n - 1) & h]) == null) {
                if ((tableBusy == 0) && casTableBusy()) {
                    boolean created = false;
                    try {
                        Buffer<E>[] rs;
                        int mask, j;
                        if (((rs = table) != null) && ((mask = rs.length) > 0)
                                && (rs[j = (mask - 1) & h] == null)) {
                            rs[j] = create(e);
                            created = true;
                        }
                    } finally {
                        tableBusy = 0;
                    }
                    if (created) {
                        result = Buffer.SUCCESS;
                        break;
                    }
                    continue;
                }
                collide = false;
            } else if (!wasUncontended) {
                wasUncontended = true;
            } else if ((result = buffer.offer(e)) != Buffer.FAILED) {
                break;
            } else if ((n >= MAXIMUM_TABLE_SIZE) || (table != buffers)) {
                collide = false;
            } else if (!collide) {
                collide = true;
            } else if ((tableBusy == 0) && casTableBusy()) {
                try {
                    if (table == buffers) {
                        table = Arrays.copyOf(buffers, n << 1);
                    }
                } finally {
                    tableBusy = 0;
                }
                collide = false;
                continue;
            }
            h += increment;
        } else if ((tableBusy == 0) && (table == buffers) && casTableBusy()) {
            boolean init = false;
            try {
                if (table == buffers) {
                    @SuppressWarnings("unchecked")
                    Buffer<E>[] rs = new Buffer[1];
                    rs[0] = create(e);
                    table = rs;
                    init = true;
                }
            } finally {
                tableBusy = 0;
            }
            if (init) {
                result = Buffer.SUCCESS;
                break;
            }
        }
    }
    return result;
}

The maintenance method orchestrates cache upkeep: it drains the read and write buffers, processes key/value references, applies expiration and eviction policies, and finally runs the climb step that adapts the eviction configuration based on hit-rate samples.
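The interesting part of this method is the status handshake in the finally block. The toy below (DrainStatusDemo, with simplified constants, invented for this illustration) isolates that handshake: if another thread requests work while maintenance is running, the CAS back to IDLE fails and the status parks at REQUIRED so maintenance runs again later.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Minimal sketch of the drain-status handshake around maintenance. If some
// other thread requested work while we were draining, the CAS back to IDLE
// fails and the status is parked at REQUIRED so maintenance reruns.
public class DrainStatusDemo {
    public static final int IDLE = 0;
    public static final int REQUIRED = 1;
    public static final int PROCESSING_TO_IDLE = 2;

    public final AtomicInteger status = new AtomicInteger(IDLE);

    public void maintenance(Runnable work) {
        status.set(PROCESSING_TO_IDLE);
        try {
            work.run();
        } finally {
            if (!status.compareAndSet(PROCESSING_TO_IDLE, IDLE)) {
                status.set(REQUIRED);   // more work was scheduled meanwhile
            }
        }
    }
}
```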
void maintenance(@Nullable Runnable task) {
    setDrainStatusRelease(PROCESSING_TO_IDLE);
    try {
        drainReadBuffer();
        drainWriteBuffer();
        if (task != null) {
            task.run();
        }
        drainKeyReferences();
        drainValueReferences();
        expireEntries();
        evictEntries();
        climb();
    } finally {
        if ((drainStatusOpaque() != PROCESSING_TO_IDLE) || !casDrainStatus(PROCESSING_TO_IDLE, IDLE)) {
            setDrainStatusOpaque(REQUIRED);
        }
    }
}

Eviction proceeds in three logical spaces (WINDOW, PROBATION, PROTECTED). Entries flow from the window to probation, and from probation to protected, while the algorithm compares frequency-sketch estimates to decide which node to evict. The admit method uses a TinyLFU-style threshold plus randomization to mitigate HashDoS attacks.
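Before the admission source, the flow between the three spaces can be modeled with a toy (the class name, field names, and capacity are invented): new entries enter the window, window overflow spills into probation, and a hit in probation earns promotion to protected.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Toy model of the three eviction spaces: new entries enter the window;
// window overflow moves to probation; a hit in probation promotes the
// entry to the protected space.
public class SegmentedSpacesDemo {
    public final Deque<String> window = new ArrayDeque<>();
    public final Deque<String> probation = new ArrayDeque<>();
    public final Deque<String> protectedSpace = new ArrayDeque<>();
    final int windowCapacity = 2;

    public void add(String key) {
        window.addLast(key);
        if (window.size() > windowCapacity) {
            probation.addLast(window.removeFirst()); // overflow flows onward
        }
    }

    public void recordHit(String key) {
        if (probation.remove(key)) {
            protectedSpace.addLast(key);             // reuse earns promotion
        }
    }
}
```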
boolean admit(K candidateKey, K victimKey) {
    int victimFreq = frequencySketch().frequency(victimKey);
    int candidateFreq = frequencySketch().frequency(candidateKey);
    if (candidateFreq > victimFreq) {
        return true;
    }
    if (candidateFreq >= ADMIT_HASHDOS_THRESHOLD) {
        // admit only a small random fraction of warm candidates to resist
        // artificial flooding of the frequency sketch
        int random = ThreadLocalRandom.current().nextInt();
        return ((random & 127) == 0);
    }
    return false;
}

The climb algorithm monitors hit-rate changes, computes an adjustment step, and then either expands or shrinks the window partition, demoting entries from protected to probation as needed. This dynamic tuning aligns the cache with the observed access pattern.
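The feedback loop is easier to see in a toy form first. The demo below (HillClimberDemo, with an invented decay constant) keeps stepping in the same direction while the hit rate improves and reverses with a decayed step when it degrades, which is the essence of the climb step shown next.

```java
// Toy hill climber illustrating the climb feedback loop: keep moving the
// window in the same direction while the hit rate improves, reverse with a
// decayed step when it degrades. Constants are invented for the demo.
public class HillClimberDemo {
    static final double DECAY = 0.98;

    double windowSize = 10;
    double step = 5;
    double previousHitRate = 0.0;

    // Feed one hit-rate sample; returns the adjustment that was applied.
    public double sample(double hitRate) {
        double change = hitRate - previousHitRate;
        double amount = (change >= 0) ? step : -step;  // same direction if improving
        step = DECAY * Math.abs(amount);               // shrink the step over time
        previousHitRate = hitRate;
        windowSize = Math.max(0, windowSize + amount);
        return amount;
    }
}
```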
void determineAdjustment() {
    if (frequencySketch().isNotInitialized()) {
        setPreviousSampleHitRate(0.0);
        setMissesInSample(0);
        setHitsInSample(0);
        return;
    }
    int requestCount = hitsInSample() + missesInSample();
    if (requestCount < frequencySketch().sampleSize) {
        return;
    }
    double hitRate = (double) hitsInSample() / requestCount;
    double hitRateChange = hitRate - previousSampleHitRate();
    double amount = (hitRateChange >= 0) ? stepSize() : -stepSize();
    double nextStepSize = (Math.abs(hitRateChange) >= HILL_CLIMBER_RESTART_THRESHOLD)
        ? HILL_CLIMBER_STEP_PERCENT * maximum() * (amount >= 0 ? 1 : -1)
        : HILL_CLIMBER_STEP_DECAY_RATE * amount;
    setPreviousSampleHitRate(hitRate);
    setAdjustment((long) amount);
    setStepSize(nextStepSize);
    setMissesInSample(0);
    setHitsInSample(0);
}

Finally, the article offers a selection guide, recommending Caffeine for high-concurrency, high-performance scenarios that require flexible eviction policies, and lists reference resources.
JD Tech Talk
Official JD Tech public account delivering best practices and technology innovation.