Unlocking StampedLock: How Java’s New Lock Boosts Concurrency
This article explains the design, features, usage patterns, and internal implementation of Java's StampedLock: how it compares with ReentrantReadWriteLock, how to use its optimistic read mode, and how its wait-queue mechanics improve multi-threaded performance.
1. Introduction to StampedLock
StampedLock was introduced in JDK 1.8 as an enhanced replacement for ReentrantReadWriteLock, offering finer‑grained control, lock conversion, and an optimistic read mode that can improve concurrency when reads dominate writes.
1.1 Why StampedLock was introduced
Why add StampedLock when ReentrantReadWriteLock already exists? ReentrantReadWriteLock's read lock is pessimistic: every reader must update shared lock state, and under read-heavy workloads writers can be held off for long stretches. StampedLock's optimistic read mode lets readers proceed without blocking writers at all, which can raise throughput when reads dominate.
StampedLock is intended as an internal utility for building other thread-safe components; used well it can raise performance, but misuse (for example, reentrant acquisition) can cause deadlocks.
1.2 Features of StampedLock
Key characteristics include:
All lock acquisition methods return a stamp; 0 indicates failure.
All unlock methods require the same stamp returned by a successful acquisition.
StampedLock is non‑reentrant; a thread holding the write lock cannot reacquire it.
Three access modes: Reading (like a read lock), Writing (like a write lock), Optimistic reading (a lightweight read mode).
Supports conversion between read and write locks, allowing more flexible usage scenarios.
Neither read nor write locks support Condition waiting.
Optimistic reading allows a thread to read without blocking writers, but the result must be validated because the data may have changed.
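The non-reentrancy point is easy to verify: a thread that already holds the write lock cannot acquire it again; `tryWriteLock()` simply returns 0. A minimal sketch (the class name is mine):

```java
import java.util.concurrent.locks.StampedLock;

public class NonReentrantDemo {
    public static void main(String[] args) {
        StampedLock sl = new StampedLock();
        long stamp = sl.tryWriteLock();   // succeeds: the lock is free
        long again = sl.tryWriteLock();   // fails: non-reentrant, returns 0
        System.out.println(stamp != 0L);  // true
        System.out.println(again == 0L);  // true
        sl.unlockWrite(stamp);
    }
}
```

Note that a blocking call such as `writeLock()` in the same situation would deadlock the thread forever, which is why reentrant use must be avoided.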
2. StampedLock Usage Example
Oracle's official javadoc example demonstrates the basic operations:

```java
import java.util.concurrent.locks.StampedLock;

class Point {
    private double x, y;
    private final StampedLock sl = new StampedLock();

    void move(double deltaX, double deltaY) {
        long stamp = sl.writeLock();           // exclusive write lock
        try {
            x += deltaX;
            y += deltaY;
        } finally {
            sl.unlockWrite(stamp);
        }
    }

    /** Use optimistic read */
    double distanceFromOrigin() {
        long stamp = sl.tryOptimisticRead();   // non-blocking optimistic stamp
        double currentX = x, currentY = y;     // copy state to locals
        if (!sl.validate(stamp)) {             // a write intervened: fall back
            stamp = sl.readLock();
            try {
                currentX = x;
                currentY = y;
            } finally {
                sl.unlockRead(stamp);
            }
        }
        return Math.sqrt(currentX * currentX + currentY * currentY);
    }

    void moveIfAtOrigin(double newX, double newY) {
        long stamp = sl.readLock();
        try {
            while (x == 0.0 && y == 0.0) {
                long ws = sl.tryConvertToWriteLock(stamp);  // try to upgrade
                if (ws != 0L) {
                    stamp = ws;
                    x = newX;
                    y = newY;
                    break;
                } else {
                    sl.unlockRead(stamp);      // upgrade failed: release read
                    stamp = sl.writeLock();    // lock, then acquire write lock
                }
            }
        } finally {
            sl.unlock(stamp);                  // unlock() accepts either mode
        }
    }
}
```

The distanceFromOrigin method illustrates the optimistic read pattern: acquire an optimistic stamp, copy the fields, validate, and fall back to a regular read lock if validation fails.
```java
long stamp = lock.tryOptimisticRead();
copyVariablesToThreadMemory();
if (!lock.validate(stamp)) {
    stamp = lock.readLock();
    try {
        copyVariablesToThreadMemory();
    } finally {
        lock.unlockRead(stamp);
    }
}
useThreadMemoryVariables();
```

3. StampedLock Internals
3.1 Internal constants
StampedLock uses a CLH‑style queue and bitwise state fields. The write‑lock flag is the 8th bit (WBIT), read locks occupy bits 0‑6 (RBITS), and overflow beyond 126 readers is stored in readerOverflow.
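The bit layout can be sketched numerically. The values below mirror StampedLock's internal constants (the class name is mine; the constants match the JDK 8 source):

```java
public class BitsDemo {
    public static void main(String[] args) {
        final int LG_READERS = 7;
        final long WBIT  = 1L << LG_READERS;  // 128: the write-lock bit (8th bit)
        final long RBITS = WBIT - 1L;         // 127: reader-count mask, bits 0-6
        final long RFULL = RBITS - 1L;        // 126: max readers before overflow
        System.out.println(WBIT);             // 128
        System.out.println(RBITS);            // 127
        System.out.println(RFULL);            // 126
    }
}
```

Once the reader count reaches RFULL (126), further readers are tallied in the separate readerOverflow field rather than in the state word itself.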
For multi‑core CPUs StampedLock performs spinning before enqueuing threads:
```java
private static final int NCPU = Runtime.getRuntime().availableProcessors();

// Spin counts are zero on uniprocessors, where spinning cannot help
private static final int SPINS = (NCPU > 1) ? 1 << 6 : 0;           // before enqueuing
private static final int HEAD_SPINS = (NCPU > 1) ? 1 << 10 : 0;     // before blocking at head
private static final int MAX_HEAD_SPINS = (NCPU > 1) ? 1 << 16 : 0; // before re-blocking
```

3.2 Queue mechanics
Assume threads A-E perform the following operations:

```java
// ThreadA: writeLock()
// ThreadB: readLock()
// ThreadC: readLock()
// ThreadD: writeLock()
// ThreadE: readLock()
```
1. Thread A acquires write lock
writeLock sets the write-lock bit via CAS; if the lock is free, it succeeds immediately.
2. Thread B attempts read lock
Because the write lock is held, readLock enqueues a read node and blocks.
3. Thread C attempts read lock
Thread C is linked into the cowait stack of the preceding read node, forming a LIFO stack of waiting readers.
4. Thread D attempts write lock
Thread D is enqueued as a write node at the tail of the queue.
5. Thread E attempts read lock
Since the tail is a write node, Thread E is linked directly after it.
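The blocking behavior in this scenario can be observed without inspecting the queue: while a writer holds the lock, readers cannot get in, and once it releases, multiple readers share the lock. A minimal single-threaded sketch using the non-blocking try variants (the class name is mine):

```java
import java.util.concurrent.locks.StampedLock;

public class QueueDemo {
    public static void main(String[] args) {
        StampedLock sl = new StampedLock();
        long w = sl.writeLock();                    // Thread A's position: write lock held

        // A reader arriving now fails immediately: tryReadLock returns 0
        System.out.println(sl.tryReadLock() == 0L); // true

        sl.unlockWrite(w);                          // release: readers may proceed
        long r1 = sl.readLock();
        long r2 = sl.readLock();                    // readers share the lock
        System.out.println(sl.getReadLockCount());  // 2
        sl.unlockRead(r1);
        sl.unlockRead(r2);
    }
}
```

In the multi-threaded scenario above, the blocked readers would park in the queue instead of failing, but the admission rules are the same.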
When Thread A releases the write lock via unlockWrite, the lock state is cleared and the queue head is examined. If the head node is waiting, release wakes it. For a read node, all nodes in its cowait stack are also unparked, granting the read lock to multiple waiting readers simultaneously.
```java
public void unlockWrite(long stamp) {
    if (state != stamp || (stamp & WBIT) == 0L)
        throw new IllegalMonitorStateException();
    // Adding WBIT clears the write bit and advances the version;
    // if the state wraps to 0, reset to ORIGIN (0 is reserved for failure)
    state = (stamp += WBIT) == 0L ? ORIGIN : stamp;
    WNode h = whead;
    if (h != null && h.status != 0)
        release(h);
}

private void release(WNode h) {
    if (h != null) {
        WNode q; Thread w;
        U.compareAndSwapInt(h, WSTATUS, WAITING, 0);
        // If the successor is missing or cancelled, search backwards
        // from the tail for the closest non-cancelled waiter
        if ((q = h.next) == null || q.status == CANCELLED) {
            for (WNode t = wtail; t != null && t != h; t = t.prev)
                if (t.status <= 0)
                    q = t;
        }
        if (q != null && (w = q.thread) != null)
            U.unpark(w);
    }
}
```

The process repeats: readers acquire the lock, writers wait, and the queue evolves until all threads have completed.
4. Summary
StampedLock’s queue differs from the classic AQS queue by using a cowait stack for consecutive readers, allowing bulk wake‑up of waiting readers when a write lock is released. It provides higher throughput on multi‑core systems but requires careful use: it is non‑reentrant, and optimistic reads must always be validated.
When using StampedLock, avoid re‑entrance and follow the optimistic‑read template to prevent data inconsistency.
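Putting the two rules together, a minimal runnable sketch of the recommended pattern (the OptimisticCounter class is mine, not from the JDK docs): writes go through writeLock, reads go through the optimistic template with a read-lock fallback.

```java
import java.util.concurrent.locks.StampedLock;

class OptimisticCounter {
    private final StampedLock sl = new StampedLock();
    private long value;

    void increment() {
        long stamp = sl.writeLock();      // exclusive write
        try {
            value++;
        } finally {
            sl.unlockWrite(stamp);
        }
    }

    long get() {
        long stamp = sl.tryOptimisticRead();
        long v = value;                   // copy under the optimistic stamp
        if (!sl.validate(stamp)) {        // a writer intervened: fall back
            stamp = sl.readLock();
            try {
                v = value;
            } finally {
                sl.unlockRead(stamp);
            }
        }
        return v;
    }

    public static void main(String[] args) {
        OptimisticCounter c = new OptimisticCounter();
        for (int i = 0; i < 5; i++) c.increment();
        System.out.println(c.get());      // 5
    }
}
```

Each method acquires and releases exactly one stamp per call, so no invocation can re-enter a lock it already holds.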
Programmer DD
A tinkering programmer and author of "Spring Cloud Microservices in Action"