Why Non‑Blocking Algorithms Outperform Blocking Ones in Java Concurrency
This article explains the principles of non‑blocking algorithms, compares them with blocking approaches to concurrency, and demonstrates Java implementations using volatile variables, atomic classes, optimistic locking with compare‑and‑swap, and a generic algorithm template, highlighting their advantages and challenges in multithreaded environments.
In concurrent scenarios, non‑blocking algorithms allow threads to access shared state without suspending any participating thread; an algorithm is non‑blocking if a thread’s suspension never forces other threads to suspend.
Blocking Concurrency Algorithms
A blocking algorithm does one of two things:
A: Perform the operation requested by the thread; or
B: Block the thread until the operation can be performed safely.
Many concurrent data structures, such as implementations of java.util.concurrent.BlockingQueue, are blocking; a thread inserting into a full queue is suspended until space becomes available.
Non‑Blocking Concurrency Algorithms
A non‑blocking algorithm does one of two things:
A: Perform the operation requested by the thread; or
B: Notify the thread that the operation cannot currently be performed.
Java provides several non‑blocking data structures such as AtomicBoolean, AtomicInteger, AtomicLong and AtomicReference.
Difference Between Blocking and Non‑Blocking Algorithms
When a requested operation cannot be performed, a blocking algorithm suspends the thread until it becomes possible, while a non‑blocking algorithm immediately notifies the thread that the operation cannot proceed.
For example, a thread trying to insert into a full BlockingQueue is blocked until another thread removes an element; if that remover thread is itself blocked elsewhere, the inserting thread remains blocked indefinitely.
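The contrast is visible directly in the BlockingQueue API itself: put() is the blocking style, while offer() is the non‑blocking style. A minimal sketch (the class name QueueStyles is illustrative):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class QueueStyles {
    // Fill a capacity-1 queue, then try a non-blocking insert into it.
    public static boolean offerToFullQueue() throws InterruptedException {
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(1);
        queue.put("first");           // blocking-style insert; succeeds, queue is now full
        // put("second") here would suspend the calling thread until space appears;
        // offer() instead returns immediately with a verdict.
        return queue.offer("second");
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(offerToFullQueue()); // false: the caller is told, not suspended
    }
}
```

The returned false is exactly the "B: Notify the thread" branch of a non‑blocking algorithm; the thread is free to retry, back off, or do other work.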
Non‑Blocking Concurrent Data Structures
In multithreaded systems, data structures used for communication must be protected by a concurrency algorithm; if the protecting algorithm is non‑blocking, the data structure is a non‑blocking concurrent data structure.
Volatile Variables
In Java, volatile variables are always read directly from main memory and written back immediately, ensuring visibility across threads. Writes to a volatile are atomic, but read‑modify‑write sequences are not.
volatile int myVar = 0;

int temp = myVar;
temp++;
myVar = temp;

If two threads execute this read‑increment‑write sequence concurrently, both may read the same value, increment it, and write back the same result, so the variable increases by only one instead of two.
Single‑Writer Scenario
When only one thread writes to a shared variable and many read, no race condition occurs, making volatile suitable.
public class SingleWriterCounter {
private volatile long count = 0;
/** Only one thread may call this method */
public void inc() {
this.count++;
}
/** Multiple threads may call this method */
public long count() {
return this.count;
}
}

Multiple threads can read the counter via count() while only one thread calls inc(), avoiding write contention.
Advanced Structures Based on Volatile
By ensuring each volatile is written by a single thread, multiple threads can communicate without blocking.
public class DoubleWriterCounter {
private volatile long countA = 0;
private volatile long countB = 0;
/** Only one thread may call this method */
public void incA() { this.countA++; }
/** Only one thread may call this method */
public void incB() { this.countB++; }
/** Multiple threads may call */
public long countA() { return this.countA; }
/** Multiple threads may call */
public long countB() { return this.countB; }
}

Optimistic Locking with Compare‑and‑Swap
When multiple threads need to write to the same variable, volatile is insufficient; a compare‑and‑swap loop using atomic classes provides a non‑blocking solution.
public class SynchronizedCounter {
long count = 0;
public void inc() {
synchronized(this) { count++; }
}
public long count() {
synchronized(this) { return this.count; }
}
}

Replacing the synchronized blocks with an AtomicLong removes the blocking:
import java.util.concurrent.atomic.AtomicLong;
public class AtomicCounter {
private AtomicLong count = new AtomicLong(0);
public void inc() {
boolean updated = false;
while(!updated) {
long prev = this.count.get();
updated = this.count.compareAndSet(prev, prev + 1);
}
}
public long count() { return this.count.get(); }
}

The compareAndSet operation is atomic and is typically implemented with a single CPU compare‑and‑swap instruction, so no thread is ever suspended while waiting.
Why It Is Called Optimistic Locking
Threads work on a local copy of the data, assuming no other thread will modify it; if the assumption holds, the compare‑and‑swap succeeds. If another thread has changed the data, the operation fails and the thread retries, all without acquiring a lock.
Optimistic Locking Is Non‑Blocking
If a thread is blocked while holding a copy, other threads can still access the shared memory, unlike traditional locks where a blocked thread holds the lock for everyone.
Non‑Swappable Data Structures
Optimistic locking works well when the entire data structure can be replaced atomically; for large structures like queues, copying the whole structure for each update is costly.
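For small state, whole‑structure replacement is straightforward: copy the current structure, modify the copy, and swap the reference in with a compare‑and‑swap. A minimal sketch (the class name CopyOnWriteStack is illustrative) that also shows why this does not scale to large structures, since every attempt copies everything:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.atomic.AtomicReference;

public class CopyOnWriteStack {
    private final AtomicReference<List<Integer>> items =
        new AtomicReference<>(Collections.emptyList());

    public void push(int value) {
        while (true) {
            List<Integer> current = items.get();
            List<Integer> copy = new ArrayList<>(current); // full copy on every attempt
            copy.add(value);
            if (items.compareAndSet(current, copy)) {
                return;   // swap succeeded; otherwise another thread won, so retry
            }
        }
    }

    public List<Integer> snapshot() {
        return items.get(); // readers see a consistent, immutable-in-practice list
    }
}
```

For a large queue or map, the full copy in each iteration would dominate the cost, which is what motivates sharing intended modifications instead.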
Sharing Intended Modifications
Instead of copying the whole structure, a thread can publish its intended modification:
1. Check whether another thread has already published an intended modification.
2. If none exists, create an intended modification and publish it using a compare‑and‑swap.
3. Perform the modification on the shared structure.
4. Remove the published modification to signal that it is complete.
The published intended modification acts somewhat like a lock: if a thread publishes a modification and then blocks before completing it, the data structure is effectively locked for other threads. A common refinement is to publish enough information about the modification that other threads can complete it themselves ("helping"), so progress does not depend on the publishing thread.
The A‑B‑A Problem
If a variable changes from A to B and back to A, another thread may miss the intermediate change, leading to incorrect assumptions.
A‑B‑A Solutions
One solution is to pair the reference with a counter (or timestamp) and swap both atomically; Java provides AtomicStampedReference for exactly this purpose.
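A minimal sketch of how the stamp exposes an A‑B‑A change (the class name AbaDemo is illustrative; note that AtomicStampedReference compares references, and the string literals here are interned, so reference comparison behaves as intended):

```java
import java.util.concurrent.atomic.AtomicStampedReference;

public class AbaDemo {
    public static boolean casWithStaleStamp() {
        AtomicStampedReference<String> ref =
            new AtomicStampedReference<>("A", 0);

        int[] stampHolder = new int[1];
        String seen = ref.get(stampHolder);   // seen = "A", observed stamp = 0

        // Another thread changes A -> B -> A, bumping the stamp each time.
        ref.compareAndSet("A", "B", 0, 1);
        ref.compareAndSet("B", "A", 1, 2);

        // A plain value comparison would succeed here (the value is "A" again),
        // but the stale stamp (0 vs. the current 2) makes the CAS fail.
        return ref.compareAndSet(seen, "C", stampHolder[0], stampHolder[0] + 1);
    }

    public static void main(String[] args) {
        System.out.println(casWithStaleStamp()); // false
    }
}
```

Without the stamp, the intermediate change to "B" would have been invisible to the first thread.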
Non‑Blocking Algorithm Template
The following template illustrates a generic non‑blocking algorithm structure; it is provided for educational purposes only and may contain errors.
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicStampedReference;
public class NonblockingTemplate {
public static class IntendedModification {
public AtomicBoolean completed = new AtomicBoolean(false);
}
private AtomicStampedReference<IntendedModification> ongoingMod =
new AtomicStampedReference<IntendedModification>(null, 0);
// Declare the data structure state here.
public void modify() {
while(!attemptModifyASR());
}
public boolean attemptModifyASR() {
boolean modified = false;
IntendedModification currentlyOngoingMod = ongoingMod.getReference();
int stamp = ongoingMod.getStamp();
if(currentlyOngoingMod == null) {
// Prepare intended modification
IntendedModification newMod = new IntendedModification();
boolean modSubmitted = ongoingMod.compareAndSet(null, newMod, stamp, stamp + 1);
if(modSubmitted) {
// Perform modification via CAS operations
modified = true;
}
} else {
// Try to help complete the ongoing modification
modified = false;
}
return modified;
}
}

Non‑Blocking Algorithms Are Hard to Implement
Designing correct non‑blocking algorithms is difficult; before attempting your own, check existing libraries. Java already includes non‑blocking implementations such as ConcurrentLinkedQueue, and third‑party libraries like LMAX Disruptor and Cliff Click’s non‑blocking HashMap are available.
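Using an existing non‑blocking structure is far simpler than writing one. A minimal sketch with ConcurrentLinkedQueue, whose offer() and poll() never suspend the caller (the class name LockFreeQueueDemo is illustrative):

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

public class LockFreeQueueDemo {
    public static void main(String[] args) {
        // A CAS-based, unbounded, non-blocking queue from the JDK.
        Queue<String> queue = new ConcurrentLinkedQueue<>();
        queue.offer("task-1");            // never blocks: the queue is unbounded
        queue.offer("task-2");

        System.out.println(queue.poll()); // task-1
        System.out.println(queue.poll()); // task-2
        System.out.println(queue.poll()); // null: empty queue notifies, never waits
    }
}
```

Contrast this with BlockingQueue.take(), which would suspend the thread on an empty queue; here the null return is the non‑blocking notification.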
Benefits of Non‑Blocking Algorithms
Choice
When an operation cannot proceed, a thread can choose to retry, back off, or perform other work instead of being forced to wait.
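One common choice is retrying with exponential back‑off to reduce contention. A minimal sketch of that choice (the class name BackoffCounter and the back‑off parameters are illustrative assumptions, not a prescribed tuning):

```java
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.locks.LockSupport;

public class BackoffCounter {
    private final AtomicLong count = new AtomicLong(0);

    public long incWithBackoff() {
        long backoffNanos = 1_000; // illustrative starting back-off
        while (true) {
            long prev = count.get();
            if (count.compareAndSet(prev, prev + 1)) {
                return prev + 1;   // choice 1: retry until success (taken here)
            }
            // choice 2: back off briefly before retrying, easing contention
            LockSupport.parkNanos(ThreadLocalRandom.current().nextLong(backoffNanos));
            backoffNanos = Math.min(backoffNanos * 2, 1_000_000); // cap the back-off
            // choice 3 (not shown): give up for now and do other useful work
        }
    }
}
```

With a lock, the runtime would make this decision for the thread by suspending it; here the thread itself decides how to respond to contention.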
No Deadlocks
Since threads never hold locks that block others, deadlocks cannot occur, though livelocks remain possible.
No Thread Suspension
Avoiding thread suspension eliminates costly context switches, allowing CPUs to spend more time on actual work.
Reduced Thread Latency
Without suspension overhead, threads can respond more quickly once an operation becomes possible, lowering overall latency.
Cognitive Technology Team