Understanding Java Memory Model (JMM): Core Concepts and Guarantees
This article explains the Java Memory Model (JMM), detailing its purpose, the distinction from JVM memory structure, the eight memory interaction operations, the happens‑before principle, memory barriers, and common pitfalls such as double‑checked locking and visibility bugs, with concrete code examples.
Essence and Role of JMM
Definition
Java Memory Model (JMM) defines the abstract rules for accessing shared variables in a multithreaded environment. It specifies how threads interact with main memory and working memory, addressing three core problems: visibility (when a write becomes visible to other threads), ordering (constraints on instruction reordering), and atomicity (operations that must appear indivisible).
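To make the atomicity problem concrete, here is a minimal sketch (class and counter names are illustrative, not from the original): two threads increment a plain int, where i++ is a non‑atomic read‑modify‑write, alongside an AtomicInteger whose increment is atomic.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class AtomicityDemo {
    static int plainCounter = 0;                          // i++ is read-modify-write, not atomic
    static final AtomicInteger safeCounter = new AtomicInteger();

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 100_000; i++) {
                plainCounter++;                           // lost updates possible
                safeCounter.incrementAndGet();            // atomic increment
            }
        };
        Thread t1 = new Thread(task), t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        // safeCounter always reaches 200000; plainCounter is usually less
        System.out.println(plainCounter + " vs " + safeCounter.get());
    }
}
```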
Difference from JVM memory structure
JVM memory structure describes the concrete division and management of runtime memory during program execution, focusing on allocation, garbage collection, and memory‑leak concerns. JMM, in contrast, abstracts the rules of inter‑thread communication that underpin synchronized, volatile, and the concurrent utilities, and is defined by the Java Language Specification.
Core components of JMM
Eight memory‑interaction actions
lock – locks a main‑memory variable, e.g., at the entry of a synchronized block. The lock marks the variable as exclusively owned by the thread. Implemented by the monitorenter bytecode.
synchronized(sharedObject) { // implicit lock
// critical section
} // implicit unlock
unlock – releases the lock, e.g., at the exit of a synchronized block. Before the lock is released, all working‑memory changes are flushed to main memory, and instructions inside the critical section cannot be reordered past the release. Implemented by the monitorexit bytecode with an implicit StoreLoad barrier.
read – reads a variable from main memory into working memory, typically when a thread first accesses a shared variable.
load – copies the value obtained by read into the thread’s private working‑memory copy.
use – uses the working‑memory value in the execution engine, e.g., evaluating i++. Bytecode example:
iload_1 // use operation (load from working memory to operand stack)
iconst_1
iadd
assign – assigns a computation result to a working‑memory variable without immediately updating main memory.
boolean running = true; // without volatile
void stop() {
running = false; // assign, main memory not updated
}
store – transfers the value of a working‑memory variable to main memory, where the write action can pick it up.
write – puts the value obtained by store into the corresponding main‑memory variable. For a volatile field, the assign → store → write sequence completes immediately:
volatile int v = 0;
void update() {
v = 1; // assign → store → write happens immediately
}
Happens‑before principle
The happens‑before relation defines ordering guarantees across threads, solving visibility, ordering, and causality problems.
Program order rule
Within a single thread, operations execute in the order written in the code.
int x = 0, y = 0;
void threadA() {
x = 1; // operation 1
y = 2; // operation 2
}
void threadB() {
if (y == 2) {
System.out.println(x); // may print 0 because cross‑thread order is not guaranteed
}
}
Lock rule
Unlock actions happen before subsequent lock actions on the same monitor.
synchronized (lock) {
x = 1; // operation 1
} // unlock
// other thread...
synchronized (lock) {
System.out.println(x); // guaranteed to see x = 1
}
Volatile rule
A volatile write happens before any subsequent volatile read of the same variable.
volatile boolean flag = false;
void writer() {
x = 42; // ordinary write (may be reordered)
flag = true; // volatile write
}
void reader() {
if (flag) { // volatile read
System.out.println(x); // guaranteed to see 42
}
}
Memory barrier semantics
A volatile write is preceded by a StoreStore barrier and followed by a StoreLoad barrier; a volatile read is followed by LoadLoad and LoadStore barriers.
Thread start rule
Actions before Thread.start() happen before any actions in the started thread.
int x = 0;
void mainThread() {
x = 1;
new Thread(() -> {
System.out.println(x); // guaranteed to see 1
}).start();
}
Thread termination rule
All actions in a thread happen before another thread detects its termination (e.g., via join).
int result;
void worker() {
result = compute(); // operation 1
}
void main() throws InterruptedException {
Thread t = new Thread(this::worker);
t.start();
t.join(); // wait for termination
System.out.println(result); // guaranteed to see final result
}
Transitivity rule
If A happens‑before B and B happens‑before C, then A happens‑before C.
volatile boolean v = false;
int x = 0;
void thread1() {
x = 1; // operation 1
v = true; // operation 2 (volatile write)
}
void thread2() {
if (v) { // operation 3 (volatile read)
System.out.println(x); // operation 4 (must see x = 1)
}
}
Typical violations of happens‑before
Double‑checked locking failure – the write of the reference can be reordered before the constructor finishes, so another thread may observe a non‑null but partially constructed object.
class Singleton {
private static Singleton instance; // missing volatile
public static Singleton getInstance() {
if (instance == null) { // first check
synchronized (Singleton.class) {
if (instance == null) { // second check
instance = new Singleton(); // may be reordered
}
}
}
return instance;
}
}
Fix: declare instance as volatile.
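A corrected version, keeping the same class shape, simply adds volatile: the volatile rule then forbids reordering the reference write with the constructor, so readers never see a half‑built object.

```java
class Singleton {
    private static volatile Singleton instance; // volatile forbids the reordering

    public static Singleton getInstance() {
        if (instance == null) {                 // first check, no lock
            synchronized (Singleton.class) {
                if (instance == null) {         // second check, under lock
                    instance = new Singleton(); // safely published
                }
            }
        }
        return instance;
    }
}
```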
Visibility‑induced infinite loop – a non‑volatile flag may never become visible to another thread.
boolean running = true; // non‑volatile
void stop() {
running = false; // may never become visible
}
void work() {
while (running) {
// ...
}
}
Fix: make running volatile or protect it with synchronized.
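A runnable version of the fix (the worker thread and sleep timing are illustrative additions): with volatile, every loop iteration re‑reads the flag from main memory, so the stop request is guaranteed to become visible and the loop terminates.

```java
public class StopDemo {
    private volatile boolean running = true;    // volatile write is visible to the reader

    void stop() { running = false; }

    void work() {
        while (running) {
            // the volatile read re-fetches the flag on every iteration
        }
    }

    public static void main(String[] args) throws InterruptedException {
        StopDemo demo = new StopDemo();
        Thread worker = new Thread(demo::work);
        worker.start();
        Thread.sleep(100);                      // let the loop spin briefly
        demo.stop();                            // request termination
        worker.join(1000);                      // terminates promptly thanks to volatile
        System.out.println("worker alive: " + worker.isAlive());
    }
}
```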
JMM implementation mechanisms
Memory barriers
Barriers prevent specific reorderings and enforce visibility.
LoadLoad barrier – prevents reordering of reads before and after the barrier.
// pseudo‑code showing implicit LoadLoad barrier
value = readVolatile(); // LoadLoad barrier implicitly inserted
data = loadSharedValue();
StoreStore barrier – prevents reordering of writes, ensuring earlier writes become visible to other processors.
sharedVar = 1; // ordinary write
// StoreStore barrier (implicitly inserted by volatile write)
volatileFlag = true; // volatile write
LoadStore barrier – prevents a read from being reordered after a subsequent write.
int local = sharedValue; // read
// LoadStore barrier
anotherShared = 42; // write
StoreLoad barrier (full barrier) – prevents write‑read reordering and flushes all pending writes to main memory.
synchronized(this) {
x = 1; // write
// StoreLoad barrier (implicitly inserted by monitorexit)
}
// other threads can now immediately see x = 1
Core functions of barriers
Prohibit reordering: block compiler and CPU from optimizing instruction order.
Enforce visibility: ensure cached data is promptly synchronized to main memory.
Preserve ordering: establish happens‑before relationships across threads.
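Since Java 9, these barriers can also be requested explicitly through VarHandle's static fence methods. A sketch (the class and field names are illustrative) of safe publication with release/acquire fences:

```java
import java.lang.invoke.VarHandle;

public class FenceDemo {
    static int data;
    static boolean ready;

    static void publish() {
        data = 42;                   // ordinary write
        VarHandle.releaseFence();    // StoreStore + LoadStore: data is written before ready
        ready = true;
    }

    static Integer consume() {
        boolean r = ready;
        VarHandle.acquireFence();    // LoadLoad + LoadStore: ready is read before data
        return r ? data : null;
    }

    public static void main(String[] args) {
        publish();
        System.out.println(consume()); // prints 42 once publish has run
    }
}
```

In practice most code should prefer volatile or synchronized; explicit fences are a low‑level tool for lock‑free data structures.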
The Dominant Programmer
Resources and tutorials for programmers' advanced learning journey. Advanced tracks in Java, Python, and C#. Blog: https://blog.csdn.net/badao_liumang_qizhi