Why Java’s Memory Model Matters: Unveiling the Hidden Rules of Concurrency
This article explains the hardware memory hierarchy, cache‑coherency issues, processor optimizations and instruction reordering, and shows how the Java Memory Model (JMM) defines eight operations to guarantee visibility, atomicity and ordering for multithreaded Java programs.
Why Do We Need a Memory Model?
To answer this, we first need to understand the traditional computer hardware memory architecture.
Hardware Memory Architecture
(1) CPU – Large servers often have multiple CPUs, each with multiple cores, allowing true concurrent execution of Java threads.
(2) CPU Register – Registers are inside the CPU and are orders of magnitude faster than main memory.
(3) CPU Cache Memory – On‑chip caches (L1, L2, and often L3) act as a buffer between the fast registers and the much slower main memory.
(4) Main Memory – The large, slower memory that backs the caches.
Cache‑Coherency Problem
Because the speed gap between CPU and main memory is huge, caches are introduced. When multiple CPUs or cores share the same main memory region, each may cache the data independently, leading to inconsistent copies.
Protocols such as MSI, MESI, MOSI, and Dragon are used to maintain consistency.
Processor Optimizations and Instruction Reordering
To improve performance, processors may execute instructions out of order so that their execution units stay fully utilized; this is commonly called processor optimization.
Modern compilers, including Java’s JIT, also perform similar optimizations.
Reordering can be classified into three types:
(1) Compiler‑level reordering – the compiler may rearrange statements as long as single‑thread semantics are unchanged.
(2) Instruction‑level parallelism – the processor may overlap or reorder the execution of independent instructions.
(3) Memory‑system reordering – caches and write buffers can make loads and stores appear to occur out of order.
Concurrency Problems in Java
Those familiar with Java concurrency know three classic issues: visibility, atomicity, and ordering. They correspond directly to cache coherency, processor optimization, and instruction reordering, respectively.
Simply disabling caches or forbidding optimizations would solve the problems but would cripple performance.
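The visibility problem is easy to reproduce: if one thread spins on a plain boolean flag that another thread sets, the reader may keep seeing a stale cached copy forever. A minimal sketch (class and field names are illustrative) of how `volatile` fixes this:

```java
// Sketch of the visibility guarantee: without `volatile` on `ready`, the
// reader thread may spin forever on a stale cached copy of the flag.
// The volatile write publishes `payload` as well, via happens-before.
public class VisibilityDemo {
    private static volatile boolean ready = false; // remove volatile and the reader may hang
    private static int payload = 0;                // made visible through the volatile flag

    // Returns the value the reader thread observed once `ready` became visible.
    static int runOnce() throws InterruptedException {
        final int[] observed = new int[1];
        Thread reader = new Thread(() -> {
            while (!ready) { }          // spins until the writer's update is visible
            observed[0] = payload;      // happens-after the volatile write below
        });
        reader.start();
        payload = 42;   // ordinary write ...
        ready = true;   // ... published to other threads by this volatile write
        reader.join();
        return observed[0];             // guaranteed to be 42
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("reader observed payload = " + runOnce());
    }
}
```

Because the write to `payload` precedes the volatile write to `ready` in program order, any thread that observes `ready == true` is guaranteed to also see `payload == 42`.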
Therefore, a memory model is defined to regulate reads and writes of shared memory. The Java Memory Model (JMM) solves these concurrency issues by restricting compiler and processor optimizations and by inserting memory barriers.
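A classic place where these barriers matter is double‑checked locking. Without `volatile`, the store of the new object reference can be reordered ahead of the constructor's field writes, so another thread could observe a half‑constructed object. A hedged sketch (the class is illustrative, the idiom is standard):

```java
// Double-checked locking: `volatile` on `instance` inserts the barrier that
// prevents the reference store from being reordered before the constructor's
// field writes, so no thread can ever see a half-constructed Singleton.
public class Singleton {
    private static volatile Singleton instance; // volatile forbids the dangerous reordering
    private final int value;

    private Singleton() { this.value = 42; }

    public static Singleton getInstance() {
        if (instance == null) {                  // first check, without the lock
            synchronized (Singleton.class) {
                if (instance == null) {          // second check, under the lock
                    instance = new Singleton();
                }
            }
        }
        return instance;
    }

    public int getValue() { return value; }
}
```

Dropping the `volatile` keyword keeps the code compiling and usually working, which is exactly why reordering bugs are so hard to catch by testing.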
Java Memory Model
Different languages implement the same conceptual model with variations. Below we focus on Java’s implementation.
Relation Between JVM Runtime Memory Areas and Hardware Memory
The JVM divides runtime memory into logical regions such as the heap and the stack; these are abstractions of the JVM and have no direct counterpart in the hardware memory architecture.
Both the stack and heap can reside in caches as well as in main memory, so there is no direct one‑to‑one mapping.
Java Threads and Main Memory
The JMM defines:
All variables are stored in main memory.
Each thread has a private local memory (working memory) that holds a copy of shared variables.
A thread must perform all operations on a variable (reads and writes) in its local memory; it may never operate on main memory directly.
Threads cannot directly access each other’s local memory.
Thread Communication Example
Suppose two threads each increment a shared variable whose initial value is 1. Each thread must copy the value into its local memory, increment the copy, and write the result back to main memory. Without synchronization, both threads may copy the value 1, and the final result can be 2 instead of the expected 3.
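The lost‑update scenario above can be sketched with a guarded counter; the class and method names are illustrative. `synchronized` makes the whole read‑modify‑write sequence exclusive, so no thread can copy a stale value:

```java
// Two threads incrementing a shared variable: each increment is really a
// read-modify-write of a working-memory copy, and without coordination both
// threads can read the same stale value (a "lost update").
public class IncrementDemo {
    private int count = 1; // shared variable, initial value 1 as in the text

    // synchronized gives each increment exclusive read-modify-write access;
    // without it, both threads could read 1 and the final result could be 2.
    synchronized void increment() { count++; }

    int get() { return count; }

    static int twoThreadsIncrementing(int timesEach) throws InterruptedException {
        IncrementDemo demo = new IncrementDemo();
        Runnable task = () -> { for (int i = 0; i < timesEach; i++) demo.increment(); };
        Thread a = new Thread(task), b = new Thread(task);
        a.start(); b.start();
        a.join();  b.join();             // join establishes happens-before for get()
        return demo.get();               // always 1 + 2 * timesEach
    }
}
```

Removing the `synchronized` keyword makes the result nondeterministic: with enough iterations the final count will usually come up short.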
The JMM defines eight operations to control interaction between main memory and local memory:
lock: lock a variable in main memory for exclusive use by a thread.
unlock: release a locked variable.
read: transfer a variable’s value from main memory to the thread’s working memory.
load: place the read value into the working memory’s copy.
use: make the working‑memory value available to the execution engine.
assign: write a value from the execution engine into working memory.
store: transfer a working‑memory value back to main memory.
write: write the stored value into main memory.
Note: “working memory” is synonymous with “local memory”.
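The eight operations are a specification device rather than an API, but they can be mapped conceptually onto an ordinary `synchronized` block. The class below is an illustrative sketch; the comments label which JMM operation each step corresponds to:

```java
// Conceptual mapping of the eight JMM operations onto a synchronized block.
// The operation names in the comments are the JMM's; the code itself is an
// ordinary lock-guarded counter.
public class EightOpsSketch {
    private int shared = 0;
    private final Object monitor = new Object();

    void update() {
        synchronized (monitor) {   // lock: the variable becomes exclusive to this thread
            int local = shared;    // read + load: main memory -> working-memory copy
            local = local + 1;     // use + assign: the execution engine reads and updates the copy
            shared = local;        // the updated copy sits in working memory ...
        }                          // unlock: store + write flush it back to main memory
    }

    int value() {
        synchronized (monitor) { return shared; } // read + load + use under the same lock
    }
}
```

Pairing `read` with `load` and `store` with `write` is deliberate in the JMM: each pair must occur together and in order, which is what guarantees that a value never gets "stuck" halfway between main memory and a thread's working memory.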
Takeaway
Because of the speed gap between CPU and main memory, multi‑level caches are introduced, which solve the speed mismatch but create cache‑coherency problems.
To avoid disastrous inconsistencies, a memory model abstracts the hardware and defines rules for reads, writes, and ordering.
Java’s JMM addresses visibility, atomicity, and ordering issues caused by cache‑coherency, processor optimizations, and instruction reordering, and it does so through the eight operations listed above.
macrozheng
Dedicated to Java tech sharing and dissecting top open-source projects. Topics include Spring Boot, Spring Cloud, Docker, Kubernetes and more. Author’s GitHub project “mall” has 50K+ stars.