
Understanding the Java Memory Model: Data Sharing, Race Conditions, and Visibility Solutions

This article explains the Java Memory Model: how it validates the reads in an execution trace, the split between thread-shared and thread-exclusive memory areas, a data-race scenario with example code, the resulting visibility challenges, and the JMM-based solutions, such as volatile, synchronized, and memory barriers, that prevent harmful reordering.

Xiaokun's Architecture Exploration Notes

1. JMM Model Description

Given a program and an execution trace, the JMM acts as a validator of the trace's legality: for each read in the trace, it checks whether the write that the read observes is permitted by its rules.

In other words, the JMM constrains behavior, not implementation: however the code is compiled and executed, the program's observable results must be among those the JMM allows.

Because of this, implementers are free to transform code, including reordering operations or removing unnecessary synchronization, as long as the observable results remain legal.

2. JMM Data Sharing and Competition

Thread Shared and Exclusive Areas

Thread‑shared area: the method area and the heap in the JVM runtime data area. Variables stored here are visible to all threads and may suffer data races (unsafe concurrent reads and writes).

Thread‑exclusive area: per‑thread private memory (e.g., local variables, ThreadLocal, ThreadLocalRandom) that does not experience data races.
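As a small illustration of the exclusive area, here is a hedged ThreadLocal sketch (the class and method names are my own, not from the article): each thread that touches the variable gets its own independently initialized copy, so concurrent access needs no synchronization.

```java
public class ThreadLocalDemo {
    // Each thread sees its own copy of this value; there is no shared write.
    private static final ThreadLocal<Integer> LOCAL =
            ThreadLocal.withInitial(() -> 0);

    // Runs the write/read in a fresh thread and returns what that thread saw.
    static int runInThread(int value) {
        final int[] seen = new int[1];
        Thread t = new Thread(() -> {
            LOCAL.set(value);      // write confined to this thread's copy
            seen[0] = LOCAL.get(); // reads back only its own value
        });
        t.start();
        try {
            t.join();
        } catch (InterruptedException e) {
            throw new IllegalStateException(e);
        }
        return seen[0];
    }

    public static void main(String[] args) {
        System.out.println(runInThread(1)); // the worker thread sees its own 1
        System.out.println(LOCAL.get());    // main thread's copy is untouched: 0
    }
}
```

Because every thread works on its own copy, no interleaving of `runInThread` calls can make one thread observe another thread's value.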

JMM diagram
Thread Communication Generates Data Races
<code>// Shared constants and fields (originally split across constant.java,
// shared.java, producer.java, and consumer.java)
class DataRace {
    static final int P = 10;
    static final int C = 20;

    static int pwrite = 0; // written by producer, read by consumer
    static int cwrite = 0; // written by consumer, read by producer
    static int pread = 0;  // producer's observation of cwrite
    static int cread = 0;  // consumer's observation of pwrite

    static final Runnable producer = () -> {
        pread = cwrite; // producer reads consumer's cwrite
        pwrite = P;     // producer publishes P
    };

    static final Runnable consumer = () -> {
        cread = pwrite; // consumer reads producer's pwrite
        cwrite = C;     // consumer publishes C
    };
}
</code>

Result analysis under program order: suppose we observe pread == C and cread == P. pread == C requires cwrite = C to execute before pread = cwrite; since program order places cread = pwrite before cwrite = C, it follows that cread = pwrite happens before pread = cwrite. Symmetrically, cread == P requires pwrite = P before cread = pwrite, and program order places pread = cwrite before pwrite = P, so pread = cwrite happens before cread = pwrite. The two conclusions contradict each other, so under program order this result is impossible.

Problem: does a write performed by one thread guarantee that another thread reads the written value? No: because each thread has its own working memory, a read may return a stale (dirty) value from a cache rather than the latest write.

Possible execution order after JMM‑permitted reordering: since neither thread's two statements share a data dependence, each thread may execute its write before its read. An order such as cwrite = C, pwrite = P, pread = cwrite, cread = pwrite then yields pread == C and cread == P simultaneously.

Data race definition: one thread writes a variable while another thread reads or writes the same variable, with no synchronization ordering the two accesses.

Consequences: Reordering optimizations permitted by JMM can cause output that differs from the programmer’s expectation, potentially breaking business logic.
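To make the fix concrete, here is a hedged sketch (class and method names are my own): an unsynchronized count++ is itself a read‑modify‑write data race that can lose updates, while declaring the methods synchronized orders the accesses under one lock and restores the expected total.

```java
public class SafeCounter {
    private int count = 0;

    // synchronized makes the read-modify-write atomic and makes its result
    // visible to the next thread that acquires the same lock.
    public synchronized void increment() { count++; }
    public synchronized int get() { return count; }

    // Hammers one counter from several threads; with synchronization the
    // total is exactly threads * perThread (a bare int could come up short).
    public static int run(int threads, int perThread) {
        SafeCounter counter = new SafeCounter();
        Thread[] workers = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            workers[i] = new Thread(() -> {
                for (int j = 0; j < perThread; j++) counter.increment();
            });
            workers[i].start();
        }
        for (Thread t : workers) {
            try {
                t.join();
            } catch (InterruptedException e) {
                throw new IllegalStateException(e);
            }
        }
        return counter.get();
    }

    public static void main(String[] args) {
        System.out.println(run(4, 100000)); // 400000 when synchronized
    }
}
```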

JMM Concurrency Issues

Read may not see the most recent write (visibility problem caused by caches).

Reordering for performance can make results diverge from expectations.

3. JMM Visibility Solutions

Thread Working Memory

The JMM abstracts each thread's working memory (local memory), covering the thread stack (local variables, method parameters, exception‑handler parameters) as well as hardware buffers such as CPU caches.

Interaction between working memory and main memory is governed by the JMM, providing visibility guarantees.

Working memory and main memory

Cache‑related solutions: the JMM mandates that certain constructs (volatile reads and writes, synchronized entry and exit, final‑field semantics, memory‑synchronizing instructions) force working memory to be synchronized with main memory, so reads observe the latest writes instead of stale cached values.
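A minimal sketch of the volatile solution (the class name is my own): the volatile write to ready happens-before the reader's volatile read that observes it, which also makes the earlier plain write to data visible to the reader.

```java
public class VolatilePublish {
    static int data = 0;                   // plain field, published via the flag
    static volatile boolean ready = false; // volatile flag carries the ordering

    static int produceAndConsume() {
        Thread writer = new Thread(() -> {
            data = 42;    // plain write
            ready = true; // volatile write: may not be reordered before data = 42
        });
        final int[] observed = new int[1];
        Thread reader = new Thread(() -> {
            while (!ready) { }  // spin until the volatile write becomes visible
            observed[0] = data; // happens-before via ready guarantees 42 here
        });
        reader.start();
        writer.start();
        try {
            writer.join();
            reader.join();
        } catch (InterruptedException e) {
            throw new IllegalStateException(e);
        }
        return observed[0];
    }

    public static void main(String[] args) {
        System.out.println(produceAndConsume()); // prints 42
    }
}
```

Without volatile on ready, the reader could spin forever on a stale flag or see ready == true while data is still 0; the volatile flag rules out both outcomes.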

Reordering

Rule: as‑if‑serial – despite any compiler or processor reordering, a single‑threaded program’s observable behavior must remain unchanged.
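A tiny sketch of the rule (names are my own): the first two assignments have no data dependence, so the compiler or CPU may execute them in either order, yet the single-threaded result cannot change.

```java
public class AsIfSerial {
    static int compute() {
        int a = 2;     // independent of b: may be reordered after it
        int b = 3;     // independent of a: may be reordered before it
        int c = a * b; // depends on both, so it must execute last
        return c;      // always 6 in a single thread, whatever the order
    }

    public static void main(String[] args) {
        System.out.println(compute()); // prints 6
    }
}
```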

Types of reordering:

Compiler reordering: the Java server‑mode compiler may reorder code as long as single‑thread semantics are preserved.

Processor reordering: CPUs may reorder instructions when no data dependencies exist.

Solutions: the compiler honors JMM‑specific markers (e.g., synchronization flags) and refrains from the forbidden reorderings; memory barriers are inserted before machine code is generated to prevent processor reordering.
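Since Java 9 the JDK also exposes explicit barrier operations on java.lang.invoke.VarHandle; this sketch (field and method names are my own) shows the release/acquire pairing the text describes. In ordinary code volatile or synchronized is preferable; raw fences are an expert tool.

```java
import java.lang.invoke.VarHandle;

public class FenceSketch {
    static int data = 0;
    static boolean ready = false; // plain fields; the fences supply the ordering

    static void publish() {
        data = 42;
        VarHandle.releaseFence(); // writes above may not move below the fence
        ready = true;
    }

    static int consume() {
        boolean seen = ready;
        VarHandle.acquireFence(); // reads below may not move above the fence
        return seen ? data : -1;
    }

    public static void main(String[] args) {
        publish();
        System.out.println(consume()); // 42 once publish has completed
    }
}
```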

Reference for memory‑barrier types: CPU Cache and Memory Barriers – https://mp.weixin.qq.com/s?__biz=MzI5MTIyODc4NA==&mid=2247483921&idx=1&sn=f304d07bf4d42cb864289ba045d7b1dc&chksm=ec129f0edb651618971893cbc38b4efb3e12420a804ca4aecb082703ae4edb4526ffd89a5526&scene=21#wechat_redirect

Tags: Java, Concurrency, Memory Model, Visibility, Data Race, Reordering
Written by

Xiaokun's Architecture Exploration Notes

10 years of backend architecture design | AI engineering infrastructure, storage architecture design, and performance optimization | Former senior developer at NetEase, Douyu, Inke, etc.
