
Understanding Java volatile: Memory Semantics, Barriers, and Practical Examples

This article explains the purpose, usage scenarios, and memory semantics of Java's volatile keyword, demonstrates its behavior with code examples and memory barrier concepts, and summarizes how volatile ensures visibility and ordering across threads.

Xiaokun's Architecture Exploration Notes

In earlier sections we discussed the use of volatile and its atomicity. A volatile-qualified variable guarantees that one thread's write becomes visible to other threads: every read is effectively forced to fetch the latest value from main memory.

1. Purpose and Usage Scenarios of volatile

volatile rules and purpose

Under the happens-before rule, a write to a volatile variable happens-before every subsequent read of that same variable.

By its memory semantics, a volatile read immediately observes the latest value written by another thread, preserving write-then-read order.

Single reads and writes of a volatile variable are atomic (even for long and double), but compound actions such as i++ are not.

At the processor level, the JVM inserts memory barriers that invalidate stale working-memory copies and prevent CPU reordering.
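The third rule above is easy to misread, so here is a minimal sketch (class and field names are hypothetical) showing that volatile makes single reads and writes atomic but does not make count++ atomic, since ++ is a read-modify-write of three steps:

```java
// Hypothetical demo: volatile guarantees atomic single reads/writes,
// but a compound action like volatileCount++ can still lose updates.
import java.util.concurrent.atomic.AtomicInteger;

public class VolatileAtomicityDemo {
    static volatile int volatileCount = 0;               // ++ is NOT atomic
    static final AtomicInteger atomicCount = new AtomicInteger();

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 100_000; i++) {
                volatileCount++;                         // read, add, write: racy
                atomicCount.incrementAndGet();           // single atomic step
            }
        };
        Thread t1 = new Thread(task), t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println("volatile count: " + volatileCount); // often < 200000
        System.out.println("atomic count  : " + atomicCount.get());
    }
}
```

When a counter needs atomic increments, AtomicInteger (or a lock) is the right tool; volatile alone only covers visibility.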

Scenarios for using volatile

When a variable must stay visible across the caches of multiple cores on a multi-core machine.

When the data lives in main memory (the shared memory defined by the JMM) and the lightweight guarantees of volatile are cheaper than taking a lock.
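A common pattern that fits both scenarios is safe publication: a plain payload is written first and then a volatile flag is set, and the happens-before rule guarantees the reader sees the payload once it sees the flag. A minimal sketch (class and field names are hypothetical):

```java
// Hypothetical sketch: publishing a plain field safely via a volatile flag.
public class SafePublication {
    static int payload;                  // plain, non-volatile field
    static volatile boolean ready;       // volatile guard

    static void writer() {
        payload = 42;                    // 1: plain write
        ready = true;                    // 2: volatile write happens-before the read below
    }

    static void reader() {
        while (!ready) {
            // spin until the volatile write becomes visible
        }
        // happens-before guarantees payload is 42 here, never a stale 0
        System.out.println("payload = " + payload);
    }

    public static void main(String[] args) throws InterruptedException {
        Thread r = new Thread(SafePublication::reader);
        r.start();
        writer();
        r.join();
    }
}
```

Note that the guarantee covers everything written before the volatile store, not just the volatile field itself, which is why the lock-free handoff is cheaper than synchronizing both fields.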

2. volatile Memory Semantics

Source code
<code>// VolatileVisibility.java
import java.util.concurrent.TimeUnit;

public class VolatileVisibility {
    // remove volatile here and the consumer may spin forever on a stale value
    static volatile boolean finished = false;

    static void producer() {
        try {
            TimeUnit.SECONDS.sleep(1);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        finished = true;
        System.out.println("have finished product done ....");
    }

    static void consumer() {
        while (!finished) {
            // busy-wait until the producer's write becomes visible
        }
        System.out.println("have consume product done " + finished);
    }

    public static void main(String[] args) throws InterruptedException {
        Thread consumerThread = new Thread(VolatileVisibility::consumer);
        Thread producerThread = new Thread(VolatileVisibility::producer);
        consumerThread.start();
        producerThread.start();
        producerThread.join();
        consumerThread.join();
    }
}</code>
Memory demonstration diagram

Code initialization copies data to working memory.

Producer writes; volatile ensures the write is flushed to main memory.

Execution result: with volatile, the consumer thread sees the updated flag, exits the loop, and the program terminates normally. Without volatile, the consumer can keep reading a stale false and loop indefinitely.

Potential question: why does reading a volatile variable cause the working cache to become invalid?

This leads to the discussion of memory barriers and the implementation of memory semantics.

3. Implementation of volatile Memory Barriers

Memory barriers are implemented at the processor level; we examine JVM source for the relevant CPU architectures.

About ARM (instruction set) reference

dmb: Data Memory Barrier

ish: DMB operation only applies to the inner shareable domain.

ishld: DMB operation that waits only for loads to complete, applying to the inner shareable domain.

JVM aarch64 volatile memory barrier description

ldar&lt;x&gt; (load-acquire) is the volatile read instruction.

stlr&lt;x&gt; (store-release) is the volatile write instruction.

<code>// AArch64 has ldar<x> and stlr<x> instructions which we can safely
// use to implement volatile reads and writes. For a volatile read
// we simply need
//   ldar<x>
// and for a volatile write we need
//   stlr<x>
</code>

Read/write barrier flow

<code>// read barrier
//   ldr<x>            // read volatile data
//   dmb ishld         // memory barrier to prevent reordering with following a++
// write barrier
//   dmb ish           // memory barrier before write
//   str<x>            // write volatile data
//   dmb ish           // memory barrier after write
</code>

Read/write barrier pseudocode implementation

<code>// Demo.java
class Demo {
    volatile int j = 0;
    int a;

    // threadA: volatile write
    void run() {
        j = 9;
    }

    // threadB: volatile read followed by a plain operation
    void read() {
        a = j;
        a++;
    }
}
</code>

Conversion to aarch pseudo‑instructions

<code>// write barrier
threadA run(){
    dmb ish            // prevent reordering before volatile write
    str<x>            // j = 9, cache invalidated and flushed to main memory
    dmb ish            // prevent reordering after write
}
// read barrier
threadB run(){
    ldr<x>            // read j
    dmb ishld         // memory barrier, prevent reordering with a++
    a++;
}
</code>

Result analysis: reordering exists so the CPU can prioritize certain accesses and keep working out of its local cache instead of going to main memory. For a volatile write, the inserted barriers forbid reordering around the store and force the cache line to be invalidated and the value flushed to main memory. Symmetrically, the barrier after a volatile read forces the latest value to be fetched from main memory and prevents later operations from being reordered before the read.
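This ordering guarantee is exactly what makes double-checked locking correct. A minimal sketch (class name hypothetical): without volatile, the store of the reference could be reordered before the constructor's field writes, exposing a half-built object to another thread.

```java
// Hypothetical sketch: double-checked locking relies on the volatile
// write barrier so the reference is published only after construction.
public class Singleton {
    private static volatile Singleton instance;  // volatile forbids the reordering
    private final int value;

    private Singleton() {
        this.value = 42;
    }

    public static Singleton getInstance() {
        Singleton local = instance;              // single volatile read on the fast path
        if (local == null) {
            synchronized (Singleton.class) {
                local = instance;
                if (local == null) {
                    // volatile write: constructor writes cannot move after it
                    instance = local = new Singleton();
                }
            }
        }
        return local;
    }

    public int getValue() {
        return value;
    }
}
```

The local variable avoids a second volatile read on the common path; the correctness itself comes from the store-release semantics of the volatile write.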

4. Summary of volatile Operation

Question: if the compiler is still free to reorder code, doesn't that defeat volatile?

The compiler may still reorder ordinary code, but when the bytecode is translated to machine code, the presence of volatile triggers the insertion of memory barriers that forbid reordering around the access at the CPU level, so the two mechanisms are not contradictory.

Conclusion

Writing a volatile variable invalidates the local cache (working memory) and flushes the value to main memory.

Reading a volatile variable invalidates the local cache and reloads the latest value from main memory.

The memory semantics of volatile rely on memory barriers, ensuring that reads always see the most recent writes.

Thank you for taking the time to read; if you found this useful, sharing or liking is greatly appreciated.

Java · Concurrency · volatile · Memory Model · memory-barrier
Written by

Xiaokun's Architecture Exploration Notes

10 years of backend architecture design | AI engineering infrastructure, storage architecture design, and performance optimization | Former senior developer at NetEase, Douyu, Inke, etc.
