
Understanding Java volatile: Visibility, Atomicity, and Instruction Reordering

This article provides a comprehensive overview of Java's volatile keyword, explaining its pronunciation, role in the Java Memory Model, the three visibility guarantees, how it prevents instruction reordering, its lack of atomicity, practical code examples, and best‑practice scenarios such as double‑checked locking.


1. How to pronounce volatile?

In English it is pronounced /ˈvɒlətaɪl/ (British) or /ˈvɑːlətl/ (American). In Java, volatile is a lightweight synchronization mechanism.

2. What does volatile do in Java?

Ensures visibility of changes across threads.

Does not guarantee atomicity.

Prevents instruction reordering.

3. Java Memory Model (JMM)

The JMM defines how variables are stored in main memory and accessed by threads via their own working memory (stack, registers, caches). It specifies rules such as all variables residing in main memory, each thread having its own working memory, and all interactions occurring through main memory.

3.1 Why do we need JMM?

It abstracts away hardware and OS memory access differences.

3.2 What is JMM?

Defines access rules for program variables.

Specifies how variable values are stored in memory.

Specifies how variable values are retrieved from memory.

3.3 Two memory areas in JMM

Main memory: the shared area holding object instances, corresponding roughly to the Java heap backed by physical RAM.

Working memory: each thread's private abstraction, covering its stack frames, CPU registers, and caches.

3.4 JMM specifications

All variables are stored in main memory.

Main memory is part of the JVM memory.

Each thread has its own working memory.

Working memory holds a copy of variables from main memory.

Thread operations on variables must occur in working memory.

Threads cannot directly access another thread's working memory.

Variable values are transferred between threads via main memory.

4. Example: volatile visibility

Without volatile, a main thread may never see a change made by a child thread to a shared field.

class ShareData {
    int number = 0;
    public void setNumberTo100() { this.number = 100; }
}

Running a child thread that sleeps 3 seconds then calls setNumberTo100() while the main thread loops on number == 0 demonstrates the visibility problem.
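A complete, runnable version of this experiment might look like the following (the class name VisibilityDemo and the printed messages are illustrative). With number declared volatile, the main loop exits shortly after the child thread's write; remove volatile and the main thread may spin forever on a stale cached value.

```java
class ShareData {
    volatile int number = 0; // remove volatile to reproduce the visibility problem
    public void setNumberTo100() { this.number = 100; }
}

public class VisibilityDemo {
    public static void main(String[] args) {
        ShareData data = new ShareData();

        new Thread(() -> {
            try { Thread.sleep(3000); } catch (InterruptedException ignored) { }
            data.setNumberTo100();
            System.out.println("child thread updated number to 100");
        }, "child").start();

        // Busy-wait until the child thread's write becomes visible.
        while (data.number == 0) {
        }
        System.out.println("main thread sees number = " + data.number);
    }
}
```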

5. Why can other threads perceive the update?

Modern CPUs keep their caches coherent using a protocol such as MESI, typically implemented through bus snooping. When a volatile write occurs, a memory barrier forces the new value out to main memory, and other CPUs invalidate their stale cached copies, ensuring they see the update.

5.1 Cache coherence

Without coherence, different CPUs may hold divergent copies of the same variable.

5.2 MESI protocol

MESI names the four states a cache line can be in: Modified, Exclusive, Shared, and Invalid. When one CPU writes to a shared line, the protocol marks the copies held by other CPUs as Invalid, so their next read must fetch the updated value.

5.3 Bus snooping

Each CPU continuously monitors the bus to detect writes to memory locations it caches.

5.4 Bus storm warning

Heavy use of snooping can saturate the bus; therefore volatile should be used judiciously, preferring locks when appropriate.

6. Volatile does not guarantee atomicity

When 20 threads each increment a shared volatile int number 1000 times, the final value is often less than the expected 20000 because number++ is a read‑modify‑write sequence, compiled to several bytecode instructions (getstatic, iconst_1, iadd, putstatic), and threads can interleave between the read and the write.

public static volatile int number = 0;
public static void increase() { number++; }

Even though the volatile read sees the latest value, other threads may modify the variable between the read and write steps.
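The experiment described above can be sketched as follows (the CountDownLatch and the class name are illustrative scaffolding for waiting on the 20 threads):

```java
import java.util.concurrent.CountDownLatch;

public class VolatileAtomicityDemo {
    public static volatile int number = 0;

    public static void increase() { number++; } // not atomic: read, add, write

    public static void main(String[] args) throws InterruptedException {
        CountDownLatch latch = new CountDownLatch(20);
        for (int i = 0; i < 20; i++) {
            new Thread(() -> {
                for (int j = 0; j < 1000; j++) increase();
                latch.countDown();
            }).start();
        }
        latch.await(); // wait for all 20 threads to finish
        // Usually prints a value below 20000 due to lost updates.
        System.out.println("final value: " + number + " (expected 20000)");
    }
}
```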

7. Ensuring atomicity

7.1 Synchronized block

public synchronized static void increase() { number++; }

This guarantees atomicity but incurs blocking overhead.

7.2 AtomicInteger

static AtomicInteger atomicInteger = new AtomicInteger();
atomicInteger.getAndIncrement();

AtomicInteger provides lock‑free atomic increments, consistently yielding the correct total.
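A full version of the same 20‑thread experiment using AtomicInteger might look like this (class name and latch scaffolding are illustrative); because getAndIncrement() performs the whole read‑modify‑write atomically via CAS, no updates are lost:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicInteger;

public class AtomicIntegerDemo {
    static AtomicInteger atomicInteger = new AtomicInteger();

    public static void main(String[] args) throws InterruptedException {
        CountDownLatch latch = new CountDownLatch(20);
        for (int i = 0; i < 20; i++) {
            new Thread(() -> {
                for (int j = 0; j < 1000; j++) {
                    atomicInteger.getAndIncrement(); // atomic CAS-based increment
                }
                latch.countDown();
            }).start();
        }
        latch.await();
        System.out.println(atomicInteger.get()); // always 20000
    }
}
```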

8. Preventing instruction reordering

Compilers and CPUs may reorder instructions for performance. Volatile inserts memory barriers (StoreStore, StoreLoad, LoadLoad, LoadStore) to forbid such reordering around volatile reads/writes.

8.1 Why reorder?

To improve throughput while preserving single‑thread semantics.

8.2 Types of reordering

Compiler‑level optimization.

CPU‑level instruction‑level parallelism.

Memory‑system reordering due to caches and buffers.

8.3 Example of reordering affecting correctness

If a thread executes num = 1; flag = true; but the writes are reordered, another thread may observe flag == true while num is still 0, leading to unexpected results.
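A minimal sketch of this hazard (class and method names are illustrative; writer() and reader() are meant to run on different threads). Declaring flag volatile forbids the problematic reordering: the ordinary write to num cannot move past the volatile write to flag.

```java
public class ReorderDemo {
    static int num = 0;
    static volatile boolean flag = false; // without volatile, reordering is allowed

    static void writer() {
        num = 1;     // ordinary write
        flag = true; // volatile write: num = 1 must be visible before this
    }

    static void reader() {
        if (flag) {
            // With volatile, a thread that sees flag == true
            // is guaranteed to also see num == 1 here.
            System.out.println("num = " + num);
        }
    }
}
```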

8.4 How volatile inserts barriers

Before a volatile write, a StoreStore barrier ensures prior ordinary writes are visible; after the write, a StoreLoad barrier prevents subsequent reads from moving before the write. Similar LoadLoad and LoadStore barriers are applied to volatile reads.

9. Common volatile use cases

Double‑checked locking for lazy singleton initialization requires the instance reference to be declared volatile to avoid seeing a partially constructed object.

public class VolatileSingleton {
    private static volatile VolatileSingleton instance = null;

    private VolatileSingleton() { } // prevent external instantiation

    public static VolatileSingleton getInstance() {
        if (instance == null) {                       // first check, without locking
            synchronized (VolatileSingleton.class) {
                if (instance == null) {               // second check, under the lock
                    instance = new VolatileSingleton();
                }
            }
        }
        return instance;
    }
}

10. Why use volatile if it lacks atomicity?

Volatile is a lightweight synchronization mechanism with lower performance cost than synchronized or explicit locks. It is ideal for simple status flags or loop‑exit conditions where writes do not depend on the current value.

Use volatile only when:

Writes to the variable do not depend on its current value, or only a single thread ever updates it.

The variable does not participate in invariants with other state variables.

Locking is not required for any other reason while the variable is accessed.
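A typical example of such a status flag might look like this (the Worker class and its method names are illustrative); the write in stop() does not depend on the current value, so volatile alone is sufficient:

```java
public class Worker implements Runnable {
    private volatile boolean running = true;

    public boolean isRunning() { return running; }

    // Called from another thread; the volatile write is immediately
    // visible to the thread spinning in run().
    public void stop() { running = false; }

    @Override
    public void run() {
        while (running) {
            // do work; each iteration re-reads the volatile flag,
            // so the loop exits promptly after stop() is called
        }
    }
}
```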

11. Differences between volatile and synchronized

volatile can only be applied to fields; synchronized can protect methods or blocks.

volatile does not guarantee atomicity; synchronized does.

volatile never blocks; synchronized may block.

volatile is a lightweight synchronization mechanism, not a true lock; synchronized is a heavier‑weight locking mechanism.

Both provide visibility and ordering guarantees.

12. Summary

volatile ensures visibility across threads.

volatile prevents instruction reordering around volatile reads and writes.

volatile does not guarantee atomicity for compound actions like ++ .

Reads and writes of 64‑bit long and double fields are guaranteed atomic when declared volatile; without volatile they may be split into two 32‑bit operations.

volatile is useful in double‑checked locking and simple flag checks.

Source code is available at https://gitee.com/jayh2018/PassJava-Learning .

Written by

Sohu Tech Products

A knowledge-sharing platform for Sohu's technology products. As a leading Chinese internet brand with media, video, search, and gaming services and over 700 million users, Sohu continuously drives tech innovation and practice. We’ll share practical insights and tech news here.
