Unlock Java’s Memory Model: How Threads See Shared Data
This article explains the Java Memory Model, how the JVM organizes memory into thread stacks and heap, how threads interact with shared variables, the impact of hardware memory architecture, and practical techniques like volatile and synchronized to ensure visibility and avoid race conditions.
Java Memory Model (JMM)
The Java Memory Model defines how the Java Virtual Machine (JVM) works with computer memory (RAM) and is essential for designing correct concurrent programs. It specifies when and how different threads see values written by other threads to shared variables and how synchronization is performed.
The original JMM had shortcomings, so it was revised in Java 1.5; that revised model is still the one used in current Java versions (Java 14 and later).
JVM Internal Memory Model
The JVM divides memory into thread stacks and the heap. Each thread has its own stack that stores method-call information and local variables. Primitive local variables (boolean, byte, short, char, int, long, float, double) reside entirely on the stack and are invisible to other threads.
Objects created by the application are stored on the heap, regardless of which thread creates them. This includes wrapper objects for primitive types (Byte, Integer, Long, etc.). Static class variables also reside on the heap.
Each thread’s stack contains its own copies of local variables. When a thread executes a method that accesses an object, the reference is stored on the stack while the object itself remains on the heap. An object’s member variables are stored on the heap together with the object, even when they are of primitive types.
The following code illustrates these concepts:
public class MyRunnable implements Runnable {

    public void run() {
        methodOne();
    }

    public void methodOne() {
        int localVariable1 = 45;
        MySharedObject localVariable2 = MySharedObject.sharedInstance;
        // ... more operations using local variables.
        methodTwo();
    }

    public void methodTwo() {
        Integer localVariable1 = new Integer(99);
        // ... more operations using local variables.
    }
}
public class MySharedObject {

    // static variable pointing to a shared instance
    public static final MySharedObject sharedInstance = new MySharedObject();

    // member variables pointing to objects on the heap
    public Integer object2 = new Integer(22);
    public Integer object4 = new Integer(44);

    public long member1 = 12345;
    public long member2 = 67890;
}

When two threads run run(), each creates its own copy of the primitive localVariable1 on its stack, while both copies of localVariable2 reference the same shared object on the heap (the static sharedInstance). The object’s member variables (object2, object4, member1, member2) also reside on the heap and are shared.
Hardware Memory Architecture
Modern CPUs have registers, one or more cache levels, and main memory (RAM). Each CPU can run a thread, and multiple CPUs can run concurrently. Registers are the fastest memory, followed by CPU caches, then main memory.
When a CPU needs data from main memory, it loads a portion of it into its cache, and possibly into registers for computation. Writes go first to the cache and are only later flushed back to main memory. Data moves between the cache and main memory in units called cache lines.
Bridging JMM and Hardware Architecture
Hardware does not distinguish between thread stacks and heap; both reside in main memory and may be cached. This can cause visibility problems when threads read/write shared variables.
Visibility of Shared Objects
If a thread updates a shared object without using volatile or synchronization, other threads may not see the change because the updated value may remain in a CPU cache and not be flushed to main memory.
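The classic symptom is a worker thread spinning on a flag that another thread sets. The sketch below (class and field names are illustrative, not from the article) declares the flag volatile, which guarantees the worker eventually sees the main thread's write; without volatile, the loop could spin forever on a stale cached value.

```java
public class VisibilityDemo {

    // volatile forces reads and writes of this flag to go through
    // main memory, so the worker reliably observes the update.
    // Without volatile, the worker might spin forever on a stale value.
    static volatile boolean stopRequested = false;

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            long spins = 0;
            while (!stopRequested) {
                spins++; // busy-wait until the write becomes visible
            }
            System.out.println("worker stopped after " + spins + " spins");
        });
        worker.start();

        Thread.sleep(100);    // let the worker start spinning
        stopRequested = true; // volatile write: visible to the worker
        worker.join();        // terminates because the worker sees the flag
    }
}
```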
Race Conditions
When multiple threads concurrently modify a shared variable without proper synchronization, each may work on its own cached copy, leading to lost updates. For example, two threads increment a shared count variable; without synchronization, the final value may increase by only one instead of two.
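The lost-update scenario can be sketched as follows (names are illustrative): two threads each increment a shared counter 100,000 times with no synchronization. Because count++ is three separate steps (read, add, write), increments from the two threads can overwrite each other, and the final total is often less than 200,000.

```java
public class LostUpdateDemo {

    static int count; // shared, with no synchronization

    static int runUnsynchronized(int iterations) throws InterruptedException {
        count = 0;
        Runnable task = () -> {
            for (int i = 0; i < iterations; i++) {
                count++; // read-modify-write: not atomic
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        return count;
    }

    public static void main(String[] args) throws InterruptedException {
        int result = runUnsynchronized(100_000);
        // Often prints less than 200000: concurrent increments get lost.
        System.out.println("final count = " + result);
    }
}
```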
Solutions
Use the volatile keyword to force a variable to be read from and written to main memory directly. Use synchronized blocks to ensure that only one thread executes a critical section at a time; entering a synchronized block causes all accessed variables to be read from main memory, and exiting flushes updates back, regardless of whether they are declared volatile.
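Putting this together, here is a minimal sketch of the synchronized fix for the counter scenario (class and method names are mine): because increment() and get() are synchronized on the same object, the read-add-write sequence cannot interleave, and each lock release flushes the update back to main memory.

```java
public class SynchronizedCounterDemo {

    static class Counter {
        private int count = 0;

        // Entering the synchronized method acquires the object's lock and
        // re-reads state from main memory; leaving it flushes the update back.
        public synchronized void increment() { count++; }

        public synchronized int get() { return count; }
    }

    static int runSynchronized(int iterations) throws InterruptedException {
        Counter counter = new Counter();
        Runnable task = () -> {
            for (int i = 0; i < iterations; i++) {
                counter.increment();
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        return counter.get();
    }

    public static void main(String[] args) throws InterruptedException {
        // Always prints 200000: no increments are lost.
        System.out.println(runSynchronized(100_000));
    }
}
```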
Cognitive Technology Team