
How False Sharing Slows Java Programs and How to Eliminate It

This article explains what false sharing is in Java and how cache lines and cache-line invalidation cause performance penalties, and it provides concrete code examples and @Contended annotation techniques to detect and fix false sharing for faster multithreaded applications.

Cognitive Technology Team

False Sharing Illustration

False sharing occurs when two threads on different CPUs write to two distinct variables that reside in the same CPU cache line, causing the cache line to be invalidated in the other CPU's cache each time a write occurs.


Cache Lines

When a CPU reads data from lower‑level caches or main memory, it reads an entire cache line (typically 64 bytes) rather than a single byte, so multiple variables are often stored in the same cache line.
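The arithmetic behind this is simple: a 64-byte cache line holds eight 8-byte longs, so two adjacent long fields of the same object will almost always land on the same line. A minimal sketch (the 64-byte line size is an assumption; the actual value is hardware-dependent):

```java
public class CacheLineMath {
    public static void main(String[] args) {
        // 64 bytes is typical on x86-64; the real value is hardware-dependent
        int cacheLineBytes = 64;
        int longBytes = Long.BYTES; // 8 bytes per long
        // eight longs fit in one line, so neighboring fields usually share it
        System.out.println("longs per cache line: " + (cacheLineBytes / longBytes));
    }
}
```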

Cache Line Invalidation

Writing to a memory address in a cache line marks that line as dirty; the line must then be synchronized with other CPUs that hold a copy, causing those copies to become invalid and requiring a refresh before they can be accessed.

False Sharing Results in a Performance Penalty

Each time a CPU’s cache line is invalidated by another CPU, the invalidated line must be refreshed, forcing the CPU to wait and reducing the number of instructions it can execute, which leads to noticeable performance degradation.

Java False Sharing Code Example

The following classes demonstrate how false sharing can arise in a Java program.

public class Counter {
    // count1 and count2 are adjacent long fields, so they will
    // typically end up in the same 64-byte cache line
    public volatile long count1 = 0;
    public volatile long count2 = 0;
}
public class FalseSharingExample {
    public static void main(String[] args) {
        Counter counter1 = new Counter();
        Counter counter2 = counter1; // both threads share one Counter instance, so count1 and count2 sit in the same cache line
        long iterations = 1_000_000_000L;
        Thread thread1 = new Thread(() -> {
            long startTime = System.currentTimeMillis();
            for (long i = 0; i < iterations; i++) {
                counter1.count1++;
            }
            long endTime = System.currentTimeMillis();
            System.out.println("total time: " + (endTime - startTime));
        });
        Thread thread2 = new Thread(() -> {
            long startTime = System.currentTimeMillis();
            for (long i = 0; i < iterations; i++) {
                counter2.count2++;
            }
            long endTime = System.currentTimeMillis();
            System.out.println("total time: " + (endTime - startTime));
        });
        thread1.start();
        thread2.start();
    }
}

On a typical laptop this version takes about 36 seconds.

When each thread uses a separate Counter instance, the runtime drops to roughly 9 seconds—a four‑fold speed‑up—because the two counters no longer share the same cache line.

public class FalseSharingExample {
    public static void main(String[] args) {
        Counter counter1 = new Counter();
        Counter counter2 = new Counter();
        long iterations = 1_000_000_000L;
        Thread thread1 = new Thread(() -> {
            long startTime = System.currentTimeMillis();
            for (long i = 0; i < iterations; i++) {
                counter1.count1++;
            }
            long endTime = System.currentTimeMillis();
            System.out.println("total time: " + (endTime - startTime));
        });
        Thread thread2 = new Thread(() -> {
            long startTime = System.currentTimeMillis();
            for (long i = 0; i < iterations; i++) {
                counter2.count2++;
            }
            long endTime = System.currentTimeMillis();
            System.out.println("total time: " + (endTime - startTime));
        });
        thread1.start();
        thread2.start();
    }
}

Fixing False Sharing

The key to eliminating false sharing is to redesign data structures so that variables accessed by different threads are not placed in the same cache line. Storing them in separate objects is a simple and effective approach.
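Another classic approach is manual padding: inserting enough dummy long fields between the hot fields to push them onto different cache lines. A sketch of the idea, assuming the JVM keeps the declared field order (it is free not to, which is why the @Contended annotation covered next is the supported mechanism):

```java
// Hand-padded counter: seven 8-byte padding longs between the hot fields.
public class PaddedCounter {
    public volatile long count1 = 0;
    // 7 * 8 = 56 padding bytes; together with count1's 8 bytes this spans
    // a full 64-byte line, pushing count2 onto a different cache line
    long p1, p2, p3, p4, p5, p6, p7;
    public volatile long count2 = 0;

    public static void main(String[] args) {
        PaddedCounter c = new PaddedCounter();
        c.count1++;
        c.count2++;
        System.out.println(c.count1 + " " + c.count2);
    }
}
```

A caveat with manual padding: the JIT or a future JVM may eliminate or reorder seemingly unused fields, so this is best treated as a sketch of the layout idea rather than a guaranteed fix.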

Using @Contended Annotation to Prevent False Sharing

The @Contended annotation (introduced in Java 8 as sun.misc.Contended, moved to jdk.internal.vm.annotation in Java 9) tells the JVM to insert padding around the annotated field or class so that it occupies its own cache line.

public class Counter1 {
    @jdk.internal.vm.annotation.Contended
    public volatile long count1 = 0;
    public volatile long count2 = 0;
}
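Because jdk.internal.vm.annotation is an internal package, using it from application code typically requires two extra switches: exporting the package to your code and disabling the JVM's default restriction of @Contended to JDK classes. A sketch, assuming your code runs in the unnamed module:

```shell
# export the internal package so application code can compile against it (Java 9+)
javac --add-exports java.base/jdk.internal.vm.annotation=ALL-UNNAMED Counter1.java

# @Contended is ignored for non-JDK classes unless RestrictContended is disabled
java --add-exports java.base/jdk.internal.vm.annotation=ALL-UNNAMED \
     -XX:-RestrictContended Counter1
```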

Using @Contended on a Class

@jdk.internal.vm.annotation.Contended
public class Counter1 {
    public volatile long count1 = 0;
    public volatile long count2 = 0;
}

Using @Contended on Fields

public class Counter1 {
    @jdk.internal.vm.annotation.Contended
    public volatile long count1 = 0;
    @jdk.internal.vm.annotation.Contended
    public volatile long count2 = 0;
}

Field Grouping

public class Counter1 {
    @jdk.internal.vm.annotation.Contended("group1")
    public volatile long count1 = 0;
    @jdk.internal.vm.annotation.Contended("group1")
    public volatile long count2 = 0;
    @jdk.internal.vm.annotation.Contended("group2")
    public volatile long count3 = 0;
}

In this example, count1 and count2 share a padding group, while count3 is placed in a separate group. Fields in the same group are laid out together, and padding is inserted only between groups, so count1 and count2 may still share a cache line with each other while count3 is isolated from both.

Configuring Padding Size

The default @Contended padding is 128 bytes. You can adjust it with the JVM option -XX:ContendedPaddingWidth, for example -XX:ContendedPaddingWidth=64. The appropriate value depends on the hardware cache-line size (commonly 64 bytes); matching the padding to the actual cache-line size avoids false sharing without wasting memory on unnecessary padding.
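Rather than guessing, you can query the actual cache-line size of the machine; on Linux, getconf exposes it. The 64 below is only the common case, not a given:

```shell
# print the L1 data cache line size in bytes (commonly 64 on x86-64)
getconf LEVEL1_DCACHE_LINESIZE

# match the JVM's contended padding to the measured value
java -XX:-RestrictContended -XX:ContendedPaddingWidth=64 FalseSharingExample
```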

Tags: Java, Performance, Cache, concurrency, false sharing, Contended