
Deep Dive into Java ConcurrentHashMap: Implementation, Concurrency Mechanisms, and Core Methods

This article provides a comprehensive analysis of Java's ConcurrentHashMap, covering its historical evolution, key internal fields, the thread‑safe put and remove operations, the complex resizing and transfer mechanisms, and auxiliary methods such as size, get, and clear, all illustrated with original source code.

Architecture Digest

HashMap is a commonly used container in Java, but it is not thread‑safe; before JDK 1.8, concurrent rehashing could even corrupt the linked lists and produce infinite loops. Starting with JDK 1.8, HashMap was redesigned to use an array‑plus‑linked‑list‑plus‑red‑black‑tree structure, and its rehash algorithm was changed to a copy‑based approach that avoids direct manipulation of the original chain.

ConcurrentHashMap is the concurrent version of HashMap, offering thread safety and superior performance under high contention. The article examines its implementation from several angles:

Historical version evolution

Important member fields

Implementation of the put method for concurrent insertion

Implementation of the remove method for concurrent deletion

Brief introductions to other common methods

1. Historical version evolution

In JDK 1.7, ConcurrentHashMap used segment locks, dividing the hash table into multiple Segment objects, each protected by its own lock, allowing concurrent access to different segments. In JDK 1.8, the segment‑lock design was removed in favor of CAS and synchronized control, and the underlying storage mirrors the JDK 1.8 HashMap implementation (array + linked list + red‑black tree).

2. Important member fields

transient volatile Node<K,V>[] table;

Represents the hash table, analogous to HashMap's table field; it is lazily initialized on the first insertion.

/**
 * The next table to use; non‑null only while resizing.
 */
private transient volatile Node<K,V>[] nextTable;

Used during resizing to hold the new table.

private transient volatile long baseCount;

Holds the base of the element count, similar in spirit to HashMap's size field. Under contention, updates spill into a CounterCell array, so the actual size is baseCount plus the sum of all counter cells (see size() below).

private transient volatile int sizeCtl;

Controls table initialization and resizing: -1 means the table is being initialized; a value below -1 indicates an in‑progress resize (encoding the number of active resizing threads); a positive value is the next resize threshold (or, before initialization, the initial capacity); 0 is the default.

3. put method – concurrent addition

The public put simply delegates to putVal:

public V put(K key, V value) {
    return putVal(key, value, false);
}

putVal performs argument validation, computes the hash, and then enters a loop that either initializes the table, inserts a new node via CAS when the bucket is empty, or locks the bucket head to handle linked‑list or tree insertion. The method also assists an in‑progress resize when it encounters a ForwardingNode (whose hash field is MOVED).

final V putVal(K key, V value, boolean onlyIfAbsent) {
    if (key == null || value == null) throw new NullPointerException();
    int hash = spread(key.hashCode());
    int binCount = 0;
    for (Node<K,V>[] tab = table;;) {
        // abridged: table initialization, empty-bucket CAS, helpTransfer,
        // or lock-based insertion; the loop exits via break, or returns
        // the old value early when the key is already present
    }
    // after insertion, update the count and possibly trigger a resize
    addCount(1L, binCount);
    return null;
}

The insertion logic distinguishes three cases:

If the bucket is empty, a CAS inserts a new Node.

If the bucket contains a normal node, the bucket head is locked with synchronized, the list is traversed, and the new node is appended or the existing value is replaced.

If the bucket has been transformed into a red‑black tree (TreeBin), the tree insertion routine putTreeVal is used.

When the list length reaches TREEIFY_THRESHOLD (8), the bucket is converted to a tree, provided the table length is at least MIN_TREEIFY_CAPACITY (64); otherwise the table is resized instead.
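The three cases above are visible from the caller's side through the return values of put and putIfAbsent. A minimal sketch (class name illustrative):

```java
import java.util.concurrent.ConcurrentHashMap;

public class PutDemo {
    public static void main(String[] args) {
        ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<>();

        // Empty bucket: the value is inserted (internally via CAS on the bucket head).
        System.out.println(map.put("a", 1));            // null (no previous mapping)

        // Existing key: the old value is replaced under the bucket lock.
        System.out.println(map.put("a", 2));            // 1 (previous value)

        // onlyIfAbsent variant: putIfAbsent leaves an existing mapping untouched.
        System.out.println(map.putIfAbsent("a", 3));    // 2 (existing value kept)
        System.out.println(map.get("a"));               // 2
    }
}
```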

4. Transfer and resizing

During resizing, each bucket is processed by one of the participating threads. The ForwardingNode marks a bucket that has already been migrated. The helpTransfer method allows a thread that encounters a forwarding node to assist the ongoing resize.

static final class ForwardingNode<K,V> extends Node<K,V> {
    final Node<K,V>[] nextTable;
    ForwardingNode(Node<K,V>[] tab) {
        super(MOVED, null, null, null);
        this.nextTable = tab;
    }
}

The core transfer method divides the table into chunks (controlled by stride) and lets each participating thread move its assigned buckets to the new table, handling both linked‑list and tree buckets. After all buckets are processed, the old table reference is replaced with the new one and sizeCtl is updated to the next resize threshold.
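The cooperative resize can be exercised from the outside with a simple stress sketch (class name and thread/entry counts arbitrary): starting from the default capacity, concurrent writers force several transfers, any writer that hits a ForwardingNode helps migrate buckets, and no entries are lost:

```java
import java.util.concurrent.ConcurrentHashMap;

public class ResizeDemo {
    public static void main(String[] args) throws InterruptedException {
        // Start at the default capacity (16) so the table must grow many times.
        ConcurrentHashMap<Integer, Integer> map = new ConcurrentHashMap<>();
        int threads = 4, perThread = 50_000;
        Thread[] workers = new Thread[threads];
        for (int t = 0; t < threads; t++) {
            final int base = t * perThread;  // disjoint key ranges per thread
            workers[t] = new Thread(() -> {
                for (int i = 0; i < perThread; i++)
                    map.put(base + i, base + i);  // writers hitting MOVED help the transfer
            });
            workers[t].start();
        }
        for (Thread w : workers) w.join();
        System.out.println(map.size());  // 200000: nothing lost across resizes
    }
}
```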

5. remove method – concurrent deletion

The removal process mirrors put: locate the bucket, assist an in‑progress resize if a ForwardingNode is seen, lock the bucket head, unlink the target node, and finally call addCount with a negative delta to adjust the size.
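From the caller's side, this surfaces as remove(key), which returns the removed value, and the conditional remove(key, value) overload, which only succeeds when the current mapping matches. A brief sketch (class name illustrative):

```java
import java.util.concurrent.ConcurrentHashMap;

public class RemoveDemo {
    public static void main(String[] args) {
        ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<>();
        map.put("a", 1);

        // Unconditional remove: returns the removed value (addCount runs with delta -1).
        System.out.println(map.remove("a"));      // 1

        // Conditional remove: succeeds only if the key currently maps to the given value.
        map.put("b", 2);
        System.out.println(map.remove("b", 99));  // false (value mismatch, mapping kept)
        System.out.println(map.remove("b", 2));   // true
    }
}
```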

6. Other common methods

size() aggregates baseCount and the values stored in the CounterCell array to estimate the total number of entries; under concurrent updates the result is a snapshot, not an exact count.

public int size() {
    long n = sumCount();
    return (n < 0L) ? 0 : (n > (long)Integer.MAX_VALUE) ? Integer.MAX_VALUE : (int)n;
}

get(Object key) performs a lock‑free lookup by computing the hash and traversing the bucket or tree.

public V get(Object key) {
    Node<K,V>[] tab; Node<K,V> e; int n, eh; K ek;
    int h = spread(key.hashCode());
    if ((tab = table) != null && (n = tab.length) > 0 &&
        (e = tabAt(tab, (n - 1) & h)) != null) {
        if ((eh = e.hash) == h) {
            if ((ek = e.key) == key || (ek != null && key.equals(ek)))
                return e.val;
        } else if (eh < 0)
            return (e = e.find(h, key)) != null ? e.val : null;
        while ((e = e.next) != null) {
            if (e.hash == h && ((ek = e.key) == key || (ek != null && key.equals(ek))))
                return e.val;
        }
    }
    return null;
}
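Usage-wise, the lock-free read path means get never blocks writers; missing keys simply return null, and getOrDefault covers that case without a second lookup. A short sketch (class name illustrative):

```java
import java.util.concurrent.ConcurrentHashMap;

public class GetDemo {
    public static void main(String[] args) {
        ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<>();
        map.put("x", 42);

        // get reads the bucket head via a volatile read (tabAt) and walks
        // the list or tree without taking any lock.
        System.out.println(map.get("x"));              // 42
        System.out.println(map.get("y"));              // null
        System.out.println(map.getOrDefault("y", 0));  // 0
    }
}
```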

clear() iterates over the table, locks each bucket, nulls the entries, and updates the count with a negative delta.

public void clear() {
    long delta = 0L; // negative number of deletions
    int i = 0;
    Node<K,V>[] tab = table;
    while (tab != null && i < tab.length) {
        Node<K,V> f = tabAt(tab, i);
        if (f == null) ++i;
        else if (f.hash == MOVED) {
            tab = helpTransfer(tab, f);
            i = 0;
        } else {
            synchronized (f) {
                if (tabAt(tab, i) == f) {
                    Node<K,V> p = (f.hash >= 0 ? f : (f instanceof TreeBin) ? ((TreeBin<K,V>)f).first : null);
                    while (p != null) { --delta; p = p.next; }
                    setTabAt(tab, i++, null);
                }
            }
        }
    }
    if (delta != 0L) addCount(delta, -1);
}
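Since clear() locks and empties one bucket at a time rather than freezing the whole table, it is not atomic with respect to concurrent writers. A minimal sketch (class name illustrative):

```java
import java.util.concurrent.ConcurrentHashMap;

public class ClearDemo {
    public static void main(String[] args) {
        ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<>();
        map.put("a", 1);
        map.put("b", 2);

        map.clear();  // empties each bucket in turn under its own lock
        System.out.println(map.isEmpty());  // true

        // Note: a concurrent writer could repopulate already-cleared buckets
        // before clear() returns, so the map is not guaranteed to be empty
        // afterward under contention.
    }
}
```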

The article concludes that ConcurrentHashMap’s design, especially the cooperative resizing and fine‑grained locking, showcases sophisticated engineering to achieve high concurrency without sacrificing performance.

Tags: Java, Concurrency, JDK, ConcurrentHashMap, data-structures