Unlocking Java’s ConcurrentHashMap: Inside Constructors, Put, and Resizing

This article provides a thorough, English‑language walkthrough of Java 8+ ConcurrentHashMap’s source code, covering its constructor logic, load‑factor handling, CAS‑based insertion, node optimizations, the put method, element‑count updates, multi‑threaded resizing, treeification, and auxiliary operations such as get, remove, and computeIfAbsent.

JD Cloud Developers

Below is a detailed explanation of the source implementation of ConcurrentHashMap, a thread‑safe hash table designed to minimise update contention while keeping read concurrency high and space usage comparable to HashMap. In Java 8 and later it relies on CAS, synchronized, spin‑retry loops, and volatile fields to guarantee thread safety.

Constructor

The constructor receives an initial capacity, a load factor, and a concurrency level. It validates the arguments, raises the capacity to at least the concurrency level, computes the target size from the load factor, rounds it up to the next power of two, and stores the result in sizeCtl. The default capacity is 16 and the default load factor is 0.75F, which cannot be changed after construction.

public class ConcurrentHashMap<K,V> extends AbstractMap<K,V> implements ConcurrentMap<K,V>, Serializable {
    private transient volatile int sizeCtl;
    private static final int MAXIMUM_CAPACITY = 1 << 30;
    /**
     * Constructor called by all other constructors.
     * @param initialCapacity the initial size
     * @param loadFactor the load factor (not stored, only used for size calculation)
     * @param concurrencyLevel the desired level of concurrency
     */
    public ConcurrentHashMap(int initialCapacity, float loadFactor, int concurrencyLevel) {
        if (!(loadFactor > 0.0f) || initialCapacity < 0 || concurrencyLevel <= 0)
            throw new IllegalArgumentException();
        if (initialCapacity < concurrencyLevel)
            initialCapacity = concurrencyLevel;
        long size = (long)(1.0 + (long)initialCapacity / loadFactor);
        int cap = (size >= (long)MAXIMUM_CAPACITY) ? MAXIMUM_CAPACITY : tableSizeFor((int)size);
        this.sizeCtl = cap;
    }
    // round up to power of two
    private static final int tableSizeFor(int c) {
        int n = -1 >>> Integer.numberOfLeadingZeros(c - 1);
        return (n < 0) ? 1 : (n >= MAXIMUM_CAPACITY) ? MAXIMUM_CAPACITY : n + 1;
    }
}
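To make the power-of-two rounding concrete, here is a small standalone sketch that reimplements the same tableSizeFor logic (the names mirror the JDK, but this is a copy for demonstration, not the JDK class itself):

```java
public class TableSizeForDemo {
    static final int MAXIMUM_CAPACITY = 1 << 30;

    // Same rounding logic as the JDK's tableSizeFor: round c up to the next power of two.
    static int tableSizeFor(int c) {
        int n = -1 >>> Integer.numberOfLeadingZeros(c - 1);
        return (n < 0) ? 1 : (n >= MAXIMUM_CAPACITY) ? MAXIMUM_CAPACITY : n + 1;
    }

    public static void main(String[] args) {
        System.out.println(tableSizeFor(16)); // 16 (already a power of two)
        System.out.println(tableSizeFor(17)); // 32
        System.out.println(tableSizeFor(33)); // 64
    }
}
```

The `-1 >>> Integer.numberOfLeadingZeros(c - 1)` trick produces a mask of all ones below the highest set bit, so adding 1 yields the next power of two without a loop.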

Load factor

The load factor is used only in the local size calculation above and is never stored; all subsequent capacity‑threshold calculations use the fixed default of 0.75F. This choice follows the JDK documentation and keeps the implementation simple.
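As a sketch, the constructor's arithmetic can be replayed in isolation to see exactly how a custom load factor affects only the initial table size (this reimplements the calculation shown above rather than reading the private sizeCtl field):

```java
public class InitialCapacityDemo {
    static final int MAXIMUM_CAPACITY = 1 << 30;

    static int tableSizeFor(int c) {
        int n = -1 >>> Integer.numberOfLeadingZeros(c - 1);
        return (n < 0) ? 1 : (n >= MAXIMUM_CAPACITY) ? MAXIMUM_CAPACITY : n + 1;
    }

    // Mirrors the constructor: size = 1 + initialCapacity / loadFactor, rounded up to a power of two.
    static int initialTableSize(int initialCapacity, float loadFactor) {
        long size = (long)(1.0 + (long)initialCapacity / loadFactor);
        return (size >= (long)MAXIMUM_CAPACITY) ? MAXIMUM_CAPACITY : tableSizeFor((int)size);
    }

    public static void main(String[] args) {
        // loadFactor 0.75: 1 + 16/0.75 = 22.33 -> truncated to 22 -> rounds up to 32.
        System.out.println(initialTableSize(16, 0.75f)); // 32
        // loadFactor 0.5: 1 + 16/0.5 = 33 -> rounds up to 64.
        System.out.println(initialTableSize(16, 0.5f));  // 64
    }
}
```

A smaller load factor therefore buys a larger initial table up front, but the resize threshold afterwards is still computed with 0.75F.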

Put method

The core insertion method combines a CAS operation to place the first node in an empty bucket with a synchronized block for subsequent nodes. It also updates the element count via addCount and triggers treeification when a bucket exceeds the threshold.

public V put(K key, V value) {
    return putVal(key, value, false);
}
final V putVal(K key, V value, boolean onlyIfAbsent) {
    if (key == null || value == null) throw new NullPointerException();
    int hash = spread(key.hashCode());
    int binCount = 0;
    for (Node<K,V>[] tab = table;;) {
        Node<K,V> f; int n, i, fh; K fk; V fv;
        if (tab == null || (n = tab.length) == 0)
            tab = initTable();
        else if ((f = tabAt(tab, i = (n - 1) & hash)) == null) {
            if (casTabAt(tab, i, null, new Node<>(hash, key, value)))
                break; // successful CAS
        } else if ((fh = f.hash) == MOVED)
            tab = helpTransfer(tab, f);
        else {
            V oldVal = null;
            synchronized (f) {
                if (tabAt(tab, i) == f) {
                    if (fh >= 0) { // linked list
                        binCount = 1;
                        for (Node<K,V> e = f;; ++binCount) {
                            K ek;
                            if (e.hash == hash && ((ek = e.key) == key || (ek != null && key.equals(ek)))) {
                                oldVal = e.val;
                                if (!onlyIfAbsent) e.val = value;
                                break;
                            }
                            Node<K,V> pred = e;
                            if ((e = e.next) == null) {
                                pred.next = new Node<>(hash, key, value);
                                break;
                            }
                        }
                    } else if (f instanceof TreeBin) {
                        // tree insertion (omitted for brevity)
                    }
                }
            }
            if (binCount != 0) {
                if (binCount >= TREEIFY_THRESHOLD)
                    treeifyBin(tab, i);
                if (oldVal == null)
                    addCount(1L, binCount);
                return oldVal;
            }
        }
    }
    addCount(1L, binCount);
    return null;
}
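From the caller's side, these paths map onto the public API as follows; this usage sketch relies only on the standard java.util.concurrent methods:

```java
import java.util.concurrent.ConcurrentHashMap;

public class PutDemo {
    public static void main(String[] args) {
        ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<>();

        // put returns the previous value, or null if the key was absent.
        System.out.println(map.put("a", 1));          // null
        System.out.println(map.put("a", 2));          // 1 (old value returned, new one stored)

        // putIfAbsent takes the onlyIfAbsent == true path inside putVal.
        System.out.println(map.putIfAbsent("a", 3));  // 2 (existing value kept)
        System.out.println(map.get("a"));             // 2

        // Unlike HashMap, null keys and values are rejected outright.
        try {
            map.put(null, 1);
        } catch (NullPointerException e) {
            System.out.println("null key rejected");
        }
    }
}
```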

addCount (element‑count update)

The method first tries to update baseCount with a CAS. If that fails or counterCells already exists, it falls back to updating a cell in the CounterCell[] array, creating or expanding the array as needed. After the count is increased, it checks whether resizing is required.

private final void addCount(long x, int check) {
    CounterCell[] cs; long b, s;
    if ((cs = counterCells) != null || !U.compareAndSetLong(this, BASECOUNT, b = baseCount, s = b + x)) {
        CounterCell c; long v; int m;
        boolean uncontended = true;
        if (cs == null || (m = cs.length - 1) < 0 || (c = cs[ThreadLocalRandom.getProbe() & m]) == null ||
            !(uncontended = U.compareAndSetLong(c, CELLVALUE, v = c.value, v + x))) {
            fullAddCount(x, uncontended);
            return;
        }
        if (check <= 1) return;
        s = sumCount();
    }
    if (check >= 0) {
        Node<K,V>[] tab, nt; int n, sc;
        while (s >= (long)(sc = sizeCtl) && (tab = table) != null && (n = tab.length) < MAXIMUM_CAPACITY) {
            int rs = resizeStamp(n) << RESIZE_STAMP_SHIFT;
            if (sc < 0) { // already resizing
                // help transfer
                break;
            }
            if (U.compareAndSetInt(this, SIZECTL, sc, rs + 2))
                transfer(tab, null);
        }
    }
}
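The baseCount/CounterCell scheme is the same striping idea that java.util.concurrent.atomic.LongAdder exposes publicly: CAS a shared base value, and fall back to per-thread cells under contention. As a hedged illustration of the pattern (using LongAdder as a stand-in, not ConcurrentHashMap's private internals):

```java
import java.util.concurrent.atomic.LongAdder;

public class StripedCountDemo {
    public static void main(String[] args) throws InterruptedException {
        LongAdder counter = new LongAdder(); // internally: a base value plus striped cells

        Thread[] threads = new Thread[4];
        for (int t = 0; t < threads.length; t++) {
            threads[t] = new Thread(() -> {
                for (int i = 0; i < 100_000; i++)
                    counter.increment(); // CAS on base, falling back to a cell on contention
            });
            threads[t].start();
        }
        for (Thread th : threads) th.join();

        // sum() adds the base plus every cell, just like ConcurrentHashMap's sumCount().
        System.out.println(counter.sum()); // 400000
    }
}
```

This is why size() on a ConcurrentHashMap is only an estimate under concurrent updates: the sum is taken without freezing the cells.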

Resizing (transfer)

Resizing is performed by the transfer method, which creates a new table twice as large and moves entries bucket by bucket. Multiple threads cooperate by claiming a range of buckets using the global transferIndex and a stride calculated from the number of CPUs. Each thread processes its range, converting linked‑list buckets to red‑black trees when necessary.

private final void transfer(Node<K,V>[] tab, Node<K,V>[] nextTab) {
    int n = tab.length, stride;
    if ((stride = (NCPU > 1) ? (n >>> 3) / NCPU : n) < MIN_TRANSFER_STRIDE)
        stride = MIN_TRANSFER_STRIDE;
    if (nextTab == null) {
        try {
            Node<K,V>[] nt = (Node<K,V>[])new Node<?,?>[n << 1];
            nextTab = nt;
        } catch (Throwable ex) {
            sizeCtl = Integer.MAX_VALUE;
            return;
        }
        nextTable = nextTab;
        transferIndex = n;
    }
    int nextn = nextTab.length;
    ForwardingNode<K,V> fwd = new ForwardingNode<>(nextTab);
    boolean advance = true, finishing = false;
    for (int i = 0, bound = 0; ; ) {
        Node<K,V> f; int fh;
        while (advance) {
            int nextIndex, nextBound;
            if (--i >= bound || finishing) {
                advance = false;
            } else if ((nextIndex = transferIndex) <= 0) {
                i = -1; advance = false;
            } else if (U.compareAndSetInt(this, TRANSFERINDEX, nextIndex,
                    nextBound = (nextIndex > stride ? nextIndex - stride : 0))) {
                bound = nextBound;
                i = nextIndex - 1;
                advance = false;
            }
        }
        if (i < 0 || i >= n) {
            int sc;
            if (finishing) {
                nextTable = null;
                table = nextTab;
                sizeCtl = (n << 1) - (n >>> 1);
                return;
            }
            if (U.compareAndSetInt(this, SIZECTL, sc = sizeCtl, sc - 1)) {
                if ((sc - 2) != resizeStamp(n) << RESIZE_STAMP_SHIFT)
                    return;
                finishing = advance = true;
                i = n;
            }
        } else if ((f = tabAt(tab, i)) == null) {
            advance = casTabAt(tab, i, null, fwd);
        } else if ((fh = f.hash) == MOVED) {
            advance = true;
        } else {
            synchronized (f) {
                if (tabAt(tab, i) == f) {
                    if (fh >= 0) {
                        // split list nodes into low and high based on hash bit
                        // (code omitted for brevity)
                    } else if (f instanceof TreeBin) {
                        // split tree nodes (code omitted)
                    }
                }
            }
        }
    }
}
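The stride computation at the top of transfer determines how large a bucket range each helper thread claims. A quick sketch replaying that arithmetic (reimplemented here for demonstration; MIN_TRANSFER_STRIDE is 16 in the JDK):

```java
public class StrideDemo {
    static final int MIN_TRANSFER_STRIDE = 16;

    // Mirrors transfer(): divide n buckets across ncpu threads in chunks of n/8/ncpu,
    // but never hand out a range smaller than MIN_TRANSFER_STRIDE.
    static int stride(int n, int ncpu) {
        int s = (ncpu > 1) ? (n >>> 3) / ncpu : n;
        return (s < MIN_TRANSFER_STRIDE) ? MIN_TRANSFER_STRIDE : s;
    }

    public static void main(String[] args) {
        System.out.println(stride(64, 8));   // 16 (64/8/8 = 1, clamped to the minimum)
        System.out.println(stride(4096, 4)); // 128 (4096/8/4 = 128)
        System.out.println(stride(1024, 1)); // 1024 (single CPU: one thread takes everything)
    }
}
```

Small tables are therefore moved by a single thread, while large tables are carved into many independently claimable ranges via CAS on transferIndex.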

Treeification (treeifyBin)

When a bucket’s linked list exceeds TREEIFY_THRESHOLD (8) and the table size is at least 64, the list is converted into a red‑black tree to improve lookup performance.

private final void treeifyBin(Node<K,V>[] tab, int index) {
    Node<K,V> b; int n;
    if (tab != null) {
        if ((n = tab.length) < MIN_TREEIFY_CAPACITY)
            tryPresize(n << 1);
        else if ((b = tabAt(tab, index)) != null && b.hash >= 0) {
            synchronized (b) {
                if (tabAt(tab, index) == b) {
                    TreeNode<K,V> hd = null, tl = null;
                    for (Node<K,V> e = b; e != null; e = e.next) {
                        TreeNode<K,V> p = new TreeNode<>(e.hash, e.key, e.val, null, null);
                        if ((p.prev = tl) == null) hd = p; else tl.next = p;
                        tl = p;
                    }
                    setTabAt(tab, index, new TreeBin<>(hd));
                }
            }
        }
    }
}
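Treeification is invisible from the public API, but it can be exercised by deliberately colliding keys. The sketch below uses a hypothetical BadKey whose hashCode is constant, forcing every entry into one bin; with more than TREEIFY_THRESHOLD entries and a table of at least 64 buckets, that bin is treeified internally, and lookups remain correct (and, per the JDK design, O(log n) within the bin):

```java
import java.util.concurrent.ConcurrentHashMap;

public class CollisionDemo {
    // Hypothetical test key: every instance hashes to the same bin.
    static final class BadKey implements Comparable<BadKey> {
        final int id;
        BadKey(int id) { this.id = id; }
        @Override public int hashCode() { return 42; } // constant hash: worst-case collisions
        @Override public boolean equals(Object o) { return o instanceof BadKey && ((BadKey) o).id == id; }
        // Comparable lets the tree order equal-hash keys instead of tie-breaking by identity.
        @Override public int compareTo(BadKey o) { return Integer.compare(id, o.id); }
    }

    public static void main(String[] args) {
        // Initial capacity 128 ensures the table is >= MIN_TREEIFY_CAPACITY (64).
        ConcurrentHashMap<BadKey, Integer> map = new ConcurrentHashMap<>(128);
        for (int i = 0; i < 32; i++) map.put(new BadKey(i), i); // well past TREEIFY_THRESHOLD (8)
        System.out.println(map.get(new BadKey(17))); // 17 - lookups stay correct in the tree bin
        System.out.println(map.size());              // 32
    }
}
```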

Get method

The get operation first computes the hash, locates the bucket, and then either returns the value directly (if the first node matches), delegates to the node’s find method for tree or forwarding nodes, or walks the linked list.

public V get(Object key) {
    Node<K,V>[] tab; Node<K,V> e, p; int n, eh; K ek;
    int h = spread(key.hashCode());
    if ((tab = table) != null && (n = tab.length) > 0 &&
        (e = tabAt(tab, (n - 1) & h)) != null) {
        if ((eh = e.hash) == h) {
            if ((ek = e.key) == key || (ek != null && key.equals(ek)))
                return e.val;
        } else if (eh < 0)
            return (p = e.find(h, key)) != null ? p.val : null;
        while ((e = e.next) != null) {
            if (e.hash == h && ((ek = e.key) == key || (ek != null && key.equals(ek))))
                return e.val;
        }
    }
    return null;
}
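Because get takes no lock at all, reads proceed safely while another thread is inserting or even while the table is resizing. A minimal sketch of that guarantee:

```java
import java.util.concurrent.ConcurrentHashMap;

public class GetDemo {
    public static void main(String[] args) throws InterruptedException {
        ConcurrentHashMap<Integer, String> map = new ConcurrentHashMap<>();
        map.put(1, "one");

        // Writer drives the table through several resizes.
        Thread writer = new Thread(() -> {
            for (int i = 2; i < 10_000; i++) map.put(i, "v" + i);
        });
        writer.start();

        // Reader never blocks: get reads the volatile table and walks the bin lock-free;
        // during a resize, ForwardingNode.find redirects the lookup to the new table.
        for (int i = 0; i < 10_000; i++) {
            if (!"one".equals(map.get(1))) throw new AssertionError("lost entry");
        }
        writer.join();
        System.out.println(map.get(1));    // one
        System.out.println(map.get(9999)); // v9999
    }
}
```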

Remove method

Removal validates the key, locates the bucket, synchronises on the first node, and either unlinks a list node or removes a tree node, updating the element count via addCount when the map size changes.

public V remove(Object key) {
    return replaceNode(key, null, null);
}
final V replaceNode(Object key, V value, Object cv) {
    int hash = spread(key.hashCode());
    for (Node<K,V>[] tab = table;;) {
        Node<K,V> f; int n, i, fh;
        if (tab == null || (n = tab.length) == 0 ||
            (f = tabAt(tab, i = (n - 1) & hash)) == null)
            break;
        if ((fh = f.hash) == MOVED)
            tab = helpTransfer(tab, f);
        else {
            V oldVal = null; boolean validated = false;
            synchronized (f) {
                if (tabAt(tab, i) == f) {
                    if (fh >= 0) {
                        validated = true;
                        for (Node<K,V> e = f, pred = null;;) {
                            K ek;
                            if (e.hash == hash && ((ek = e.key) == key || (ek != null && key.equals(ek)))) {
                                V ev = e.val;
                                if (cv == null || cv == ev || (ev != null && cv.equals(ev))) {
                                    oldVal = ev;
                                    if (value != null) e.val = value;
                                    else if (pred != null) pred.next = e.next;
                                    else setTabAt(tab, i, e.next);
                                }
                                break;
                            }
                            pred = e;
                            if ((e = e.next) == null) break;
                        }
                    } else if (f instanceof TreeBin) {
                        validated = true;
                        TreeBin<K,V> t = (TreeBin<K,V>)f;
                        TreeNode<K,V> r, p;
                        if ((r = t.root) != null && (p = r.findTreeNode(hash, key, null)) != null) {
                            V pv = p.val;
                            if (cv == null || cv == pv || (pv != null && cv.equals(pv))) {
                                oldVal = pv;
                                if (value != null) p.val = value;
                                else if (t.removeTreeNode(p))
                                    setTabAt(tab, i, untreeify(t.first));
                            }
                        }
                    } else if (f instanceof ReservationNode)
                        throw new IllegalStateException("Recursive update");
                }
            }
            if (validated) {
                if (oldVal != null) {
                    if (value == null) addCount(-1L, -1);
                    return oldVal;
                }
                break;
            }
        }
    }
    return null;
}
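The cv ("compare value") parameter is what backs the ConcurrentMap conditional operations: remove(key) calls replaceNode(key, null, null), remove(key, value) calls replaceNode(key, null, value), and replace(key, old, new) calls replaceNode(key, new, old). A usage sketch of those semantics:

```java
import java.util.concurrent.ConcurrentHashMap;

public class RemoveDemo {
    public static void main(String[] args) {
        ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<>();
        map.put("a", 1);

        // Conditional removal: only removes when the current value matches cv.
        System.out.println(map.remove("a", 99)); // false (value mismatch, entry kept)
        System.out.println(map.remove("a", 1));  // true

        map.put("b", 1);
        // Conditional replace: a compare-and-set on the value, done under the bin lock.
        System.out.println(map.replace("b", 1, 2)); // true
        System.out.println(map.get("b"));           // 2

        // Unconditional removal returns the old value (or null if absent).
        System.out.println(map.remove("b"));        // 2
    }
}
```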

computeIfAbsent method

This method demonstrates the use of ReservationNode as a placeholder while the mapping function computes a value. It handles the common cases of an empty bucket, a matching first node, a linked list, or a tree, and performs treeification when the bin size reaches the threshold.

public V computeIfAbsent(K key, Function<? super K, ? extends V> mappingFunction) {
    if (key == null || mappingFunction == null)
        throw new NullPointerException();
    int h = spread(key.hashCode());
    V val = null; int binCount = 0;
    for (Node<K,V>[] tab = table;;) {
        Node<K,V> f; int n, i, fh; K fk; V fv;
        if (tab == null || (n = tab.length) == 0)
            tab = initTable();
        else if ((f = tabAt(tab, i = (n - 1) & h)) == null) {
            Node<K,V> r = new ReservationNode<>();
            synchronized (r) {
                if (casTabAt(tab, i, null, r)) {
                    binCount = 1;
                    Node<K,V> node = null;
                    try {
                        if ((val = mappingFunction.apply(key)) != null)
                            node = new Node<>(h, key, val);
                    } finally {
                        setTabAt(tab, i, node);
                    }
                }
            }
            if (binCount != 0) break;
        } else if ((fh = f.hash) == MOVED)
            tab = helpTransfer(tab, f);
        else if (fh == h && ((fk = f.key) == key || (fk != null && key.equals(fk))) && (fv = f.val) != null)
            return fv;
        else {
            boolean added = false;
            synchronized (f) {
                if (tabAt(tab, i) == f) {
                    if (fh >= 0) {
                        binCount = 1;
                        for (Node<K,V> e = f;; ++binCount) {
                            K ek;
                            if (e.hash == h && ((ek = e.key) == key || (ek != null && key.equals(ek)))) {
                                val = e.val; break;
                            }
                            Node<K,V> pred = e;
                            if ((e = e.next) == null) {
                                if ((val = mappingFunction.apply(key)) != null) {
                                    added = true;
                                    pred.next = new Node<>(h, key, val);
                                }
                                break;
                            }
                        }
                    } else if (f instanceof TreeBin) {
                        TreeBin<K,V> t = (TreeBin<K,V>)f;
                        TreeNode<K,V> r, p;
                        if ((r = t.root) != null && (p = r.findTreeNode(h, key, null)) != null)
                            val = p.val;
                        else if ((val = mappingFunction.apply(key)) != null) {
                            added = true;
                            t.putTreeVal(h, key, val);
                        }
                    } else if (f instanceof ReservationNode)
                        throw new IllegalStateException("Recursive update");
                }
            }
            if (binCount != 0) {
                if (binCount >= TREEIFY_THRESHOLD)
                    treeifyBin(tab, i);
                if (!added) return val;
                break;
            }
        }
    }
    if (val != null) addCount(1L, binCount);
    return val;
}
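The practical payoff of the ReservationNode mechanism is that the mapping function runs at most once per absent key, even under contention, and a null result leaves the map unchanged. A usage sketch:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

public class ComputeIfAbsentDemo {
    public static void main(String[] args) {
        ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<>();
        AtomicInteger calls = new AtomicInteger();

        // The mapping function runs only when the key is absent...
        map.computeIfAbsent("a", k -> { calls.incrementAndGet(); return 1; });
        // ...and is skipped entirely once a value is present.
        map.computeIfAbsent("a", k -> { calls.incrementAndGet(); return 2; });

        System.out.println(map.get("a")); // 1
        System.out.println(calls.get());  // 1 - function invoked exactly once

        // A null result installs nothing: the placeholder is cleared in the finally block.
        map.computeIfAbsent("b", k -> null);
        System.out.println(map.containsKey("b")); // false
    }
}
```

Note the documented caveat this code path enforces: the mapping function must not itself update the same map, or the ReservationNode check throws IllegalStateException("Recursive update").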

Overall, the source shows how ConcurrentHashMap balances lock‑free reads with fine‑grained per‑bin locking for updates, uses CAS and spin‑retry loops to minimise contention, and dynamically switches between linked lists and red‑black trees to keep lookups at O(1) expected time, with O(log n) worst‑case behaviour within a bin, even under high concurrency.

putTreeVal adds a node to a red‑black tree; its implementation follows the classic tree‑insertion algorithm and is omitted here for brevity.

Understanding these mechanisms helps developers write high‑performance concurrent code and appreciate the design trade‑offs made in the JDK.


Tags: Java, concurrency, multithreading, ConcurrentHashMap, data structures
Written by JD Cloud Developers

JD Cloud Developers (Developer of JD Technology) is a JD Technology Group platform offering technical sharing and communication for AI, cloud computing, IoT and related developers. It publishes JD product technical information, industry content, and tech event news. Embrace technology and partner with developers to envision the future.
