
Thread Reuse Pitfalls, ThreadLocal Misuse, and Proper Use of ConcurrentHashMap and CopyOnWriteArrayList

This article explains how thread reuse in a Tomcat thread pool can cause user‑information leakage when ThreadLocal is misused, analyzes the non‑atomic behavior of ConcurrentHashMap operations, demonstrates performance differences between locking and atomic methods, and warns against inappropriate use of CopyOnWriteArrayList in high‑write scenarios.

Selected Java Interview Questions

1 Thread Reuse Causes User Information Mix‑up

In production, a request sometimes receives another user's information because the code caches user data in a ThreadLocal. ThreadLocal isolates variables per thread, but when the thread pool reuses a thread, leftover data from a previous request can be read, producing incorrect user data.

1.1 Example

An Integer stored in a ThreadLocal is initially null. The code reads the value, sets the external request parameter into the ThreadLocal, reads again, and prints both values together with the thread name. Because Tomcat serves requests from a fixed-size thread pool, the same thread may still hold the previous request's value, so the first read can return stale data.
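The pattern above can be reproduced without Tomcat: a minimal sketch (class and method names are illustrative, not from the original article) that uses a single-thread executor to stand in for a one-thread worker pool.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ThreadLocalReuseDemo {
    // One cached slot per thread -- the buggy pattern from the article
    private static final ThreadLocal<Integer> CURRENT_USER = new ThreadLocal<>();

    // Simulates one request handled by a pooled thread: read, set, read again
    static String handleRequest(int userId) {
        Integer before = CURRENT_USER.get(); // stale value survives thread reuse
        CURRENT_USER.set(userId);
        Integer after = CURRENT_USER.get();
        return before + "->" + after;
    }

    public static void main(String[] args) throws Exception {
        // A single-thread pool stands in for Tomcat with max worker threads = 1
        ExecutorService pool = Executors.newSingleThreadExecutor();
        System.out.println(pool.submit(() -> handleRequest(1)).get()); // null->1
        System.out.println(pool.submit(() -> handleRequest(2)).get()); // 1->2  <- first read is user 1's stale id
        pool.shutdown();
    }
}
```

Because the executor guarantees both tasks run on the same thread, the second "request" deterministically observes the first one's leftover value.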

1.2 Bug Reproduction

Set Tomcat's maximum number of worker threads to 1 so the same thread always handles requests (note that in Spring Boot 2.3+ this property has moved to server.tomcat.threads.max):

server.tomcat.max-threads=1

A request from user 1 returns null then 1, as expected. A request from user 2 returns 1 then 2; the first read mistakenly gets user 1's ID because the thread was reused.

1.3 Solution

Explicitly clear the ThreadLocal value in a finally block. After the fix, reused threads no longer expose stale user data.
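A minimal sketch of the fix (names are illustrative): calling ThreadLocal.remove() in a finally block guarantees the slot is cleared even if the handler throws.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ThreadLocalFixDemo {
    private static final ThreadLocal<Integer> CURRENT_USER = new ThreadLocal<>();

    static String handleRequest(int userId) {
        try {
            Integer before = CURRENT_USER.get();
            CURRENT_USER.set(userId);
            return before + "->" + CURRENT_USER.get();
        } finally {
            // Clear the slot so the pooled thread carries nothing into the next request
            CURRENT_USER.remove();
        }
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        System.out.println(pool.submit(() -> handleRequest(1)).get()); // null->1
        System.out.println(pool.submit(() -> handleRequest(2)).get()); // null->2 -- no stale data
        pool.shutdown();
    }
}
```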

1.4 Can ThreadLocalRandom be stored in a static variable and reused across threads?

No. ThreadLocalRandom keeps its seed in per-thread fields that are initialized by ThreadLocalRandom.current(). If the instance is cached in a static field and shared, threads that never called current() operate on an uninitialized seed and can produce identical random sequences. Always call ThreadLocalRandom.current() at the point of use instead of caching the returned instance.

2 Is ConcurrentHashMap Really Safe?

ConcurrentHashMap provides thread‑safe atomic read/write operations, but it does not guarantee atomicity for compound actions such as checking size() then inserting.

2.1 Example

A map with 900 entries is to be filled to 1,000 entries by 10 concurrent threads. Each thread reads size(), computes the remaining count, logs it, and calls putAll. In one run the final map size ended up at 1,549 rather than 1,000, because the size check and the putAll together are not atomic.
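The race can be made deterministic for demonstration. This sketch (my own construction, not the article's code) uses two threads and a CountDownLatch to force both to read size() before either writes; each then inserts "the missing 100" under disjoint keys, over-filling the map.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CountDownLatch;

public class SizeCheckRaceDemo {

    static int raceToFill() throws InterruptedException {
        ConcurrentHashMap<Integer, Integer> map = new ConcurrentHashMap<>();
        for (int i = 0; i < 900; i++) map.put(i, i);

        // Latch guarantees both threads observe size() == 900 before any insert
        CountDownLatch bothRead = new CountDownLatch(2);
        Thread t1 = new Thread(() -> fill(map, 10_000, bothRead));
        Thread t2 = new Thread(() -> fill(map, 20_000, bothRead));
        t1.start(); t2.start();
        t1.join(); t2.join();
        return map.size();
    }

    private static void fill(ConcurrentHashMap<Integer, Integer> map,
                             int keyBase, CountDownLatch bothRead) {
        int gap = 1000 - map.size();  // check: both threads compute gap == 100
        bothRead.countDown();
        try { bothRead.await(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        for (int i = 0; i < gap; i++) {
            map.put(keyBase + i, i);  // act: each thread adds its own "missing" 100
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println("final size = " + raceToFill()); // 1100, not 1000
    }
}
```

Every individual put is thread-safe, yet the check-then-act sequence is not, so the invariant "exactly 1,000 entries" is violated.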

2.2 Analysis

ConcurrentHashMap does not make a series of operations atomic; external synchronization is required for such logic.

Aggregate methods such as size(), isEmpty(), and containsValue() may reflect intermediate states under concurrency and should not be used for flow control.

putAll is also non‑atomic; partial updates can be observed.

2.3 Solution

Wrap the whole update logic in a lock so only one thread performs the size check and insertion. After locking, only one thread adds the missing 100 entries, and the final map size is exactly 1000.
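A minimal sketch of the fix (names are mine): a shared monitor makes the size check and the inserts one atomic unit, so only the first thread finds a non-zero gap.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ConcurrentHashMap;

public class LockedFillDemo {

    static int fillToTarget() throws InterruptedException {
        ConcurrentHashMap<Integer, Integer> map = new ConcurrentHashMap<>();
        for (int i = 0; i < 900; i++) map.put(i, i);
        Object lock = new Object();

        List<Thread> threads = new ArrayList<>();
        for (int t = 0; t < 10; t++) {
            final int keyBase = (t + 1) * 10_000; // disjoint key range per thread
            Thread th = new Thread(() -> {
                // The check and the insert now form one atomic unit
                synchronized (lock) {
                    int gap = 1000 - map.size(); // 100 for the first thread, 0 afterwards
                    for (int i = 0; i < gap; i++) map.put(keyBase + i, i);
                }
            });
            threads.add(th);
            th.start();
        }
        for (Thread th : threads) th.join();
        return map.size();
    }

    public static void main(String[] args) throws Exception {
        System.out.println("final size = " + fillToTarget()); // exactly 1000
    }
}
```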

3 Knowing the Tools to Win the Battle

3.1 Counting Keys with a Map

The scenario: use a ConcurrentHashMap to count occurrences of keys in the range 0-9, with 10 concurrent threads performing 10 million increments in total.

The initial naive implementation locks the whole map for each update, which defeats the purpose of using ConcurrentHashMap.

3.2 Performance Test

A StopWatch comparison of the locked version against a version that uses computeIfAbsent to create a LongAdder per key and then calls increment() shows the atomic version is at least five times faster.
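A sketch of the fast variant (iteration count scaled down from the article's 10 million; class name is mine). computeIfAbsent creates the per-key LongAdder atomically, and increment() is a cheap striped add with no map-wide lock.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.atomic.LongAdder;
import java.util.stream.IntStream;

public class LongAdderCountDemo {
    static final int LOOPS = 1_000_000; // scaled down from 10 million for a quick run
    static final int KEYS = 10;         // keys in the range 0-9

    static long countAll() {
        ConcurrentHashMap<Integer, LongAdder> counts = new ConcurrentHashMap<>();
        IntStream.range(0, LOOPS).parallel().forEach(i -> {
            int key = ThreadLocalRandom.current().nextInt(KEYS);
            // Atomically create the counter on first sight of the key, then bump it
            counts.computeIfAbsent(key, k -> new LongAdder()).increment();
        });
        // Sum across all keys: no increment is ever lost
        return counts.values().stream().mapToLong(LongAdder::sum).sum();
    }

    public static void main(String[] args) {
        System.out.println("total = " + countAll()); // always equals LOOPS
    }
}
```

LongAdder is preferable to AtomicLong here because under heavy contention it spreads updates across internal cells and sums them on read.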

3.3 High‑Performance computeIfAbsent

Internally, ConcurrentHashMap.computeIfAbsent relies on CAS operations (via Unsafe) to publish a new bin atomically, so in the common case it avoids locking entirely and is far cheaper than guarding the whole map with an explicit lock.

Distinguishing computeIfAbsent and putIfAbsent

putIfAbsent takes an already-computed value, so if the value is expensive to obtain, that work is wasted whenever the key already exists; computeIfAbsent takes a function that runs only when the key is absent.

When the key is absent, putIfAbsent returns null (risking NPE), while computeIfAbsent returns the computed value.

putIfAbsent allows a null value (for HashMap), but ConcurrentHashMap forbids nulls, making their behavior diverge.
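The three differences above in one runnable sketch (the expensive() helper is a hypothetical stand-in for a costly value computation):

```java
import java.util.concurrent.ConcurrentHashMap;

public class PutVsComputeDemo {

    static String expensive(String v) {
        // stand-in for a costly value computation
        return v;
    }

    public static void main(String[] args) {
        ConcurrentHashMap<String, String> map = new ConcurrentHashMap<>();

        // putIfAbsent: the value argument is evaluated eagerly, and the return
        // value is the PREVIOUS mapping -- null when the key was absent
        String prev = map.putIfAbsent("a", expensive("v1"));
        System.out.println(prev);          // null -> NPE risk if dereferenced

        // computeIfAbsent: the lambda runs only when the key is missing, and the
        // return value is whatever is now in the map -- never null on success
        String cur = map.computeIfAbsent("b", k -> expensive("v2"));
        System.out.println(cur);           // v2

        // Key "b" already present: expensive() is NOT invoked again
        map.computeIfAbsent("b", k -> expensive("v3"));
        System.out.println(map.get("b"));  // still v2
    }
}
```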

3.4 The Downside of CopyOnWriteArrayList

Using CopyOnWriteArrayList for frequently changing data leads to severe write‑performance degradation because each modification copies the entire array. It is suitable only for read‑heavy, write‑light scenarios.

CopyOnWriteArrayList vs. Locked ArrayList

Benchmarks show CopyOnWriteArrayList is hundreds of times slower for concurrent writes but up to 24× faster for concurrent reads.
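A minimal benchmark sketch in the same spirit (my own construction; absolute timings vary wildly by machine, and the iteration count is kept small because every CopyOnWriteArrayList.add copies the whole backing array, so cost grows quadratically with list size):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.ThreadLocalRandom;
import java.util.stream.IntStream;

public class CowWriteBenchDemo {
    static final int WRITES = 10_000;

    // Time WRITES parallel adds into the given thread-safe list, in milliseconds
    static long timeWrites(List<Integer> list) {
        long start = System.nanoTime();
        IntStream.range(0, WRITES).parallel()
                 .forEach(i -> list.add(ThreadLocalRandom.current().nextInt()));
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) {
        long cow  = timeWrites(new CopyOnWriteArrayList<>());
        long sync = timeWrites(Collections.synchronizedList(new ArrayList<>()));
        // Expect the copy-on-write list to be dramatically slower for writes
        System.out.println("copyOnWrite: " + cow + " ms, synchronizedList: " + sync + " ms");
    }
}
```

The flip side is read performance: iteration over a CopyOnWriteArrayList needs no locking at all, because each iterator works on an immutable snapshot of the array.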

4 Summary

4.1 Don't

Rely solely on concurrency utilities without understanding thread fundamentals.

Assume using a concurrent collection automatically guarantees thread safety for all operations.

Select tools without considering the specific business scenario.

4.2 Do

Read official documentation, understand applicable scenarios, test thoroughly before adoption.

Perform performance and stress testing for concurrency bugs.

Reference: https://zhuanlan.zhihu.com/p/201333611

Tags: performance testing, ConcurrentHashMap, thread pool, ThreadLocal, Java concurrency, CopyOnWriteArrayList
Written by Selected Java Interview Questions, a professional Java tech channel sharing common knowledge to help developers fill gaps.