Core Java Multithreading Interview Questions and Concepts
This article compiles 16 classic Java multithreading interview questions covering high‑concurrency containers, CAS and ABA problems, volatile vs synchronized, the Java Memory Model, ThreadLocal drawbacks, AQS, thread‑pool architecture, queue types, sizing strategies, rejection policies, lifecycle management, and graceful shutdown techniques.
Multithreading Interview Questions Summary
This section lists 16 classic Java multithreading interview questions, ranging from high‑concurrency containers to thread‑pool lifecycle.
Java High‑Concurrency Containers
Synchronous Containers
Before Java 1.5, thread‑safe containers relied on synchronized methods, which suffer from poor performance due to high contention.
Concurrent Containers
Since Java 1.5, the JDK provides high‑performance concurrent containers (List, Map, Set, Queue) that avoid the drawbacks of synchronized containers.
CAS Principle
CAS (Compare‑And‑Swap) is used in atomic classes such as AtomicInteger. Example:
atomicInteger.compareAndSet(10, 20);
CAS provides atomic updates, but a failed CAS retried in a loop causes spin‑wait overhead, and a single CAS only covers one variable.
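A minimal sketch of CAS semantics: the swap succeeds only when the current value matches the expected value (the class name `CasDemo` is ours for illustration):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasDemo {
    // Returns the final value after one successful and one failed CAS.
    static int demo() {
        AtomicInteger atomicInteger = new AtomicInteger(10);
        boolean ok = atomicInteger.compareAndSet(10, 20);    // succeeds: current value is 10
        boolean stale = atomicInteger.compareAndSet(10, 30); // fails: value is now 20, not 10
        System.out.println("first CAS: " + ok + ", second CAS: " + stale);
        return atomicInteger.get(); // 20
    }

    public static void main(String[] args) {
        demo();
    }
}
```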
ABA Problem
The ABA issue occurs when a value changes from A to B and back to A, making CAS think nothing changed. The solution is to use versioned references like AtomicStampedReference with its compareAndSet method.
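A small sketch of the stamped-reference fix (the class name `AbaDemo` is ours): after an A→B→A sequence the value looks unchanged, but the stamp has advanced, so a CAS holding the old stamp fails.

```java
import java.util.concurrent.atomic.AtomicStampedReference;

public class AbaDemo {
    static boolean demo() {
        AtomicStampedReference<Integer> ref = new AtomicStampedReference<>(100, 0);
        int stamp = ref.getStamp(); // 0

        // Another thread changes A -> B -> A, bumping the stamp each time.
        ref.compareAndSet(100, 200, ref.getStamp(), ref.getStamp() + 1);
        ref.compareAndSet(200, 100, ref.getStamp(), ref.getStamp() + 1);

        // A plain CAS would succeed here; the stamped CAS fails because the stamp moved on.
        boolean swapped = ref.compareAndSet(100, 300, stamp, stamp + 1);
        System.out.println("CAS with stale stamp succeeded? " + swapped); // false
        return swapped;
    }

    public static void main(String[] args) {
        demo();
    }
}
```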
volatile vs synchronized
volatile characteristics
Guarantees visibility across threads.
Prevents instruction reordering for a single thread.
Does not guarantee atomicity for compound actions (e.g., a++).
Makes reads/writes of 64‑bit long/double fields atomic (plain long/double accesses may otherwise be split into two 32‑bit operations).
Useful for double‑checked locking and status flags.
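The double-checked-locking use mentioned above looks like this classic singleton sketch; the volatile on `instance` prevents a reference to a partially constructed object from becoming visible through reordering:

```java
public class Singleton {
    // volatile is essential: without it, another thread could observe a
    // non-null reference to an object whose constructor has not finished.
    private static volatile Singleton instance;

    private Singleton() {}

    public static Singleton getInstance() {
        if (instance == null) {                  // first check, no lock
            synchronized (Singleton.class) {
                if (instance == null) {          // second check, under lock
                    instance = new Singleton();
                }
            }
        }
        return instance;
    }
}
```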
Comparison
volatile can only modify fields; synchronized can protect methods/blocks.
volatile is non‑blocking; synchronized may block.
volatile is lightweight; synchronized is heavyweight.
Both ensure visibility and ordering.
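A sketch contrasting the two (class name `CounterDemo` is ours): `count++` is a read-modify-write, so a plain (or even volatile) field can lose updates under contention, while a synchronized block makes the increment atomic.

```java
public class CounterDemo {
    private static int unsafe = 0; // volatile alone would not make ++ atomic
    private static int safe = 0;
    private static final Object LOCK = new Object();

    static int run() {
        unsafe = 0;
        safe = 0;
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                unsafe++;                        // read-modify-write: lost updates possible
                synchronized (LOCK) { safe++; }  // mutual exclusion restores atomicity
            }
        };
        Thread t1 = new Thread(work);
        Thread t2 = new Thread(work);
        t1.start(); t2.start();
        try { t1.join(); t2.join(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        System.out.println("unsafe = " + unsafe + ", safe = " + safe); // unsafe often < 200000
        return safe;
    }

    public static void main(String[] args) { run(); }
}
```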
Java Memory Model (JMM)
JMM defines how variables are stored and accessed in main memory and each thread's working memory, ensuring visibility, atomicity, and ordering.
Key Concepts
Main memory stores all variables.
Working memory holds thread‑local copies.
Threads read/write via main memory.
Three main properties: visibility, atomicity, ordering.
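The visibility guarantee can be sketched with a volatile flag (class name `VisibilityDemo` is ours): the volatile write to `ready` happens-before the reader's loop exit, so the ordinary write to `payload` is also published.

```java
public class VisibilityDemo {
    private static volatile boolean ready = false;
    private static int payload = 0;

    static int demo() {
        Thread reader = new Thread(() -> {
            while (!ready) { /* spin until the volatile write becomes visible */ }
            System.out.println("reader sees payload = " + payload); // guaranteed 42
        });
        reader.start();

        payload = 42;  // ordinary write...
        ready = true;  // ...published by the volatile write (happens-before)
        try { reader.join(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        return payload;
    }

    public static void main(String[] args) {
        demo();
    }
}
```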
ThreadLocal Drawbacks
In thread‑pool environments, reused threads may retain stale ThreadLocal values, leading to memory leaks or incorrect behavior. It is recommended to remove ThreadLocal entries in a finally block.
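The recommended cleanup pattern can be sketched like this (class and method names are ours): the `finally` block guarantees the entry is cleared even if the task throws, so the next task on the same pooled thread starts clean.

```java
public class ThreadLocalCleanup {
    private static final ThreadLocal<String> CONTEXT = new ThreadLocal<>();

    static String handle(String user) {
        CONTEXT.set(user);
        try {
            return "processed " + CONTEXT.get();
        } finally {
            CONTEXT.remove(); // always clear before the thread returns to the pool
        }
    }

    public static void main(String[] args) {
        System.out.println(handle("alice"));
    }
}
```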
AbstractQueuedSynchronizer (AQS)
AQS provides a framework for building synchronizers (e.g., ReentrantLock, Semaphore) using a volatile state, FIFO wait queue, and CAS‑based acquire/release methods.
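A minimal non-reentrant mutex built on AQS illustrates the skeleton (the `SimpleMutex` class is our sketch, not a JDK type): state 0/1 plus CAS in `tryAcquire`, with the FIFO wait queue managed by AQS itself.

```java
import java.util.concurrent.locks.AbstractQueuedSynchronizer;

public class SimpleMutex {
    private static class Sync extends AbstractQueuedSynchronizer {
        @Override protected boolean tryAcquire(int arg) {
            return compareAndSetState(0, 1); // CAS 0 -> 1 means "locked"
        }
        @Override protected boolean tryRelease(int arg) {
            setState(0);                     // back to "unlocked"
            return true;
        }
        boolean held() { return getState() == 1; }
    }

    private final Sync sync = new Sync();

    public void lock()        { sync.acquire(1); }  // AQS queues the thread if CAS fails
    public void unlock()      { sync.release(1); }  // AQS wakes the next waiter
    public boolean isLocked() { return sync.held(); }

    public static void main(String[] args) {
        SimpleMutex mutex = new SimpleMutex();
        mutex.lock();
        try {
            System.out.println("locked? " + mutex.isLocked());
        } finally {
            mutex.unlock();
        }
        System.out.println("locked after unlock? " + mutex.isLocked());
    }
}
```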
Thread Pool Fundamentals
Benefits
Reduces thread creation overhead.
Improves response time.
Enhances manageability of threads.
Core Parameters
maximumPoolSize
corePoolSize
keepAliveTime
workQueue (ArrayBlockingQueue, LinkedBlockingQueue, SynchronousQueue, PriorityBlockingQueue)
RejectedExecutionHandler
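The parameters above map directly onto the `ThreadPoolExecutor` constructor; here is a sketch with illustrative sizes (the values and the `PoolConfig` class name are our choices):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolConfig {
    static ThreadPoolExecutor build() {
        return new ThreadPoolExecutor(
                2,                                   // corePoolSize
                4,                                   // maximumPoolSize
                60L, TimeUnit.SECONDS,               // keepAliveTime for idle non-core threads
                new ArrayBlockingQueue<>(10),        // bounded workQueue
                new ThreadPoolExecutor.CallerRunsPolicy()); // RejectedExecutionHandler
    }

    public static void main(String[] args) {
        ThreadPoolExecutor pool = build();
        System.out.println("core=" + pool.getCorePoolSize()
                + " max=" + pool.getMaximumPoolSize());
        pool.shutdown();
    }
}
```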
Execution Flow
Tasks up to corePoolSize create new threads.
Excess tasks queue in the work queue.
If the queue is full, additional threads up to maximumPoolSize are created.
When maximumPoolSize is reached and the queue is full, the rejection policy is applied.
Blocking Queue Types
ArrayBlockingQueue – bounded, FIFO.
LinkedBlockingQueue – optionally bounded, high throughput.
SynchronousQueue – hand‑off queue, no storage.
PriorityBlockingQueue – unbounded priority queue.
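The capacity behavior of each queue type can be checked with non-blocking `offer` calls (the `QueueDemo` class is our sketch):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.PriorityBlockingQueue;
import java.util.concurrent.SynchronousQueue;

public class QueueDemo {
    static boolean arrayFull() {
        BlockingQueue<Integer> q = new ArrayBlockingQueue<>(2); // bounded
        q.offer(1); q.offer(2);
        return q.offer(3); // false: the bounded queue is full
    }

    static boolean syncAccepts() {
        // SynchronousQueue stores nothing: offer fails unless a taker is waiting.
        return new SynchronousQueue<Integer>().offer(1); // false
    }

    static int priorityHead() {
        PriorityBlockingQueue<Integer> q = new PriorityBlockingQueue<>(); // unbounded
        q.offer(3); q.offer(1); q.offer(2);
        return q.peek(); // 1: natural ordering, smallest first
    }

    public static void main(String[] args) {
        System.out.println("full ArrayBlockingQueue accepts? " + arrayFull());
        System.out.println("SynchronousQueue accepts with no taker? " + syncAccepts());
        System.out.println("PriorityBlockingQueue head after 3,1,2: " + priorityHead());
    }
}
```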
Thread Count Recommendations
CPU‑bound: threads ≈ number of CPU cores (or cores + 1). I/O‑bound: threads = cores × (1 + I/O‑time/CPU‑time).
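The two formulas above can be sketched as small helpers (class and method names are ours; the 50 ms / 10 ms ratio in `main` is an illustrative example):

```java
public class PoolSizing {
    static int cpuBound(int cores) {
        return cores + 1; // one extra thread covers occasional stalls/page faults
    }

    static int ioBound(int cores, double ioTime, double cpuTime) {
        return (int) (cores * (1 + ioTime / cpuTime)); // threads = cores * (1 + IO/CPU)
    }

    public static void main(String[] args) {
        int cores = Runtime.getRuntime().availableProcessors();
        System.out.println("CPU-bound: " + cpuBound(cores));
        // e.g. a task spending 50 ms on I/O for every 10 ms of CPU work:
        System.out.println("I/O-bound: " + ioBound(cores, 50, 10));
    }
}
```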
Rejection Policies
CallerRunsPolicy
AbortPolicy
DiscardPolicy
DiscardOldestPolicy
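A sketch of a policy firing (the `RejectionDemo` class is ours): with one worker, a one-slot queue, and AbortPolicy, the third task is rejected with a `RejectedExecutionException`.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class RejectionDemo {
    static boolean aborts() {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.SECONDS,
                new ArrayBlockingQueue<>(1),
                new ThreadPoolExecutor.AbortPolicy());
        Runnable sleepy = () -> {
            try { Thread.sleep(200); } catch (InterruptedException ignored) { }
        };
        boolean rejected = false;
        try {
            pool.execute(sleepy); // occupies the single worker thread
            pool.execute(sleepy); // fills the one-slot queue
            pool.execute(sleepy); // pool and queue full -> AbortPolicy throws
        } catch (RejectedExecutionException e) {
            rejected = true;
        }
        pool.shutdownNow();
        return rejected;
    }

    public static void main(String[] args) {
        System.out.println("third task rejected: " + aborts());
    }
}
```

With CallerRunsPolicy instead, the third task would simply run on the submitting thread; DiscardPolicy would drop it silently.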
Lifecycle States
RUNNING
SHUTDOWN
STOP
TIDYING
TERMINATED
Thread Pool Types (Executors)
newCachedThreadPool()
newFixedThreadPool(int)
newSingleThreadExecutor()
newSingleThreadScheduledExecutor() / newScheduledThreadPool(int)
newWorkStealingPool(int)
Graceful Shutdown
Use shutdown() for a gentle stop (no new tasks, finish existing). Use shutdownNow() for an aggressive stop (interrupt running tasks, discard queued tasks).
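The two calls are commonly combined into one escalation pattern, similar to the one shown in the `ExecutorService` Javadoc (the `GracefulShutdown` wrapper and the 5-second timeouts are our choices):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class GracefulShutdown {
    static boolean stop(ExecutorService pool) {
        pool.shutdown(); // stop accepting new tasks, let queued ones finish
        try {
            if (!pool.awaitTermination(5, TimeUnit.SECONDS)) {
                pool.shutdownNow(); // interrupt stragglers, drop queued tasks
                return pool.awaitTermination(5, TimeUnit.SECONDS);
            }
            return true;
        } catch (InterruptedException e) {
            pool.shutdownNow();
            Thread.currentThread().interrupt(); // preserve the interrupt status
            return false;
        }
    }

    public static void main(String[] args) {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        pool.execute(() -> System.out.println("task done"));
        System.out.println("terminated cleanly: " + stop(pool));
    }
}
```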
Monitoring Thread Pools
Example monitoring method prints pool size, active threads, completed tasks, and queue size every second:
private void printStats(ThreadPoolExecutor threadPool) {
Executors.newSingleThreadScheduledExecutor().scheduleAtFixedRate(() -> {
log.info("=========================");
log.info("Pool Size: {}", threadPool.getPoolSize());
log.info("Active Threads: {}", threadPool.getActiveCount());
log.info("Completed Tasks: {}", threadPool.getCompletedTaskCount());
log.info("Queue Size: {}", threadPool.getQueue().size());
log.info("=========================");
}, 0, 1, TimeUnit.SECONDS);
}
Wukong Talks Architecture
Explaining distributed systems and architecture through stories. Author of the "JVM Performance Tuning in Practice" column, open-source author of "Spring Cloud in Practice PassJava", and independently developed a PMP practice quiz mini-program.