Comprehensive Guide to Java Multithreading: Concepts, APIs, and Best Practices
This article provides an in‑depth overview of Java multithreading, covering fundamental concepts such as processes, threads, parallelism vs concurrency, thread creation methods, lifecycle states, synchronization mechanisms, memory model nuances, lock implementations, common pitfalls like deadlocks, and practical usage of thread pools with best‑practice recommendations.
Introduction
Java multithreading is often regarded as the most challenging part of Java SE. This guide uses examples, diagrams, and source code to explain the core ideas and practical usage.
What Is Java Multithreading?
Process: When a program runs, the operating system creates a process that loads instructions and data into memory.
Thread: A process can contain multiple threads, each representing an independent instruction stream scheduled by the CPU.
Parallel vs. Concurrent
Concurrency occurs when a single‑core CPU rapidly switches between threads (time‑slicing). Parallelism occurs on multi‑core CPUs where threads truly run at the same time.
Why Use Multithreading?
Improves program throughput.
Utilises multi‑core CPUs efficiently.
Challenges of Multithreading
Non‑deterministic execution results due to scheduling.
Thread‑safety issues.
Thread‑pool configuration and resource management.
Dynamic execution flow makes debugging difficult.
Underlying OS‑level implementation complexity.
Basic Thread Usage
Defining Tasks
Three common ways to define a task:
Extend Thread class.
Implement Runnable interface.
Implement Callable interface (used with FutureTask).
@Slf4j
class T extends Thread {
    @Override
    public void run() {
        log.info("I am a Thread-extended task");
    }
}

@Slf4j
class R implements Runnable {
    @Override
    public void run() {
        log.info("I am a Runnable task");
    }
}

@Slf4j
class C implements Callable<String> {
    @Override
    public String call() throws Exception {
        log.info("I am a Callable task");
        return "success";
    }
}
Creating and Starting Threads
// Start a Thread‑extended task
new T().start();
// Start a Runnable task (lambda version)
new Thread(() -> log.info("I am a lambda Runnable task")).start();
// Start a Callable task via FutureTask
FutureTask<String> ft = new FutureTask<>(new C());
new Thread(ft).start();
log.info(ft.get());
Thread Lifecycle and States
From the OS perspective a thread can be in five states: New, Runnable (ready), Running, Blocked, and Terminated. The Java API defines six states via Thread.State: NEW, RUNNABLE, BLOCKED, WAITING, TIMED_WAITING, TERMINATED.
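Two of these API states can be observed deterministically with a few lines of code (a minimal sketch; the class name is hypothetical — reproducing WAITING or TIMED_WAITING would additionally need wait() or sleep()):

```java
public class StateDemo {
    public static Thread.State[] observe() throws InterruptedException {
        Thread t = new Thread(() -> { /* finishes immediately */ });
        Thread.State before = t.getState();   // NEW: created but not yet started
        t.start();
        t.join();                             // wait until run() has returned
        Thread.State after = t.getState();    // TERMINATED: the thread has exited
        return new Thread.State[] { before, after };
    }

    public static void main(String[] args) throws InterruptedException {
        Thread.State[] s = observe();
        System.out.println(s[0] + " -> " + s[1]); // NEW -> TERMINATED
    }
}
```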
Context Switching
On multi‑core CPUs, threads run in parallel; when the number of threads exceeds cores, the OS performs context switches, saving the current thread's state (program counter) and restoring another's.
Yield and Priority
The Thread.yield() method voluntarily moves the current thread to the ready queue, allowing other threads to acquire CPU time slices. Thread priority ranges from 1 (MIN) to 10 (MAX) and influences scheduling when the CPU is busy.
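Setting priorities is straightforward, though they are only a hint to the scheduler and their effect is platform-dependent (a sketch; the class name is hypothetical):

```java
public class PriorityDemo {
    public static int[] configure() {
        Thread low = new Thread(() -> { /* background work */ });
        Thread high = new Thread(() -> { /* urgent work */ });
        low.setPriority(Thread.MIN_PRIORITY);   // 1
        high.setPriority(Thread.MAX_PRIORITY);  // 10
        // The scheduler MAY favor 'high' under contention, but this is not guaranteed
        return new int[] { low.getPriority(), high.getPriority() };
    }

    public static void main(String[] args) {
        int[] p = configure();
        System.out.println("low=" + p[0] + ", high=" + p[1]); // low=1, high=10
    }
}
```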
Daemon Threads
Daemon threads do not prevent JVM termination; when all non‑daemon threads finish, the JVM exits even if daemon threads are still running.
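A daemon flag must be set before the thread is started; afterwards setDaemon throws IllegalThreadStateException (a sketch; the class name is hypothetical):

```java
public class DaemonDemo {
    public static boolean startDaemon() {
        Thread d = new Thread(() -> {
            while (true) {                     // would run forever in a user thread
                try {
                    Thread.sleep(1000);
                } catch (InterruptedException e) {
                    return;
                }
            }
        });
        d.setDaemon(true);                     // must be called before start()
        d.start();
        return d.isDaemon();
    }

    public static void main(String[] args) {
        System.out.println("daemon=" + startDaemon()); // daemon=true
        // main (a non-daemon thread) returns here, so the JVM exits
        // even though the daemon thread's loop never finishes
    }
}
```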
Blocking Operations
sleep() – pauses the thread for a specified time.
join() – waits for another thread to finish.
wait()/notify()/notifyAll() – object‑level communication requiring the monitor lock.
LockSupport.park()/unpark() – JUC utilities that do not require holding a lock.
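Unlike wait()/notify(), LockSupport needs no monitor, and an unpark() issued before park() is not lost — it grants a permit that makes the next park() return immediately. A minimal sketch (class name hypothetical):

```java
import java.util.concurrent.locks.LockSupport;

public class ParkDemo {
    public static String parkAndResume() throws InterruptedException {
        StringBuilder events = new StringBuilder();
        Thread worker = new Thread(() -> {
            events.append("parking;");
            LockSupport.park();                // blocks without holding any lock
            events.append("resumed");
        });
        worker.start();
        Thread.sleep(100);                     // give the worker time to park
        LockSupport.unpark(worker);            // safe even if the worker has not parked yet
        worker.join();                         // join() establishes happens-before
        return events.toString();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(parkAndResume());   // parking;resumed
    }
}
```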
Synchronization and Thread Safety
Shared mutable state can cause race conditions. The synchronized keyword provides intrinsic locking on objects (or class objects for static methods) to guarantee atomicity of critical sections.
private synchronized void a() { /* ... */ }
private void b() {
synchronized(this) { /* ... */ }
}
private static synchronized void c() { /* ... */ }
Only one thread can hold the lock at a time; other threads block until the lock is released.
Wait/Notify Example
Object lock = new Object();
new Thread(() -> {
    synchronized (lock) {
        log.info("Thread 1 waiting");
        try {
            lock.wait();                 // releases the monitor while waiting
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        log.info("Thread 1 resumed");
    }
}).start();
Thread.sleep(2000);
synchronized (lock) {
    lock.notifyAll();                    // wakes every thread waiting on lock
}
ReentrantLock
Provides explicit lock control with additional features such as timeout, interruptibility, fairness, and multiple condition variables.
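The timeout feature, for example, lets a thread give up instead of blocking forever (a sketch; the class and method names are hypothetical):

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class TryLockDemo {
    private static final ReentrantLock LOCK = new ReentrantLock();

    public static boolean tryWork() throws InterruptedException {
        // Wait at most 500 ms for the lock instead of blocking indefinitely
        if (LOCK.tryLock(500, TimeUnit.MILLISECONDS)) {
            try {
                return true;               // got the lock; critical work goes here
            } finally {
                LOCK.unlock();
            }
        }
        return false;                      // gave up after the timeout
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("acquired=" + tryWork()); // acquired=true (uncontended)
    }
}
```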
private static final ReentrantLock LOCK = new ReentrantLock();

private static void m() {
    LOCK.lock();
    try {
        log.info("begin");
        m1();                            // re-enters the lock it already holds
    } finally {
        LOCK.unlock();
    }
}

private static void m1() {
    LOCK.lock();                         // second acquisition by the same thread
    try {
        log.info("m1");
        m2();
    } finally {
        LOCK.unlock();
    }
}

private static void m2() {
    LOCK.lock();                         // third acquisition; hold count is now 3
    try {
        log.info("m2");
    } finally {
        LOCK.unlock();
    }
}
Advanced Topics
Java Memory Model (JMM)
JMM guarantees three properties:
Atomicity : Operations appear indivisible.
Visibility : Changes made by one thread become visible to others.
Ordering : Prevents re‑ordering of instructions that could break program semantics.
The volatile keyword provides visibility and ordering guarantees via memory barriers, but does not ensure atomicity.
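The classic use case is a stop flag: without volatile, the worker thread might cache the field and never observe the update (a sketch; the class name is hypothetical):

```java
public class VolatileFlagDemo {
    // volatile guarantees the worker sees the write from the main thread
    private static volatile boolean running = true;

    public static int runUntilStopped() throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (running) { /* spin until another thread clears the flag */ }
        });
        worker.start();
        Thread.sleep(100);
        running = false;                   // made visible by the volatile write
        worker.join(1000);                 // the worker should exit promptly
        return worker.isAlive() ? -1 : 0;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("exit=" + runUntilStopped()); // exit=0
    }
}
```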
CAS and Atomic Classes
Compare‑And‑Swap (CAS) enables lock‑free updates. Java provides atomic classes such as AtomicInteger, AtomicReference, etc.
AtomicInteger balance = new AtomicInteger(1000);
int amount = 100;                        // amount to withdraw
while (true) {
    int prev = balance.get();            // read the current value
    int next = prev - amount;
    if (balance.compareAndSet(prev, next)) { // retry if another thread changed it
        break;
    }
}
CAS suffers from the ABA problem, which can be mitigated using AtomicStampedReference or versioned objects.
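The stamp acts as a version number, so a value that was changed away and back again is still detected (a sketch; the class name is hypothetical):

```java
import java.util.concurrent.atomic.AtomicStampedReference;

public class AbaDemo {
    public static boolean demo() {
        // Value 100 with version stamp 0
        AtomicStampedReference<Integer> ref = new AtomicStampedReference<>(100, 0);
        int stamp = ref.getStamp();

        // Another thread changes 100 -> 50 -> 100, bumping the stamp each time
        ref.compareAndSet(100, 50, stamp, stamp + 1);
        ref.compareAndSet(50, 100, stamp + 1, stamp + 2);

        // A plain CAS would succeed here (the value is 100 again), but the
        // stale stamp exposes the intermediate change and the update fails
        return ref.compareAndSet(100, 200, stamp, stamp + 1);
    }

    public static void main(String[] args) {
        System.out.println("updated=" + demo()); // updated=false
    }
}
```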
Thread Pools
Thread pools reuse a fixed number of worker threads to execute submitted tasks, reducing thread‑creation overhead and controlling resource usage.
ThreadPoolExecutor executor = new ThreadPoolExecutor(
    corePoolSize,                        // e.g., 2
    maximumPoolSize,                     // e.g., 5
    60L, TimeUnit.SECONDS,               // idle thread timeout
    new LinkedBlockingQueue<>(),
    Executors.defaultThreadFactory(),
    new ThreadPoolExecutor.AbortPolicy() // default rejection policy
);
Key parameters:
corePoolSize: Number of core threads kept alive.
maximumPoolSize: Upper bound of threads (including temporary "emergency" threads).
keepAliveTime: Idle time after which non-core threads are terminated.
workQueue: Blocking queue that holds pending tasks.
RejectedExecutionHandler: Strategy applied when the pool cannot accept new tasks.
Common factory methods (e.g., Executors.newFixedThreadPool, newCachedThreadPool) are convenient but often unsuitable for production because of unbounded queues or uncontrolled thread growth.
Best Practices for Thread Pools
Configure corePoolSize , maximumPoolSize , and queue capacity via external configuration to allow runtime tuning.
Monitor pool metrics (active threads, queue size, rejected tasks) and adjust parameters accordingly.
For I/O‑bound workloads, consider a much larger pool (e.g., 50–100 × the core count, since threads spend most of their time waiting) and a small bounded queue.
Prefer a rejection policy that degrades gracefully, such as CallerRunsPolicy, which gives the submitting thread a chance to execute the task itself; Tomcat's executor takes a similar approach, retrying the queue before rejecting.
When using SynchronousQueue, be aware that tasks are handed off directly to a waiting thread; if none is available, a new thread is created, up to maximumPoolSize.
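Putting these recommendations together, a bounded pool with CallerRunsPolicy and a graceful shutdown might look like this (a sketch; the class name and sizes are illustrative):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class PoolDemo {
    public static int runTasks(int taskCount) throws InterruptedException {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 4,                               // core and max sizes (illustrative)
                60L, TimeUnit.SECONDS,
                new ArrayBlockingQueue<>(8),        // small bounded queue
                new ThreadPoolExecutor.CallerRunsPolicy()); // overflow runs in the caller

        AtomicInteger done = new AtomicInteger();
        for (int i = 0; i < taskCount; i++) {
            pool.execute(done::incrementAndGet);
        }
        pool.shutdown();                            // stop accepting, finish queued work
        pool.awaitTermination(10, TimeUnit.SECONDS);
        return done.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("completed=" + runTasks(20)); // completed=20
    }
}
```

With 20 tasks, the two core threads start first, the queue absorbs eight, extra threads grow to the maximum of four, and any remaining submissions execute in the caller — so nothing is rejected or lost.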
Conclusion
Understanding Java multithreading—from low‑level thread states to high‑level executors—is essential for building scalable, responsive applications. By mastering synchronization primitives, the Java Memory Model, lock‑free techniques, and proper thread‑pool configuration, developers can avoid common pitfalls such as race conditions, deadlocks, and performance bottlenecks.