Why Executors Should Not Be Used to Create Thread Pools and How to Properly Configure ThreadPoolExecutor in Java
This article explains the definition of thread pools, why using Executors to create them is discouraged, details the ThreadPoolExecutor constructor and its parameters, compares different Executors factory methods, demonstrates OOM testing, and provides practical guidelines for defining safe thread‑pool parameters in Java.
Introduction
First of all, thank you for reading. By the end of this article you will understand:
What a thread pool is
Various ways Executors creates thread pools
The ThreadPoolExecutor class
The relationship between thread‑pool execution logic and its parameters
How Executors returns a ThreadPoolExecutor object
An OOM (OutOfMemoryError) test case
How to define thread‑pool parameters
If you only want the reason, you can jump directly to the summary.
Thread‑Pool Definition
A thread pool manages a group of worker threads. Reusing threads brings several advantages:
Reduces resource creation → lowers memory overhead because creating a thread consumes memory.
Decreases system overhead → thread creation takes time and can delay request processing.
Improves stability → prevents unlimited thread creation that leads to OutOfMemoryError (OOM).
Ways Executors Creates Thread Pools
Creating thread pools via Executors can be divided into three categories based on the returned object type:
Returns a ThreadPoolExecutor object
Returns a ScheduledThreadPoolExecutor object
Returns a ForkJoinPool object
This article only discusses the first case – creating a ThreadPoolExecutor object.
ThreadPoolExecutor Object
Before introducing the Executors factory methods, we first look at ThreadPoolExecutor because all static factory methods of Executors ultimately return this type, and using them spares us from manually providing constructor arguments.
The constructor signature is:
public ThreadPoolExecutor(int corePoolSize,
                          int maximumPoolSize,
                          long keepAliveTime,
                          TimeUnit unit,
                          BlockingQueue<Runnable> workQueue,
                          ThreadFactory threadFactory,
                          RejectedExecutionHandler handler)

Parameter meanings:
corePoolSize → number of core threads
maximumPoolSize → maximum number of threads
keepAliveTime → idle thread keep‑alive time
unit → time unit for keepAliveTime
workQueue → the queue used to hold tasks
threadFactory → factory that creates new threads
handler → policy for handling rejected tasks
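Put together, a fully specified constructor call might look like the following sketch. The concrete values (4 core threads, 8 maximum, a queue of 100) are illustrative assumptions, not recommendations:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class ExplicitPoolExample {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                4,                                    // corePoolSize
                8,                                    // maximumPoolSize
                60L, TimeUnit.SECONDS,                // keepAliveTime + unit
                new ArrayBlockingQueue<>(100),        // bounded workQueue
                Thread::new,                          // threadFactory
                new ThreadPoolExecutor.AbortPolicy()  // rejection handler
        );
        pool.execute(() ->
                System.out.println("task ran on " + Thread.currentThread().getName()));
        pool.shutdown();
    }
}
```

Spelling out every argument like this makes the pool's capacity limits visible at the call site, which is exactly what the Executors factory methods hide.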
Thread‑Pool Execution Logic and Parameter Relationship
The execution flow is:
Check whether all core threads are in use (controlled by corePoolSize). If not, create a core thread to execute the task.
If the core threads are full, check whether the queue is full (controlled by workQueue). If the queue is not full, enqueue the task.
If the queue is full, check whether the pool can create more threads (controlled by maximumPoolSize). If it can, create a non-core thread to execute the task.
If the pool has reached its maximum size, apply the rejection policy (controlled by handler).
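The four steps can be observed with a deliberately tiny pool; the sizes below (1 core thread, 2 maximum, a queue of capacity 1) are chosen only to force each branch:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class DispatchOrderDemo {
    public static void main(String[] args) {
        // task 1 -> core thread, task 2 -> queue,
        // task 3 -> non-core thread, task 4 -> rejected
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 2, 0L, TimeUnit.SECONDS, new ArrayBlockingQueue<>(1));
        Runnable sleepy = () -> {
            try { Thread.sleep(1000); } catch (InterruptedException ignored) {}
        };
        for (int i = 1; i <= 4; i++) {
            try {
                pool.execute(sleepy);
                System.out.println("task " + i + " accepted, pool size = " + pool.getPoolSize());
            } catch (RejectedExecutionException e) {
                System.out.println("task " + i + " rejected");
            }
        }
        pool.shutdownNow();
    }
}
```

Because every task blocks for a second, the first submission occupies the core thread, the second waits in the queue, the third triggers a non-core thread, and the fourth hits the default AbortPolicy.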
Executors Methods That Return ThreadPoolExecutor
There are three static methods that return a ThreadPoolExecutor:
Executors#newCachedThreadPool → creates a cached thread pool
Executors#newSingleThreadExecutor → creates a single‑thread pool
Executors#newFixedThreadPool → creates a fixed‑size thread pool
Executors#newCachedThreadPool
public static ExecutorService newCachedThreadPool() {
    return new ThreadPoolExecutor(0, Integer.MAX_VALUE,
                                  60L, TimeUnit.SECONDS,
                                  new SynchronousQueue<Runnable>());
}

This pool creates new threads as needed. Key parameter values:
corePoolSize = 0
maximumPoolSize = Integer.MAX_VALUE (practically unlimited)
keepAliveTime = 60 seconds
workQueue = SynchronousQueue (no storage, always considered full)
When a task is submitted, no core thread is created because corePoolSize is 0. A SynchronousQueue stores nothing: the offer either hands the task directly to an idle non-core thread or fails, in which case a new non-core thread is created.
Non-core threads that stay idle for 60 seconds are terminated. Because the thread count is virtually unlimited, this pool can easily cause OOM under limited resources.
Executors#newSingleThreadExecutor
public static ExecutorService newSingleThreadExecutor() {
    return new FinalizableDelegatedExecutorService(
        new ThreadPoolExecutor(1, 1,
                               0L, TimeUnit.MILLISECONDS,
                               new LinkedBlockingQueue<Runnable>()));
}

This creates a single-thread pool with one core thread.
corePoolSize = 1
maximumPoolSize = 1
keepAliveTime = 0
workQueue = LinkedBlockingQueue (unbounded)
Tasks are first executed by the core thread; excess tasks are placed into an unbounded queue, which can cause OOM when resources are scarce. Because the queue is unbounded, maximumPoolSize and keepAliveTime become ineffective.
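If single-threaded execution is actually what you want, a bounded variant avoids the unbounded-queue problem: excess submissions are rejected instead of accumulating. This is a minimal sketch; the capacity of 100 is an arbitrary illustration:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BoundedSingleThreadPool {
    public static ExecutorService create() {
        // Same 1/1 thread configuration as newSingleThreadExecutor,
        // but the queue holds at most 100 pending tasks; the 101st
        // concurrent submission is rejected rather than buffered.
        return new ThreadPoolExecutor(1, 1,
                0L, TimeUnit.MILLISECONDS,
                new LinkedBlockingQueue<>(100));
    }
}
```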
Executors#newFixedThreadPool
public static ExecutorService newFixedThreadPool(int nThreads) {
    return new ThreadPoolExecutor(nThreads, nThreads,
                                  0L, TimeUnit.MILLISECONDS,
                                  new LinkedBlockingQueue<Runnable>());
}

This creates a pool with a fixed number of core threads (specified by nThreads).
corePoolSize = nThreads
maximumPoolSize = nThreads
keepAliveTime = 0
workQueue = LinkedBlockingQueue (unbounded)
Behaviour is similar to SingleThreadExecutor but with more core threads.
Summary
Both FixedThreadPool and SingleThreadExecutor use an effectively unbounded queue (default capacity = Integer.MAX_VALUE), which can accumulate a huge number of pending tasks and cause OOM.
CachedThreadPool can create an unlimited number of threads, also leading to OOM under limited resources.
Therefore, using Executors to create thread pools is discouraged; it is recommended to instantiate ThreadPoolExecutor directly with carefully chosen parameters.
OOM Test
To verify the theory, the following test class creates an infinite number of tasks using a cached thread pool:
public class TaskTest {
    public static void main(String[] args) {
        ExecutorService es = Executors.newCachedThreadPool();
        while (true) {
            // Each task blocks indefinitely, so threads are never
            // reused and the pool keeps creating new ones.
            es.submit(() -> {
                try {
                    Thread.sleep(Long.MAX_VALUE);
                } catch (InterruptedException ignored) {}
            });
        }
    }
}

Run the program with a very small heap (e.g., -Xms10M -Xmx10M) to trigger OOM quickly. The result is an OutOfMemoryError after tens of thousands of threads have been created.
Similar tests can be performed for the other two pool types; they also lead to OOM because of unbounded queues.
How to Define Thread‑Pool Parameters
CPU‑intensive workloads: Recommended pool size = CPU count + 1 (obtainable via Runtime.getRuntime().availableProcessors()).
IO‑intensive workloads: Recommended size = CPU count * CPU utilization * (1 + waitTime / cpuTime) .
Mixed workloads: Separate tasks into CPU‑intensive and IO‑intensive groups and handle each with its own pool.
Work queue: Prefer a bounded queue to avoid resource exhaustion.
Rejection policy: The default AbortPolicy throws RejectedExecutionException, which is not graceful. Recommended alternatives include:
Catching RejectedExecutionException and handling the task manually.
Using CallerRunsPolicy to run the rejected task in the calling thread.
Implementing a custom RejectedExecutionHandler .
For low‑priority tasks, using DiscardPolicy or DiscardOldestPolicy to drop tasks.
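A custom RejectedExecutionHandler can be as simple as logging the rejection instead of throwing. This is a minimal sketch; the class name and the log-and-drop behavior are illustrative choices, not a standard policy:

```java
import java.util.concurrent.RejectedExecutionHandler;
import java.util.concurrent.ThreadPoolExecutor;

public class LogAndDiscardPolicy implements RejectedExecutionHandler {
    @Override
    public void rejectedExecution(Runnable r, ThreadPoolExecutor executor) {
        // Record the rejection instead of throwing; the task is dropped,
        // so this only suits work that is safe to lose.
        System.err.println("Task rejected, queue size = " + executor.getQueue().size());
    }
}
```

The handler is passed as the last constructor argument of ThreadPoolExecutor, in place of the default AbortPolicy.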
If you still want to use Executors' static methods, you can apply a Semaphore to limit the number of concurrent submissions and thus avoid OOM.
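One way to apply that idea is to acquire a permit before every submission and release it when the task finishes. This is a sketch; the limit of 100 and the class name are arbitrary assumptions:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;

public class ThrottledSubmitter {
    private final ExecutorService pool = Executors.newCachedThreadPool();
    private final Semaphore permits = new Semaphore(100); // at most 100 in-flight tasks

    public void submit(Runnable task) throws InterruptedException {
        permits.acquire(); // blocks the caller once 100 tasks are already running
        pool.submit(() -> {
            try {
                task.run();
            } finally {
                permits.release(); // free the slot even if the task throws
            }
        });
    }
}
```

The cached pool can then never hold more than 100 live threads at once, because the caller blocks instead of submitting an unbounded stream of tasks.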
Because practical experience with thread‑pool tuning is limited, contributions from experienced developers are welcome.
Source: juejin.im/post/5dc41c165188257bad4d9e69