Backend Development · 8 min read

ThreadPoolExecutor Self-Introduction: corePoolSize, workQueue, maximumPoolSize, RejectedExecutionHandler, keepAliveTime

This article explains how Java's ThreadPoolExecutor manages threads and tasks by describing corePoolSize, workQueue behavior, maximumPoolSize, rejection policies, and keepAliveTime, highlighting the costs of thread creation and the importance of bounded queues to avoid OOM.

Java Captain

ThreadPool Self-Introduction

I am a ThreadPoolExecutor: I manage a set of threads that execute tasks concurrently while minimizing system overhead. Thread creation incurs costly kernel calls and memory allocation, so the pool reuses existing threads instead of creating a new one for each task.

Java threads map to native OS threads, requiring system calls for creation, destruction, and synchronization, which are expensive.

Each thread also consumes kernel resources such as stack memory (typically 1 MB by default on 64-bit JVMs), so creating many threads can quickly exhaust memory.

corePoolSize

The pool keeps a set of core threads (e.g., 3) that stay alive to reduce creation and destruction time; tasks are submitted via the Executor interface's execute(Runnable command) method.

A new pool starts with zero threads; the first submitted task triggers creation of one core thread, and subsequent tasks create additional core threads until corePoolSize is reached.

Even if a core thread is idle, the pool will create a new core thread as long as the current number of core threads is less than corePoolSize, ensuring the pool quickly reaches its configured capacity.
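The ramp-up to corePoolSize can be observed directly. The sketch below uses illustrative sizes (corePoolSize = 3, maximumPoolSize = 5, a bounded queue of 2); after three submissions, getPoolSize() reports three core threads even though the pool started empty.

```java
import java.util.concurrent.*;

public class CoreThreadDemo {
    public static void main(String[] args) throws InterruptedException {
        // Illustrative values: 3 core threads, 5 max, bounded queue of 2
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                3, 5, 60L, TimeUnit.SECONDS,
                new ArrayBlockingQueue<>(2));

        // The pool starts with zero threads; each of the first three
        // submissions creates a new core thread, even if others are idle.
        for (int i = 0; i < 3; i++) {
            pool.execute(() -> {
                try { Thread.sleep(100); } catch (InterruptedException ignored) {}
            });
        }
        Thread.sleep(50);
        System.out.println("core threads created: " + pool.getPoolSize()); // 3

        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```

(You can also pre-start all core threads at once with prestartAllCoreThreads() instead of waiting for submissions.)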

workQueue

After core threads are full, incoming tasks are placed into a workQueue, typically a blocking queue, so idle core threads can fetch tasks without busy‑waiting.

This follows the producer‑consumer model; a blocked thread does not consume CPU resources.

If an unbounded queue is used, tasks may accumulate faster than they are consumed, leading to OutOfMemoryError (OOM).

Using a bounded queue prevents unbounded growth; when the queue is full, new tasks trigger other pool mechanisms.
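A minimal sketch of the queueing step, with illustrative sizes (2 core threads, bounded queue of 10): once the core threads are busy, further submissions sit in the queue rather than spawning new threads.

```java
import java.util.concurrent.*;

public class QueueDemo {
    public static void main(String[] args) throws InterruptedException {
        // Illustrative values: 2 core threads, 4 max, bounded queue of 10
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 4, 60L, TimeUnit.SECONDS,
                new ArrayBlockingQueue<>(10));

        // Occupy both core threads with longer-running tasks.
        for (int i = 0; i < 2; i++) {
            pool.execute(() -> {
                try { Thread.sleep(500); } catch (InterruptedException ignored) {}
            });
        }
        // The next submissions go into the bounded queue instead of
        // spawning new threads, because the queue is not yet full.
        for (int i = 0; i < 3; i++) {
            pool.execute(() -> {});
        }
        Thread.sleep(50);
        System.out.println("queued tasks: " + pool.getQueue().size()); // 3
        System.out.println("pool size: " + pool.getPoolSize());       // still 2

        pool.shutdown();
    }
}
```

Idle workers block on the queue's take()/poll(), which is the producer-consumer handoff the article describes: blocked threads consume no CPU.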

maximumPoolSize

When the bounded workQueue is full and the total thread count is below maximumPoolSize (e.g., 5), the pool creates additional non‑core threads to handle the overflow.
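A sketch of the overflow step, using illustrative sizes (2 core, 4 max, queue of 1): tasks 1-2 occupy the core threads, task 3 fills the queue, and task 4 forces a non-core thread into existence.

```java
import java.util.concurrent.*;

public class MaxPoolDemo {
    public static void main(String[] args) throws InterruptedException {
        // Illustrative values: 2 core threads, 4 max, bounded queue of 1
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 4, 60L, TimeUnit.SECONDS,
                new ArrayBlockingQueue<>(1));

        // Tasks 1-2 occupy the core threads, task 3 fills the queue,
        // and task 4 forces the pool to create a non-core thread.
        for (int i = 0; i < 4; i++) {
            pool.execute(() -> {
                try { Thread.sleep(300); } catch (InterruptedException ignored) {}
            });
        }
        Thread.sleep(50);
        System.out.println("threads now: " + pool.getPoolSize()); // 3 (2 core + 1 non-core)

        pool.shutdown();
    }
}
```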

RejectedExecutionHandler

If the workQueue is full and the pool has already reached maximumPoolSize, further submissions are handed to the RejectedExecutionHandler; the default AbortPolicy throws a RejectedExecutionException. The pool provides several built-in policies (AbortPolicy, DiscardPolicy, DiscardOldestPolicy, CallerRunsPolicy), and you can also supply a custom handler.
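Rejection can be reproduced with a deliberately tiny pool (illustrative sizes: 1 thread, queue of 1, the default AbortPolicy passed explicitly):

```java
import java.util.concurrent.*;

public class RejectionDemo {
    public static void main(String[] args) {
        // Illustrative values: 1 thread max, queue of 1, default AbortPolicy
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.SECONDS,
                new ArrayBlockingQueue<>(1),
                new ThreadPoolExecutor.AbortPolicy());

        Runnable slow = () -> {
            try { Thread.sleep(300); } catch (InterruptedException ignored) {}
        };
        pool.execute(slow); // occupies the only thread
        pool.execute(slow); // sits in the queue

        try {
            pool.execute(slow); // queue full, pool at maximumPoolSize
        } catch (RejectedExecutionException e) {
            System.out.println("task rejected by AbortPolicy");
        }
        pool.shutdown();
    }
}
```

Swapping in CallerRunsPolicy instead would make the submitting thread run the third task itself, which throttles producers rather than dropping work.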

keepAliveTime

When the total number of threads exceeds corePoolSize, threads that stay idle longer than keepAliveTime are terminated, shrinking the pool until only core threads remain (core threads can also time out if allowCoreThreadTimeOut(true) is set).
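The shrink-back behavior can be watched directly. This sketch (illustrative sizes: 2 core, 4 max, queue of 1, keepAliveTime of 200 ms) grows the pool to four threads with a burst of tasks, then waits past keepAliveTime and observes it fall back to the two core threads.

```java
import java.util.concurrent.*;

public class KeepAliveDemo {
    public static void main(String[] args) throws InterruptedException {
        // Illustrative: 2 core, 4 max, queue of 1, non-core idle timeout 200 ms
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 4, 200L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(1));

        // Submit 5 short tasks: 2 take core threads, 1 queues,
        // and the last 2 force non-core threads -> pool grows to 4.
        for (int i = 0; i < 5; i++) {
            pool.execute(() -> {
                try { Thread.sleep(100); } catch (InterruptedException ignored) {}
            });
        }
        Thread.sleep(50);
        System.out.println("grown: " + pool.getPoolSize());  // 4

        // After the tasks finish and keepAliveTime elapses, the idle
        // non-core threads terminate and only the core threads remain.
        Thread.sleep(600);
        System.out.println("shrunk: " + pool.getPoolSize()); // 2

        pool.shutdown();
    }
}
```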

Summary

The article gives an overview of ThreadPoolExecutor's core mechanisms; for a deeper understanding, readers are encouraged to study the source code in detail.

Tags: Backend, Java, Concurrency, ThreadPoolExecutor
Written by

Java Captain

Focused on Java technologies: SSM, the Spring ecosystem, microservices, MySQL, MyCat, clustering, distributed systems, middleware, Linux, networking, multithreading; occasionally covers DevOps tools like Jenkins, Nexus, Docker, ELK; shares practical tech insights and is dedicated to full‑stack Java development.
