
Understanding Thread Pool Implementations in JDK, Jetty, and Tomcat

This article analyzes the core principles and components of thread pools, explains why they are essential for performance and flow control, and compares the concrete implementations in JDK, Jetty 6, Jetty 8, and Tomcat, providing guidance for developers who wish to use or customize thread pools.


1. Introduction

Before reading the source code, many developers assume thread pools are among the most mysterious parts of a framework; after studying them, it becomes clear that the implementations are clever yet approachable. This article analyzes the essential principles and components of thread pools and examines the source code of the JDK, Jetty 6, Jetty 8, and Tomcat, offering guidance for understanding, better using, or customizing thread pools for specific business scenarios.

2. Significance of Using Thread Pools

Reuse: In systems like web servers that handle many short‑lived requests, creating and destroying threads for each request becomes a performance bottleneck. A thread pool allows worker threads to be reused for multiple tasks, reducing creation overhead and improving overall performance.

Flow Control: Limited server resources can be overwhelmed by excessive concurrency, leading to high CPU usage, context‑switching, and memory exhaustion. Thread pools limit the maximum concurrency and task queue size, providing effective flow control and preventing crashes.
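The flow-control effect is easy to observe with the JDK pool: once both the bounded queue and the pool itself are full, further submissions are rejected instead of being allowed to exhaust memory. A minimal sketch (class name, pool sizes, and task counts are chosen purely for illustration):

```java
import java.util.concurrent.*;

public class FlowControlDemo {
    public static void main(String[] args) throws InterruptedException {
        // Bounded pool: at most 2 workers and 2 queued tasks; excess load is shed.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 2, 0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(2),
                new ThreadPoolExecutor.AbortPolicy());
        int accepted = 0, rejected = 0;
        for (int i = 0; i < 10; i++) {
            try {
                pool.execute(() -> {
                    try { Thread.sleep(200); } catch (InterruptedException ignored) {}
                });
                accepted++;
            } catch (RejectedExecutionException e) {
                rejected++;  // anything beyond 2 running + 2 queued is rejected immediately
            }
        }
        System.out.println("accepted=" + accepted + " rejected=" + rejected);
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```

With the workers blocked in their 200 ms sleep, exactly two tasks run and two wait in the queue, so four of the ten submissions are accepted and six are rejected, capping resource usage regardless of offered load.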

Functionality: The JDK thread pool implementation is highly flexible and offers many features, making it suitable for a variety of scenarios.

3. Key Points of Thread Pool Technology

The technology can be divided into six main aspects:

Worker Thread (worker): A reusable thread that processes multiple jobs during its lifecycle. Controlling the number of workers implements both reuse and flow control.

Job Queue: Stores pending jobs when all workers are busy. Different queue implementations enable priority handling, blocking behavior, bounded or unbounded capacity, etc.

Thread‑Pool Initialization: Determines how many workers are created at startup or on demand.

Job‑Processing Algorithm: Decides whether to execute a job immediately, create a new worker, or enqueue it.

Worker Scaling Algorithm: Adjusts the number of workers based on workload, using metrics such as pending jobs, core/max pool size, and idle time.

Termination Logic: When the application stops, the pool must ensure all jobs are completed or properly discarded.
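Putting the six aspects together, a deliberately simplified pool might look like the sketch below (the class and its methods are invented for illustration): eager worker initialization, an always-enqueue processing algorithm, no dynamic scaling, and poison-pill termination.

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

// A minimal illustrative pool covering the aspects above: reusable workers,
// a job queue, eager initialization, a trivial job-processing algorithm
// (always enqueue), and poison-pill termination. A sketch, not production code.
public class MiniThreadPool {
    private static final Runnable POISON = () -> {};              // termination marker
    private final BlockingQueue<Runnable> jobs = new LinkedBlockingQueue<>();
    private final Thread[] workers;

    public MiniThreadPool(int size) {
        workers = new Thread[size];
        for (int i = 0; i < size; i++) {                          // initialization: start all workers up front
            workers[i] = new Thread(this::workerLoop, "worker-" + i);
            workers[i].start();
        }
    }

    private void workerLoop() {                                   // worker: reused across many jobs
        try {
            while (true) {
                Runnable job = jobs.take();
                if (job == POISON) return;                        // termination logic
                job.run();
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    public void execute(Runnable job) { jobs.add(job); }          // job processing: always enqueue

    public void shutdown() throws InterruptedException {          // drain remaining jobs, then stop
        for (Thread ignored : workers) jobs.add(POISON);          // one pill per worker, after real jobs
        for (Thread w : workers) w.join();
    }

    public static void main(String[] args) throws InterruptedException {
        MiniThreadPool pool = new MiniThreadPool(4);
        AtomicInteger done = new AtomicInteger();
        for (int i = 0; i < 100; i++) pool.execute(done::incrementAndGet);
        pool.shutdown();
        System.out.println("completed=" + done.get());
    }
}
```

Because the queue is FIFO and the poison pills are enqueued after all real jobs, shutdown here guarantees every submitted job completes before the workers exit, which is one common answer to the termination question above.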

4. Implementation Details of Various Thread Pools

Based on the above points, the following implementations are compared.

Worker and Queue Implementations:

JDK: Workers are stored in a HashSet protected by a ReentrantLock; each worker implements Runnable. The job queue is a BlockingQueue (e.g., SynchronousQueue, ArrayBlockingQueue, LinkedBlockingQueue, PriorityBlockingQueue) supplied via the constructor.

Jetty 6: Workers are also stored in a HashSet, protected by a synchronized block; workers extend Thread. Jobs are stored in an array that expands when needed, effectively providing an unbounded queue.

Jetty 8: Workers are stored in a ConcurrentLinkedQueue, with an AtomicInteger tracking the count. The job queue is a BlockingQueue, as in the JDK; if none is configured, it defaults to ArrayBlockingQueue or Jetty's custom BlockingArrayQueue.

Tomcat: Relies on the JDK's ThreadPoolExecutor implementation and reuses its queue mechanisms.
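The choice of queue largely determines a JDK pool's character: a SynchronousQueue holds nothing, forcing a direct hand-off to a worker (the cached-pool style), while an unbounded LinkedBlockingQueue accepts every job, so the pool never grows past corePoolSize (the fixed-pool style). A small illustration (class name and sizes are for demonstration only):

```java
import java.util.concurrent.*;

public class QueueChoiceDemo {
    public static void main(String[] args) {
        // SynchronousQueue has zero capacity: every submission must be handed
        // directly to a worker, so the pool grows toward its maximum as needed.
        ThreadPoolExecutor handoff = new ThreadPoolExecutor(
                0, Integer.MAX_VALUE, 60L, TimeUnit.SECONDS, new SynchronousQueue<>());

        // An unbounded LinkedBlockingQueue never fails insertion, so step 3 of the
        // JDK algorithm never fires and the pool stays at corePoolSize workers.
        ThreadPoolExecutor buffered = new ThreadPoolExecutor(
                4, 4, 0L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<>());

        System.out.println(handoff.getQueue().remainingCapacity());  // zero-capacity hand-off
        System.out.println(buffered.getQueue().remainingCapacity()); // effectively unbounded
        handoff.shutdown();
        buffered.shutdown();
    }
}
```

These two configurations mirror what Executors.newCachedThreadPool and Executors.newFixedThreadPool construct internally.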

Thread-Pool Construction and Job-Processing Algorithms:

JDK: Construction takes corePoolSize, maximumPoolSize, keepAliveTime, and workQueue, allowing flexible initialization. Workers are not started until tasks arrive, though they can be pre-started. Job processing: 1) if workers < corePoolSize, create a worker to handle the job; 2) if workers ≥ corePoolSize, enqueue the job; 3) if enqueuing fails and workers < maximumPoolSize, create a new worker; 4) otherwise reject the task.

Jetty 6: Parameters are _spawnOrShrinkAt, _minThreads, _maxThreads, and _maxIdleTimeMs; _minThreads workers are created and started immediately. Job processing: 1) dispatch the job to an idle worker if one is available; 2) otherwise store the job in the array; 3) if pending jobs exceed the expansion threshold and workers < max, add workers; 4) otherwise do nothing.

Jetty 8: Takes parameters similar to Jetty 6, but without _spawnOrShrinkAt, and starts _minThreads workers at startup. Job processing: simply enqueue the job; workers pull from the queue.

Tomcat: Uses the JDK constructor; workers are started on demand. Extends the JDK algorithm with additional statistics such as submittedCount.
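The JDK's four-step dispatch order can be observed directly with a tiny pool (core 1, max 2, queue capacity 1); each submission below lands in exactly the branch the algorithm predicts (class name chosen for illustration):

```java
import java.util.concurrent.*;

public class DispatchOrderDemo {
    public static void main(String[] args) throws InterruptedException {
        CountDownLatch release = new CountDownLatch(1);
        Runnable blocker = () -> {
            try { release.await(); } catch (InterruptedException ignored) {}
        };
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 2, 0L, TimeUnit.MILLISECONDS, new ArrayBlockingQueue<>(1));

        pool.execute(blocker);   // step 1: workers < corePoolSize -> new worker
        System.out.println("after 1: threads=" + pool.getPoolSize()
                + " queued=" + pool.getQueue().size());
        pool.execute(blocker);   // step 2: core full -> job is enqueued
        System.out.println("after 2: threads=" + pool.getPoolSize()
                + " queued=" + pool.getQueue().size());
        pool.execute(blocker);   // step 3: queue full, workers < max -> second worker
        System.out.println("after 3: threads=" + pool.getPoolSize()
                + " queued=" + pool.getQueue().size());
        try {
            pool.execute(blocker); // step 4: queue and pool both full -> rejected
        } catch (RejectedExecutionException e) {
            System.out.println("task 4 rejected");
        }
        release.countDown();       // unblock the workers so the pool can drain
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```

Because every running task blocks on the latch, the pool-size and queue-size readings are deterministic: 1/0, then 1/1, then 2/1, then rejection.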

Worker Scaling Mechanisms:

JDK: Increase: 1) when a job arrives and workers < corePoolSize, create a worker; 2) when workers are between core and max and queue insertion fails, create a worker; 3) when corePoolSize is raised explicitly, add workers. Decrease: 1) workers idle beyond keepAliveTime exit until only corePoolSize remain; 2) if allowCoreThreadTimeOut is true, core workers may also time out, with at least one worker retained only while the queue is non-empty.

Jetty 6: Increase: 1) _minThreads workers are started at pool creation; 2) when pending jobs exceed the threshold and workers < max, add workers; 3) workers are adjusted when setMinThreads is called. Decrease: workers are reduced when all three conditions hold: no pending jobs, workers > _minThreads, and idle threads exceed the threshold.

Jetty 8: Increase: 1) _minThreads workers are started at startup; 2) workers are added when there are no idle workers or the idle count is less than the number of pending jobs; 3) adjusted on setMinThreads. Decrease: workers are reduced when the queue is empty, total workers > _minThreads, and idle time exceeds the timeout.

Tomcat: Increase: same as the JDK. Decrease: reuses the JDK's reduction logic, plus delayed-termination parameters; when the limits are exceeded, an exception is thrown to stop the worker.
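The JDK's shrink path is driven by keepAliveTime and allowCoreThreadTimeOut; with the flag enabled, even core workers time out once the pool goes idle, as this sketch shows (class name, sizes, and timeouts are illustrative):

```java
import java.util.concurrent.*;

public class ShrinkDemo {
    public static void main(String[] args) throws Exception {
        // keepAliveTime of 100 ms; normally only workers above corePoolSize expire.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 4, 100, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<>());
        pool.allowCoreThreadTimeOut(true);   // let core workers expire when idle too
        pool.prestartAllCoreThreads();       // eagerly start the 2 core workers
        System.out.println("started=" + pool.getPoolSize());

        Thread.sleep(1000);                  // stay idle well past keepAliveTime
        System.out.println("after idle=" + pool.getPoolSize());
        pool.shutdown();
    }
}
```

Without the allowCoreThreadTimeOut call, the second reading would stay at 2, since the JDK never reduces the pool below corePoolSize by default.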

5. Summary

Comparing the implementations, JDK’s thread pool is the most flexible, feature‑rich, and extensible. Tomcat builds directly on JDK’s pool with extra functionality. Jetty 6 implements a fully custom pool with complex coupling and poor extensibility, while Jetty 8 simplifies the design by leveraging JDK’s concurrent containers and atomic variables, moving closer to the JDK approach.

6. Reference Source Code

JDK: java.util.concurrent.ThreadPoolExecutor

Jetty 6: org.mortbay.thread.QueuedThreadPool

Jetty 8: org.eclipse.jetty.util.thread.QueuedThreadPool

Tomcat: org.apache.tomcat.util.threads.ThreadPoolExecutor

Tags: Java, Concurrency, JDK, thread pool, Tomcat, Jetty
Written by

Art of Distributed System Architecture Design

Introductions to large-scale distributed system architectures; insights and knowledge sharing on large-scale internet system architecture; front-end web architecture overviews; practical tips and experiences with PHP, JavaScript, Erlang, C/C++ and other languages in large-scale internet system development.
