Custom Rejection Policies and Blocking Strategies for Java ThreadPoolExecutor
The article explains how to design a producer‑consumer model using Java's BlockingQueue, customize ThreadPoolExecutor's rejection policies—especially replacing the default AbortPolicy with a blocking strategy via a custom RejectedExecutionHandler—to safely handle full queues and improve concurrency control.
When using thread pools, the default rejection policy (AbortPolicy, which throws an exception) often applies, but developers can define custom strategies to better manage producer-consumer workloads.
In concurrent programming, the producer-consumer pattern is common; the production rate usually exceeds consumption, making queue length and speed matching critical. Using the java.util.concurrent queues (e.g., ArrayBlockingQueue, LinkedBlockingQueue) ensures thread-safe production and consumption, but a bounded capacity must be set to avoid an OutOfMemoryError.
If the queue becomes full, tasks should not be dropped. Instead, the producer can block until space is available, which BlockingQueue supports naturally via put(). Likewise, when the queue is empty, consumers can block using take(), or use the timed poll() variant to wait for new tasks without waiting forever when production stops.
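The bounded-queue behavior described above can be sketched as follows (class and task names are illustrative, not from the original article): the producer's put() blocks when the queue is full, and the consumer's timed poll() gives up once production stops.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

public class BoundedQueueDemo {
    public static void main(String[] args) throws InterruptedException {
        // Bounded queue: put() blocks the producer when the queue is full.
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(2);

        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < 5; i++) {
                    queue.put("task-" + i); // blocks until space is available
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        Thread consumer = new Thread(() -> {
            try {
                while (true) {
                    // Timed poll: stop after 1s with no new tasks
                    // instead of blocking forever on take().
                    String task = queue.poll(1, TimeUnit.SECONDS);
                    if (task == null) break;
                    System.out.println("consumed " + task);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        producer.start();
        consumer.start();
        producer.join();
        consumer.join();
    }
}
```

Because the queue capacity is 2, the producer is throttled to the consumer's pace without any explicit locking in application code.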
ThreadPoolExecutor already incorporates a BlockingQueue and consumer logic, offering advantages such as dynamic thread count adjustment. However, its execute() method calls the non‑blocking offer() on the queue, so a full queue does not block the producer.
The executor uses a RejectedExecutionHandler when the queue is full; the default AbortPolicy throws a RejectedExecutionException. Among the alternatives, CallerRunsPolicy runs the task in the submitting thread, which pauses submission but does not block the producer.
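A small demonstration of the default behavior (pool sizes and the latch are illustrative choices): with one worker and a queue of capacity 1, the third submission finds the queue full and AbortPolicy throws.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class RejectionDemo {
    public static void main(String[] args) throws InterruptedException {
        // One worker thread, queue of capacity 1, default AbortPolicy.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(1));

        CountDownLatch gate = new CountDownLatch(1);
        Runnable blocker = () -> {
            try { gate.await(); } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        };

        pool.execute(blocker);   // occupies the single worker
        pool.execute(() -> {});  // fills the queue (offer() succeeds)
        try {
            pool.execute(() -> {}); // queue full: AbortPolicy rejects
        } catch (RejectedExecutionException e) {
            System.out.println("third task rejected");
        }

        gate.countDown();
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```

Swapping in new ThreadPoolExecutor.CallerRunsPolicy() as the handler would instead run the third task on the main thread, which slows submission but, as noted, is not the same as blocking on queue space.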
To achieve true blocking behavior, a custom RejectedExecutionHandler can be implemented to call BlockingQueue.put() when the queue is full, causing the producer to wait until space is freed. This simplifies the design by focusing only on producer and consumer threads while the executor handles the rest.
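A minimal sketch of such a handler (the class name BlockingSubmitPolicy is illustrative): on rejection it re-inserts the task with the queue's blocking put(), so the submitting thread waits for space instead of losing the task.

```java
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.RejectedExecutionHandler;
import java.util.concurrent.ThreadPoolExecutor;

// Blocks the submitting thread by re-inserting the rejected
// task into the executor's queue with put().
public class BlockingSubmitPolicy implements RejectedExecutionHandler {
    @Override
    public void rejectedExecution(Runnable r, ThreadPoolExecutor executor) {
        if (executor.isShutdown()) {
            throw new RejectedExecutionException("executor has been shut down");
        }
        try {
            executor.getQueue().put(r); // blocks until space is freed
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new RejectedExecutionException(
                    "interrupted while waiting for queue space", e);
        }
    }
}
```

It would be wired in like any other handler, e.g. new ThreadPoolExecutor(2, 2, 0L, TimeUnit.MILLISECONDS, new ArrayBlockingQueue<>(100), new BlockingSubmitPolicy()). One caveat: putting directly into the queue bypasses the executor's usual thread-growth logic, so this fits pools where core and maximum sizes are equal.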
The approach reduces code size, avoids many concurrency pitfalls, and can be combined with other techniques such as semaphore‑based entry control if needed.
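As an example of the semaphore-based alternative mentioned above, the classic bounded-executor pattern wraps submission in a Semaphore (the BoundedExecutor name and bound parameter here are illustrative): submitters acquire a permit before handing work to the pool and the task releases it when done, so submission blocks once the in-flight limit is reached.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.Semaphore;

// Bounds in-flight work with a Semaphore: acquire before submitting,
// release when the task finishes, so submitters block at the limit.
public class BoundedExecutor {
    private final ExecutorService delegate;
    private final Semaphore permits;

    public BoundedExecutor(ExecutorService delegate, int bound) {
        this.delegate = delegate;
        this.permits = new Semaphore(bound);
    }

    public void submit(Runnable task) throws InterruptedException {
        permits.acquire(); // blocks when 'bound' tasks are queued or running
        try {
            delegate.execute(() -> {
                try {
                    task.run();
                } finally {
                    permits.release();
                }
            });
        } catch (RejectedExecutionException e) {
            permits.release(); // task never ran; return the permit
            throw e;
        }
    }
}
```

Unlike the custom handler, this approach works with any ExecutorService and does not touch the queue directly, at the cost of a small wrapper around every submission.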