Prevent Data Loss in Java Thread Pools When Services Crash

This article explains Java thread pools: their advantages, internal mechanics, and common pitfalls such as oversized queues, excessive threads, and data loss on crashes. It then presents a persistence-based solution, database-stored tasks plus scheduled retries, to ensure no data is lost when a service goes down.


1. What Is a Thread Pool?

Before thread pools, a new thread was created for every task by extending Thread or implementing Runnable. Creating and destroying a thread per task is expensive, can exhaust memory under load, and makes thread reuse impossible.

Java introduced thread pools to reuse threads, reducing resource consumption, improving response speed, and enhancing manageability.

2. Thread Pool Mechanics

public ThreadPoolExecutor(int corePoolSize,
    int maximumPoolSize,
    long keepAliveTime,
    TimeUnit unit,
    BlockingQueue<Runnable> workQueue,
    ThreadFactory threadFactory,
    RejectedExecutionHandler handler)

corePoolSize: number of core threads kept in the pool, even when idle.

maximumPoolSize: upper limit on the total number of threads.

keepAliveTime: how long an idle thread beyond the core size survives before being reclaimed.

unit: time unit for keepAliveTime.

workQueue: queue that holds tasks waiting for a free thread.

threadFactory: factory used to create new worker threads.

handler: rejection policy applied when the queue is full and the pool is at its maximum.

Thread pool workflow diagram:

[Figure: Thread pool flowchart]

Processing steps: when a task is submitted, the pool creates a new core thread if fewer than corePoolSize threads are running; otherwise the task is placed in the work queue; if the queue is full, additional threads are created up to maximumPoolSize; threads beyond the core size that stay idle for keepAliveTime are reclaimed; and if the queue is full and the pool is already at its maximum, the rejection handler is invoked.
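A minimal sketch of that workflow (the class name, pool sizes, and task count here are illustrative, not from the article): with 2 core threads, a queue of capacity 2, and a maximum of 4 threads, the first two tasks get core threads, the next two wait in the queue, two more trigger expansion to the maximum, and the seventh is rejected.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolWorkflowDemo {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2,                                    // corePoolSize
                4,                                    // maximumPoolSize
                30L, TimeUnit.SECONDS,                // keepAliveTime + unit
                new ArrayBlockingQueue<>(2),          // bounded workQueue
                Executors.defaultThreadFactory(),     // threadFactory
                new ThreadPoolExecutor.AbortPolicy()  // handler: throw on rejection
        );

        for (int i = 1; i <= 7; i++) {
            final int id = i;
            try {
                pool.execute(() -> {
                    System.out.println("task " + id + " on " + Thread.currentThread().getName());
                    try {
                        Thread.sleep(1000); // keep workers busy so later tasks queue up
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                });
            } catch (RejectedExecutionException e) {
                System.out.println("task " + id + " rejected"); // the seventh task lands here
            }
        }
        pool.shutdown();
    }
}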

3. Common Thread Pool Issues

3.1 Oversized Queue

Executors.newFixedThreadPool uses a LinkedBlockingQueue whose default capacity is Integer.MAX_VALUE, so an effectively unbounded backlog of queued tasks can cause an OutOfMemoryError (OOM):

public static ExecutorService newFixedThreadPool(int nThreads, ThreadFactory threadFactory) {
    return new ThreadPoolExecutor(nThreads, nThreads, 0L,
        TimeUnit.MILLISECONDS, new LinkedBlockingQueue<Runnable>(),
        threadFactory);
}

3.2 Too Many Threads

Executors.newCachedThreadPool can create up to Integer.MAX_VALUE threads, so a burst of tasks also risks OOM:

public static ExecutorService newCachedThreadPool() {
    return new ThreadPoolExecutor(0, Integer.MAX_VALUE,
        60L, TimeUnit.SECONDS, new SynchronousQueue<Runnable>());
}

3.3 Data Loss on Crash

If the service crashes or restarts while the pool is working, the tasks waiting in the in-memory queue, and any currently executing, are lost.

A custom thread pool with a bounded queue mitigates the OOM risk, although it does not prevent data loss on a crash:

ThreadPoolExecutor executor = new ThreadPoolExecutor(8, 10, 30L,
    TimeUnit.MILLISECONDS,
    new ArrayBlockingQueue<Runnable>(300), // bounded queue caps memory use
    Executors.defaultThreadFactory());     // any ThreadFactory works here

4. Ensuring No Data Loss

Persist tasks to a database before submitting to the pool, and use a scheduled job to poll pending tasks and re‑submit them.
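A minimal sketch of the write path, assuming plain JDBC and a hypothetical task table with columns (id, payload, status, fail_count); the class, table, and column names are all illustrative:

import java.sql.Connection;
import java.sql.PreparedStatement;
import javax.sql.DataSource;

public class OrderService {
    private final DataSource dataSource;

    public OrderService(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    // Write the business row and the pending task row in ONE transaction,
    // so a crash either loses both (the caller retries) or neither.
    public void saveOrderAndTask(String orderNo, String taskPayload) throws Exception {
        try (Connection conn = dataSource.getConnection()) {
            conn.setAutoCommit(false);
            try (PreparedStatement order = conn.prepareStatement(
                         "INSERT INTO orders (order_no) VALUES (?)");
                 PreparedStatement task = conn.prepareStatement(
                         "INSERT INTO task (payload, status, fail_count) VALUES (?, 'pending', 0)")) {
                order.setString(1, orderNo);
                order.executeUpdate();
                task.setString(1, taskPayload);
                task.executeUpdate();
                conn.commit();
            } catch (Exception e) {
                conn.rollback();
                throw e;
            }
        }
    }
}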

[Figure: Original system flow]

Process business logic 1 and write a task record (status “pending”) in the same transaction. A scheduler periodically fetches pending tasks, submits them to the pool, and updates status to “completed” after success.
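A minimal sketch of that scheduler, again assuming the hypothetical task table above; a plain ScheduledExecutorService and an illustrative 5-second interval stand in for whatever scheduling framework the service actually uses:

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import javax.sql.DataSource;

public class PendingTaskPoller {
    private final DataSource dataSource;
    private final ExecutorService workerPool; // e.g. the bounded pool from section 3.3
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    public PendingTaskPoller(DataSource dataSource, ExecutorService workerPool) {
        this.dataSource = dataSource;
        this.workerPool = workerPool;
    }

    public void start() {
        scheduler.scheduleWithFixedDelay(this::pollOnce, 0, 5, TimeUnit.SECONDS);
    }

    private void pollOnce() {
        try (Connection conn = dataSource.getConnection();
             Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery(
                     "SELECT id, payload FROM task WHERE status = 'pending' LIMIT 100")) {
            while (rs.next()) {
                long id = rs.getLong("id");
                String payload = rs.getString("payload");
                workerPool.execute(() -> runBusinessLogic2(id, payload));
            }
        } catch (SQLException e) {
            e.printStackTrace(); // log it; the next tick will retry
        }
    }

    private void runBusinessLogic2(long id, String payload) {
        // execute business logic 2, then mark the task completed
        // (see the idempotency sketch below)
    }
}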

[Figure: Optimized system flow]

Include a failure‑count field to limit retries; when retries exceed a threshold, mark the task as “failed” for manual handling.
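One way to implement that bookkeeping, assuming the fail_count column above and an illustrative threshold of 5:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class TaskRetry {
    private static final int MAX_RETRIES = 5; // illustrative threshold

    // On failure, bump fail_count; once it reaches the threshold, mark the
    // task 'failed' so the poller skips it and it is handled manually.
    public static void recordFailure(Connection conn, long taskId) throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(
                "UPDATE task SET status = CASE WHEN fail_count + 1 >= ? "
                        + "THEN 'failed' ELSE 'pending' END, "
                        + "fail_count = fail_count + 1 WHERE id = ?")) {
            ps.setInt(1, MAX_RETRIES);
            ps.setLong(2, taskId);
            ps.executeUpdate();
        }
    }
}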

Business logic 2 must be idempotent so that repeated execution does not affect results.
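One common idempotency guard (a sketch, not the article's exact scheme) is to claim the row with a conditional update, so a task that is polled or executed twice takes effect only once:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class TaskCompletion {
    // Returns true only for the first successful caller; a repeated or
    // concurrent execution sees 0 updated rows and skips its side effects.
    public static boolean markCompleted(Connection conn, long taskId) throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(
                "UPDATE task SET status = 'completed' "
                        + "WHERE id = ? AND status = 'pending'")) {
            ps.setLong(1, taskId);
            return ps.executeUpdate() == 1;
        }
    }
}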

Tags: Java, Backend Development, Concurrency, Thread Pool, Data Persistence, OOM Prevention
Written by Su San Talks Tech

Su San, former staff at several leading tech companies, is a top creator on Juejin and a premium creator on CSDN, and runs the free coding practice site www.susan.net.cn.