
Common Java Concurrency Patterns: Singleton, Future, Producer‑Consumer, Master‑Worker, and ForkJoin

This article explains several classic Java concurrency patterns—including various Singleton implementations, the Future and FutureTask mechanisms, producer‑consumer queues, Master‑Worker coordination, and the ForkJoin framework—providing code examples and practical usage notes for each.

Top Architect

Singleton

The Singleton pattern is used for global object management. Two main categories are lazy and eager initialization.

Lazy (synchronized method)

public class Singleton {
    private static Singleton single = null;
    private Singleton() {}
    public static synchronized Singleton getInstance() {
        if (single == null) {
            single = new Singleton();
        }
        return single;
    }
}

This approach acquires a lock on every call and is therefore not recommended for high‑performance scenarios.

Lazy (double‑checked locking + volatile)

public class Singleton {
    private static volatile Singleton singleton = null;
    private Singleton() {}
    public static Singleton getInstance() {
        if (singleton == null) {
            synchronized (Singleton.class) {
                if (singleton == null) {
                    singleton = new Singleton();
                }
            }
        }
        return singleton;
    }
}

This optimizes the previous version: the lock is taken only while the instance is first initialized, and the volatile keyword prevents instruction reordering from exposing a partially constructed object.

Lazy (static inner class)

public class Singleton {
    private static class LazyHolder {
        private static final Singleton INSTANCE = new Singleton();
    }
    private Singleton() {}
    public static final Singleton getInstance() {
        return LazyHolder.INSTANCE;
    }
}

The static inner‑class technique solves both synchronization and verbosity problems and is the recommended lazy implementation.

Eager (hungry) Singleton

public class Singleton1 {
    private static final Singleton1 single = new Singleton1();
    private Singleton1() {}
    public static Singleton1 getInstance() {
        return single;
    }
}

The instance is created at class loading time, which may be unnecessary if the object is never used.

Future Pattern

The Future pattern enables asynchronous method calls, similar to Ajax requests. The caller receives a placeholder object immediately, while the actual computation runs in another thread.

The JDK provides built‑in support via the Future interface. Typical usage involves FutureTask:

public class FutureDemo1 {
    public static void main(String[] args) throws InterruptedException, ExecutionException {
        FutureTask<String> future = new FutureTask<>(new Callable<String>() {
            @Override
            public String call() throws Exception {
                return new RealData().costTime();
            }
        });
        ExecutorService service = Executors.newCachedThreadPool();
        service.submit(future);
        System.out.println("RealData method called");
        doOtherThing();
        System.out.println(future.get());
    }
    private static void doOtherThing() throws InterruptedException {
        Thread.sleep(2000L);
    }
}

class RealData {
    public String costTime() {
        try {
            Thread.sleep(1000L);
            return "result";
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        return "exception";
    }
}

Alternatively, a Callable can be submitted directly to an ExecutorService, which returns a Future object without an explicit FutureTask. The Future API also provides cancel, isCancelled, isDone, and timed get methods.
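To illustrate the ExecutorService route, here is a minimal sketch (class and method names are hypothetical, not from the original article) that submits a Callable directly and uses the timed get:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

public class FutureDemo2 {
    // Submitting a Callable directly returns a Future; no explicit FutureTask is needed.
    static String compute() throws Exception {
        ExecutorService service = Executors.newCachedThreadPool();
        Future<String> future = service.submit(() -> {
            Thread.sleep(500L);   // simulate the slow RealData computation
            return "result";
        });
        // A timed get() throws TimeoutException if the result is not ready in time.
        String value = future.get(2, TimeUnit.SECONDS);
        service.shutdown();
        return value;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("value: " + compute());
    }
}
```

The lambda returns a value, so the compiler resolves it to Callable rather than Runnable, and submit hands back a typed Future.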

Producer‑Consumer Pattern

The producer‑consumer model decouples data generation from processing using a shared buffer, typically a BlockingQueue. Producers place PCData objects into the queue; consumers take them out and perform calculations.

while (isRunning) {
    Thread.sleep(r.nextInt(SLEEP_TIME));
    data = new PCData(count.incrementAndGet());
    System.out.println(data + " is put into queue");
    if (!queue.offer(data, 2, TimeUnit.SECONDS)) {
        System.out.println("failed to put data : " + data);
    }
}

while (true) {
    PCData data = queue.take();
    if (data != null) {
        int re = data.getData() * 10;
        System.out.println("after cal, value is : " + re);
        Thread.sleep(r.nextInt(SLEEP_TIME));
    }
}

Using a BlockingQueue ensures thread‑safe hand‑off; for higher performance, a lock‑free ConcurrentLinkedQueue can be used.
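Since the two loops above are only fragments, a complete runnable version might look like the following sketch (PCDemo and the fixed item count are assumptions for illustration; the original uses PCData objects and an isRunning flag):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class PCDemo {
    static List<Integer> run() throws InterruptedException {
        BlockingQueue<Integer> queue = new LinkedBlockingQueue<>(10);
        AtomicInteger count = new AtomicInteger();
        List<Integer> results = Collections.synchronizedList(new ArrayList<>());
        ExecutorService pool = Executors.newFixedThreadPool(2);

        // Producer: offer() with a timeout never blocks forever on a full queue.
        pool.submit(() -> {
            try {
                for (int i = 0; i < 5; i++) {
                    int data = count.incrementAndGet();
                    if (!queue.offer(data, 2, TimeUnit.SECONDS)) {
                        System.out.println("failed to put data: " + data);
                    }
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        // Consumer: take() blocks until data arrives, so no null check is needed.
        pool.submit(() -> {
            try {
                for (int i = 0; i < 5; i++) {
                    results.add(queue.take() * 10);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        return results;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run());
    }
}
```

With a single producer, a FIFO queue, and a single consumer, the results come out in submission order.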

Master‑Worker Pattern

In the Master‑Worker model, a Master thread distributes tasks to multiple Worker threads, collects their results, and aggregates them. The example shows a Master that holds a ConcurrentLinkedQueue<TaskDemo> of tasks and a map of partial results.

public class MasterDemo {
    private ConcurrentLinkedQueue<TaskDemo> workQueue = new ConcurrentLinkedQueue<>();
    private HashMap<String, Thread> workers = new HashMap<>();
    private ConcurrentHashMap<String, Object> resultMap = new ConcurrentHashMap<>();
    // constructor, submit, execute, isComplete, getResult methods ...
}

public class WorkerDemo implements Runnable {
    private ConcurrentLinkedQueue<TaskDemo> workQueue;
    private ConcurrentHashMap<String, Object> resultMap;
    @Override
    public void run() {
        while (true) {
            TaskDemo input = workQueue.poll();
            if (input == null) break;
            int result = input.getPrice();
            resultMap.put(input.getId() + "", result);
            System.out.println("Task finished, thread: " + Thread.currentThread().getName());
        }
    }
    // getters and setters ...
}

The main method creates a Master with ten Workers, submits 100 tasks, starts execution, and finally prints the aggregated sum.
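Since the Master's methods are elided above, here is a self-contained sketch of the same flow (MasterWorkerSketch is a hypothetical name, and the "price" of a task is assumed to be the square of its id purely for illustration, since TaskDemo.getPrice() is not shown):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;

public class MasterWorkerSketch {
    static long run(int workerCount, int taskCount) throws InterruptedException {
        ConcurrentLinkedQueue<Integer> workQueue = new ConcurrentLinkedQueue<>();
        ConcurrentHashMap<String, Integer> resultMap = new ConcurrentHashMap<>();

        // Master: submit the tasks (here a task is just an int id).
        for (int i = 1; i <= taskCount; i++) workQueue.add(i);

        // Master: start workers; each drains the shared queue until it is empty.
        Thread[] workers = new Thread[workerCount];
        for (int i = 0; i < workerCount; i++) {
            workers[i] = new Thread(() -> {
                Integer id;
                while ((id = workQueue.poll()) != null) {
                    resultMap.put(id.toString(), id * id);  // assumed "price" computation
                }
            });
            workers[i].start();
        }
        for (Thread w : workers) w.join();  // wait instead of polling an isComplete() flag

        // Master: aggregate the partial results.
        long sum = 0;
        for (Integer v : resultMap.values()) sum += v;
        return sum;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("final result: " + run(10, 100));
    }
}
```

Joining the worker threads replaces the busy-wait on isComplete() that the original Master uses; both approaches guarantee the result map is fully populated before aggregation.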

ForkJoin Thread Pool

Introduced in JDK 7, the ForkJoin framework recursively splits a large computation into smaller subtasks. The example computes the sum of numbers from 0 to 20 000 000 using a CountTask that extends RecursiveTask.

public class CountTask extends RecursiveTask<Long> {
    private static final int THRESHOLD = 10000;
    private long start, end;
    public CountTask(long start, long end) { this.start = start; this.end = end; }
    @Override
    protected Long compute() {
        long sum = 0;
        boolean canCompute = (end - start) < THRESHOLD;
        if (canCompute) {
            for (long i = start; i <= end; i++) sum += i;
        } else {
            long step = (start + end) / 100;
            ArrayList<CountTask> subTasks = new ArrayList<>();
            long pos = start;
            for (int i = 0; i < 100; i++) {
                long lastOne = Math.min(pos + step, end);
                CountTask sub = new CountTask(pos, lastOne);
                pos = lastOne + 1;
                subTasks.add(sub);
                sub.fork();
            }
            for (CountTask t : subTasks) sum += t.join();
        }
        return sum;
    }
    public static void main(String[] args) throws ExecutionException, InterruptedException {
        ForkJoinPool pool = new ForkJoinPool();
        CountTask task = new CountTask(0, 20000000L);
        ForkJoinTask<Long> result = pool.submit(task);
        System.out.println("sum result : " + result.get());
    }
}

The ForkJoin pool manages a lock‑free work‑stealing queue; idle threads are parked and later unparked when new tasks become available.
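For tasks that produce no result, the framework also offers RecursiveAction. The following sketch (IncrementAction is a hypothetical example, not from the original article) splits an array increment in half with invokeAll, which forks one subtask and computes the other in the current thread:

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveAction;

public class IncrementAction extends RecursiveAction {
    private static final int THRESHOLD = 1000;
    private final long[] data;
    private final int start, end;

    IncrementAction(long[] data, int start, int end) {
        this.data = data;
        this.start = start;
        this.end = end;
    }

    @Override
    protected void compute() {
        if (end - start <= THRESHOLD) {
            // Small enough: do the work directly.
            for (int i = start; i < end; i++) data[i]++;
        } else {
            int mid = (start + end) >>> 1;
            // invokeAll forks one half and runs the other in the current thread.
            invokeAll(new IncrementAction(data, start, mid),
                      new IncrementAction(data, mid, end));
        }
    }

    public static void main(String[] args) {
        long[] data = new long[10_000];
        ForkJoinPool.commonPool().invoke(new IncrementAction(data, 0, data.length));
        System.out.println("first: " + data[0] + ", last: " + data[data.length - 1]);
    }
}
```

Since Java 8, ForkJoinPool.commonPool() provides a shared pool, so creating a dedicated ForkJoinPool as in CountTask is optional.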

Overall, the article provides a practical reference for implementing common concurrency designs in Java backend applications.

Tags: backend, Design Patterns, Java, Concurrency, Multithreading, producer-consumer, Future, ForkJoin