
Batch Request Merging in Spring Boot to Reduce Database Connection Overhead

The article explains how to merge multiple user queries into a single batch request using a queue, a scheduled thread pool, and CompletableFuture in Spring Boot, thereby reducing database connections and improving resource utilization under high concurrency.


This article addresses the problem of multiple users (e.g., three users with IDs 1, 2, 3) each sending separate queries to a database, which wastes valuable connection resources, and proposes merging those requests into a single SQL statement.

Technical means include a LinkedBlockingQueue for request buffering, a ScheduledThreadPoolExecutor for periodic batch processing, and CompletableFuture (Java 8) to deliver results back to the original callers.

Code implementation – Service interface and batch query

public interface UserService {
    Map<String, Users> queryUserByIdBatch(List<UserWrapBatchService.Request> userReqs);
}

@Service
public class UserServiceImpl implements UserService {
    @Resource
    private UsersMapper usersMapper;

    @Override
    public Map<String, Users> queryUserByIdBatch(List<UserWrapBatchService.Request> userReqs) {
        List<Long> userIds = userReqs.stream()
            .map(UserWrapBatchService.Request::getUserId)
            .collect(Collectors.toList());
        QueryWrapper<Users> queryWrapper = new QueryWrapper<>();
        queryWrapper.in("id", userIds); // combine all ids into one SQL IN clause
        List<Users> users = usersMapper.selectList(queryWrapper);
        Map<Long, List<Users>> userGroup = users.stream()
            .collect(Collectors.groupingBy(Users::getId));
        Map<String, Users> result = new HashMap<>();
        userReqs.forEach(val -> {
            List<Users> usersList = userGroup.get(val.getUserId());
            if (!CollectionUtils.isEmpty(usersList)) {
                result.put(val.getRequestId(), usersList.get(0));
            } else {
                result.put(val.getRequestId(), null);
            }
        });
        return result;
    }
}
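The heart of the method above is the requestId-to-user mapping, which can be exercised without MyBatis-Plus. A minimal, framework-free sketch of that step (the User and Req records are hypothetical stand-ins for the article's Users bean and Request class):

```java
import java.util.*;
import java.util.stream.Collectors;

public class GroupDemo {
    // hypothetical stand-ins for the article's Users bean and Request class
    record User(Long id, String name) {}
    record Req(String requestId, Long userId) {}

    // mirrors queryUserByIdBatch: group users by id, then map each requestId
    // to the first matching user (or null when no user was found)
    static Map<String, User> mapResults(List<Req> reqs, List<User> users) {
        Map<Long, List<User>> byId = users.stream().collect(Collectors.groupingBy(User::id));
        Map<String, User> result = new HashMap<>();
        for (Req r : reqs) {
            List<User> matches = byId.get(r.userId());
            result.put(r.requestId(), matches == null || matches.isEmpty() ? null : matches.get(0));
        }
        return result;
    }

    public static void main(String[] args) {
        List<Req> reqs = List.of(new Req("r1", 1L), new Req("r2", 2L), new Req("r3", 9L));
        List<User> users = List.of(new User(1L, "alice"), new User(2L, "bob"));
        Map<String, User> res = mapResults(reqs, users);
        System.out.println(res.get("r1").name()); // alice
        System.out.println(res.get("r3"));        // null (no user with id 9)
    }
}
```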

Batch request merging service

package com.springboot.sample.service.impl;

import com.springboot.sample.bean.Users;
import com.springboot.sample.service.UserService;
import org.springframework.stereotype.Service;
import javax.annotation.PostConstruct;
import javax.annotation.Resource;
import java.util.*;
import java.util.concurrent.*;

@Service
public class UserWrapBatchService {
    @Resource
    private UserService userService;

    // cap on how many requests are merged into a single SQL statement
    public static int MAX_TASK_NUM = 100;

    public class Request {
        String requestId;
        Long userId;
        CompletableFuture<Users> completableFuture;
        // getters and setters omitted for brevity
    }

    private final Queue<Request> queue = new LinkedBlockingQueue<>();

    @PostConstruct
    public void init() {
        ScheduledExecutorService scheduledExecutorService = Executors.newScheduledThreadPool(1);
        scheduledExecutorService.scheduleAtFixedRate(() -> {
            int size = queue.size();
            if (size == 0) {
                return;
            }
            System.out.println("Merged [" + size + "] requests");
            // drain up to MAX_TASK_NUM requests; any overflow stays queued for the next tick
            List<Request> list = new ArrayList<>();
            for (int i = 0; i < size && i < MAX_TASK_NUM; i++) {
                list.add(queue.poll());
            }
            Map<String, Users> response = userService.queryUserByIdBatch(list);
            for (Request request : list) {
                Users result = response.get(request.requestId);
                request.completableFuture.complete(result);
            }
        }, 100, 10, TimeUnit.MILLISECONDS);
    }

    public Users queryUser(Long userId) {
        Request request = new Request();
        request.requestId = UUID.randomUUID().toString().replace("-", "");
        request.userId = userId;
        CompletableFuture<Users> future = new CompletableFuture<>();
        request.completableFuture = future;
        queue.offer(request);
        try {
            return future.get();
        } catch (InterruptedException | ExecutionException e) {
            e.printStackTrace();
        }
        return null;
    }
}
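Stripped of Spring and MyBatis-Plus, the interaction of the three building blocks — a blocking queue, a scheduled flusher, and per-caller futures — fits in a self-contained sketch (all names here are illustrative, not from the article's project):

```java
import java.util.*;
import java.util.concurrent.*;

public class BatchMergeDemo {
    // each caller's request carries its own future for the result
    record Req(long userId, CompletableFuture<String> future) {}

    private final BlockingQueue<Req> queue = new LinkedBlockingQueue<>();
    private final ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);

    public void start() {
        scheduler.scheduleAtFixedRate(() -> {
            List<Req> batch = new ArrayList<>();
            queue.drainTo(batch, 100);      // take at most 100 requests per tick
            if (batch.isEmpty()) return;
            // one "batch lookup" for the whole group (stands in for the SQL IN query)
            for (Req r : batch) r.future().complete("user-" + r.userId());
        }, 10, 10, TimeUnit.MILLISECONDS);
    }

    public String query(long userId) throws Exception {
        CompletableFuture<String> f = new CompletableFuture<>();
        queue.offer(new Req(userId, f));
        return f.get(1, TimeUnit.SECONDS);  // bounded wait instead of a bare get()
    }

    public void stop() { scheduler.shutdown(); }

    public static void main(String[] args) throws Exception {
        BatchMergeDemo demo = new BatchMergeDemo();
        demo.start();
        System.out.println(demo.query(42)); // user-42
        demo.stop();
    }
}
```

Note that `BlockingQueue.drainTo(collection, max)` does atomically what the article's poll loop does by hand.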

Controller layer calling the batch service

@RequestMapping("/merge")
public Callable<Users> merge(Long userId) {
    return new Callable<Users>() {
        @Override
        public Users call() throws Exception {
            return userBatchService.queryUser(userId);
        }
    };
}

A high‑concurrency test is provided using CountDownLatch and RestTemplate to launch 30 threads, each sending three random requests to the /merge endpoint, demonstrating how the batch mechanism reduces the number of actual database calls.

public class TestBatch {
    private static int threadCount = 30;
    private static final CountDownLatch COUNT_DOWN_LATCH = new CountDownLatch(threadCount);
    private static final RestTemplate restTemplate = new RestTemplate();

    public static void main(String[] args) {
        for (int i = 0; i < threadCount; i++) {
            new Thread(() -> {
                // use the latch as a start barrier so all threads fire at once
                COUNT_DOWN_LATCH.countDown();
                try { COUNT_DOWN_LATCH.await(); } catch (InterruptedException e) { e.printStackTrace(); }
                for (int j = 1; j <= 3; j++) {
                    int param = new Random().nextInt(4);
                    if (param <= 0) param++; // keep userId in the range 1..3
                    String responseBody = restTemplate.getForObject(
                        "http://localhost:8080/asyncAndMerge/merge?userId=" + param,
                        String.class);
                    System.out.println(Thread.currentThread().getName() + " param " + param + " response " + responseBody);
                }
            }).start();
        }
    }
}

The article also points out two important issues: Java 8's CompletableFuture lacks a built-in timeout (orTimeout and completeOnTimeout only arrived in Java 9), and the generated SQL statement has a length limit, so a maximum batch size (MAX_TASK_NUM) is enforced.
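On Java 9 and later, the timeout problem has a direct solution: CompletableFuture.orTimeout fails the future after a deadline, and exceptionally supplies a fallback. A minimal sketch (the "fallback" value is illustrative):

```java
import java.util.concurrent.*;

public class TimeoutDemo {
    public static void main(String[] args) throws Exception {
        CompletableFuture<String> f = new CompletableFuture<>();
        // Java 9+: complete f exceptionally if nothing completes it within 100 ms,
        // then map the TimeoutException to a fallback value instead of blocking forever
        CompletableFuture<String> bounded = f
                .orTimeout(100, TimeUnit.MILLISECONDS)
                .exceptionally(ex -> "fallback");
        System.out.println(bounded.get()); // fallback (f was never completed)
    }
}
```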

To solve the timeout problem, a variant using a per‑request LinkedBlockingQueue is presented. Each request carries its own queue; the batch processor puts the result into the queue, and the caller polls the queue with a timeout (e.g., 3 seconds).

public Users queryUser(Long userId) {
    Request request = new Request();
    request.requestId = UUID.randomUUID().toString().replace("-", "");
    request.userId = userId;
    LinkedBlockingQueue<Users> usersQueue = new LinkedBlockingQueue<>();
    request.usersQueue = usersQueue;
    queue.offer(request);
    try {
        // wait at most 3 seconds for the batch processor to deliver a result
        return usersQueue.poll(3000, TimeUnit.MILLISECONDS);
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
    return null;
}
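One caveat the article does not call out: LinkedBlockingQueue rejects null elements, so a "user not found" result cannot be handed back as-is through the per-request queue; the batch processor must enqueue a sentinel instead. A small sketch of the idea (the NOT_FOUND sentinel is hypothetical):

```java
import java.util.concurrent.*;

public class SentinelDemo {
    // hypothetical sentinel object standing in for "no user found"
    static final String NOT_FOUND = "NOT_FOUND";

    public static void main(String[] args) throws Exception {
        LinkedBlockingQueue<String> q = new LinkedBlockingQueue<>();
        // q.offer(null) would throw NullPointerException, so enqueue the sentinel
        q.offer(NOT_FOUND);
        String r = q.poll(100, TimeUnit.MILLISECONDS);
        System.out.println(r == NOT_FOUND ? "no result" : r); // no result
    }
}
```

The caller then translates the sentinel back to null (or to an error) before returning.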

Conclusion

Batch request merging can dramatically reduce the number of connections to a database or remote service, making it suitable for high‑concurrency environments, but it introduces a small waiting latency before the actual business logic runs, so it is not ideal for low‑traffic scenarios.

Tags: Java, concurrency, batch processing, CompletableFuture, Spring Boot, request merging
Written by Code Ape Tech Column

Former Ant Group P8 engineer, pure technologist, sharing full‑stack Java, job interview and career advice through a column. Site: java-family.cn
