
Deep Dive into Spring Boot 2.7.10 Embedded Tomcat Configuration, Thread Management and Performance Testing

This article provides a comprehensive analysis of Spring Boot 2.7.10's embedded Tomcat 9.0.73: its default settings, core parameters, internal thread architecture, configuration examples, and performance tests at various levels of concurrent connections, with detailed code snippets throughout.


The author, a senior architect, shares an in‑depth exploration of Spring Boot 2.7.10’s embedded Tomcat (version 9.0.73), covering default settings, key tuning parameters, internal thread design, and practical testing results.

Overview

In Spring Boot 2.7.10 the built‑in Tomcat defaults are:

Connection waiting queue length: 100

Maximum connections: 8192

Minimum worker threads: 10

Maximum worker threads: 200

Connection timeout: 20 seconds

Configuration Example

server:
  tomcat:
    # when all possible request‑handling threads are busy, the maximum queue length for incoming connections
    accept-count: 100
    # maximum number of connections the server can accept and process at any time
    max-connections: 8192
    threads:
      # minimum number of worker threads created at initialization
      min-spare: 10
      # maximum number of worker threads (a common rule of thumb: ~10× CPU cores for IO-bound work, CPU cores + 1 for CPU-bound work)
      max: 200
    connection-timeout: 20000
    keep-alive-timeout: 20000
    max-keep-alive-requests: 100

Core Parameters Explained

AcceptCount

Defines the capacity of the OS-level queue of completed connections waiting to be accepted; the value is passed straight through as the backlog argument when the server socket is bound. On Linux, the effective queue length is additionally capped by the kernel's net.core.somaxconn setting.

serverSock = ServerSocketChannel.open();
socketProperties.setProperties(serverSock.socket());
InetSocketAddress addr = new InetSocketAddress(getAddress(), getPortWithOffset());
// bind with the configured accept‑count
serverSock.socket().bind(addr, getAcceptCount());
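The same backlog semantics can be observed with a plain ServerSocket. The sketch below (a hypothetical standalone demo, not Tomcat code) binds with an explicit backlog exactly as the endpoint does with getAcceptCount():

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.ServerSocket;

// Standalone sketch: bind a server socket with an explicit backlog,
// mirroring what the endpoint does with getAcceptCount().
public class BacklogBindDemo {
    static final int ACCEPT_COUNT = 100; // mirrors Tomcat's default accept-count

    public static int bindAndReportPort() throws IOException {
        ServerSocket server = new ServerSocket();
        // The second bind argument is the backlog: the OS-level queue of
        // completed connections waiting for accept(). Linux caps it at
        // net.core.somaxconn.
        server.bind(new InetSocketAddress("127.0.0.1", 0), ACCEPT_COUNT);
        int port = server.getLocalPort();
        server.close();
        return port;
    }

    public static void main(String[] args) throws IOException {
        System.out.println("bound on port " + bindAndReportPort());
    }
}
```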

MaxConnections

Enforced in Acceptor.java: when the maximum number of connections is reached, the acceptor thread blocks until a slot is freed.

public void run() {
    while (!stopCalled) {
        // if we have reached the max connections, wait
        connectionLimitLatch.countUpOrAwait();
        // accept the next incoming connection
        socket = endpoint.serverSocketAccept();
        // ...
    }
}
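The latch behavior can be modeled with a standard Semaphore: countUpOrAwait() corresponds to acquire() and countDown() to release(). This is a simplified stand-in for Tomcat's LimitLatch, not the real class:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

// Simplified model of the connection limit: a Semaphore stands in for
// Tomcat's LimitLatch (countUpOrAwait ~ acquire, countDown ~ release).
public class ConnectionLimitDemo {
    public static boolean thirdConnectionBlocks() throws InterruptedException {
        Semaphore connectionLimit = new Semaphore(2); // max-connections = 2
        connectionLimit.acquire(); // connection 1 accepted
        connectionLimit.acquire(); // connection 2 accepted

        CountDownLatch done = new CountDownLatch(1);
        Thread acceptor = new Thread(() -> {
            try {
                connectionLimit.acquire(); // connection 3 must wait for a slot
                done.countDown();
            } catch (InterruptedException ignored) { }
        });
        acceptor.start();

        // The acceptor stays blocked while both slots are held...
        boolean blockedWhileFull = !done.await(200, TimeUnit.MILLISECONDS);
        connectionLimit.release(); // one connection closes, freeing a slot
        boolean proceededAfterRelease = done.await(2, TimeUnit.SECONDS);
        acceptor.join();
        return blockedWhileFull && proceededAfterRelease;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("third connection waited: " + thirdConnectionBlocks());
    }
}
```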

MinSpareThread / MaxThread

Configured in AbstractEndpoint.createExecutor() and backed by a custom ThreadPoolExecutor/TaskQueue pair that, unlike the JDK default, grows the pool up to the maximum thread count before queueing tasks.

public void createExecutor() {
    internalExecutor = true;
    TaskQueue taskqueue = new TaskQueue();
    TaskThreadFactory tf = new TaskThreadFactory(getName() + "-exec-", daemon, getThreadPriority());
    executor = new ThreadPoolExecutor(getMinSpareThreads(), getMaxThreads(), 60, TimeUnit.SECONDS, taskqueue, tf);
    taskqueue.setParent((ThreadPoolExecutor) executor);
}
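The effect of the custom queue can be illustrated with a small re-implementation (the GrowFirstQueue class below is hypothetical, not Tomcat's TaskQueue): by reporting itself full while the pool can still grow, it forces the executor to spawn threads up to max before tasks are queued.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Sketch of the TaskQueue trick: offer() pretends the queue is full while the
// pool can still grow, so the executor creates threads up to max first.
public class GrowingQueueDemo {
    static class GrowFirstQueue extends LinkedBlockingQueue<Runnable> {
        volatile ThreadPoolExecutor parent;
        @Override
        public boolean offer(Runnable r) {
            if (parent != null && parent.getPoolSize() < parent.getMaximumPoolSize()) {
                return false; // "full": forces the executor to spawn a new thread
            }
            return super.offer(r); // pool is at max: actually queue the task
        }
    }

    public static int poolSizeAfterThreeBlockingTasks() throws InterruptedException {
        GrowFirstQueue queue = new GrowFirstQueue();
        ThreadPoolExecutor executor =
                new ThreadPoolExecutor(1, 3, 60, TimeUnit.SECONDS, queue);
        queue.parent = executor;

        CountDownLatch started = new CountDownLatch(3);
        CountDownLatch release = new CountDownLatch(1);
        for (int i = 0; i < 3; i++) {
            executor.execute(() -> {
                started.countDown();
                try { release.await(); } catch (InterruptedException ignored) { }
            });
        }
        started.await(); // all three run concurrently: the pool grew past core=1
        int poolSize = executor.getPoolSize();
        release.countDown();
        executor.shutdown();
        return poolSize;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("pool size: " + poolSizeAfterThreeBlockingTasks());
    }
}
```

With a plain LinkedBlockingQueue, the same submission pattern would leave the pool at its core size of 1 and queue the other two tasks.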

MaxKeepAliveRequests

Controls how many requests a persistent connection may serve before the server actively closes it. Setting it to 1 disables keep-alive (one request per connection); -1 allows unlimited requests.

ConnectionTimeout & KeepAliveTimeout

Both default to 20 seconds. ConnectionTimeout is how long Tomcat waits, after accepting a connection, for the request line to arrive before closing it; KeepAliveTimeout is how long an idle persistent connection stays open between requests (when unset it inherits the ConnectionTimeout value).
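The connection-timeout behavior can be approximated with plain sockets and SO_TIMEOUT: if a client connects but sends nothing, the server's read times out, much as Tomcat drops a connection whose request never arrives. This is an analogy sketch (a 200 ms timeout stands in for the 20 s default), not Tomcat's implementation:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.ServerSocket;
import java.net.Socket;
import java.net.SocketTimeoutException;

// Analogy for connectionTimeout: an idle client trips the server's read
// timeout, at which point a real server would close the connection.
public class IdleReadTimeoutDemo {
    public static boolean idleClientTimesOut() throws IOException {
        try (ServerSocket server = new ServerSocket(0)) {
            Socket client = new Socket();
            client.connect(new InetSocketAddress("127.0.0.1", server.getLocalPort()), 1000);
            try (Socket accepted = server.accept()) {
                accepted.setSoTimeout(200); // 200 ms stand-in for the 20 s default
                try {
                    accepted.getInputStream().read(); // client sends nothing...
                    return false;
                } catch (SocketTimeoutException expected) {
                    return true; // idle connection detected; Tomcat would close it here
                }
            } finally {
                client.close();
            }
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println("idle client timed out: " + idleClientTimesOut());
    }
}
```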

Internal Thread Model

Acceptor

Accepts socket connections, configures them via setSocketOptions(), and registers them with the poller.

public void run() {
    while (!stopCalled) {
        socket = endpoint.serverSocketAccept();
        endpoint.setSocketOptions(socket);
        poller.register(socketWrapper);
    }
}

Poller

Continuously polls the NIO selector, extracts ready keys and dispatches them to the executor thread pool.

public void run() {
    while (true) {
        Iterator<SelectionKey> iterator = selector.selectedKeys().iterator();
        while (iterator.hasNext()) {
            SelectionKey sk = iterator.next();
            iterator.remove();
            NioSocketWrapper socketWrapper = (NioSocketWrapper) sk.attachment();
            if (socketWrapper != null) {
                processKey(sk, socketWrapper);
                executor.execute(new SocketProcessor(socketWrapper, SocketEvent.OPEN_READ));
            }
        }
    }
}
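A compressed, single-threaded sketch of the accept-and-poll cycle follows. Here one selector plays both the Acceptor and Poller roles; real Tomcat runs them on separate threads and dispatches ready sockets to the worker pool:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.nio.charset.StandardCharsets;
import java.util.Iterator;

// Minimal NIO loop: accept a connection, register it for reads, then read
// the first message. One selector does what Tomcat splits across threads.
public class MiniPollerDemo {
    public static String receiveOnce(String message) throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress("127.0.0.1", 0));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        try (SocketChannel client = SocketChannel.open(
                new InetSocketAddress("127.0.0.1", server.socket().getLocalPort()))) {
            client.write(ByteBuffer.wrap(message.getBytes(StandardCharsets.UTF_8)));

            ByteBuffer received = ByteBuffer.allocate(256);
            while (true) {
                selector.select();
                Iterator<SelectionKey> it = selector.selectedKeys().iterator();
                while (it.hasNext()) {
                    SelectionKey key = it.next();
                    it.remove();
                    if (key.isAcceptable()) {
                        // Acceptor role: accept and register for reads
                        SocketChannel ch = server.accept();
                        ch.configureBlocking(false);
                        ch.register(selector, SelectionKey.OP_READ);
                    } else if (key.isReadable()) {
                        // Poller role: a real server would dispatch to a worker here
                        SocketChannel ch = (SocketChannel) key.channel();
                        if (ch.read(received) > 0) {
                            received.flip();
                            String text = StandardCharsets.UTF_8.decode(received).toString();
                            ch.close();
                            server.close();
                            selector.close();
                            return text;
                        }
                    }
                }
            }
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println("received: " + receiveOnce("ping"));
    }
}
```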

TomcatThreadPoolExecutor

A custom extension of java.util.concurrent.ThreadPoolExecutor that tracks submitted tasks and provides a forced‑offer mechanism for overload situations.

public class ThreadPoolExecutor extends java.util.concurrent.ThreadPoolExecutor {
    private final AtomicInteger submittedCount = new AtomicInteger(0);
    @Override
    protected void afterExecute(Runnable r, Throwable t) {
        if (!(t instanceof StopPooledThreadException)) {
            submittedCount.decrementAndGet();
        }
    }
    @Override
    public void execute(Runnable command) {
        execute(command, 0, TimeUnit.MILLISECONDS);
    }
    // overload carrying the timeout used when forcing tasks into the queue
    public void execute(Runnable command, long timeout, TimeUnit unit) {
        submittedCount.incrementAndGet();
        try {
            super.execute(command);
        } catch (RejectedExecutionException rx) {
            // force task into queue if possible
            if (super.getQueue() instanceof TaskQueue) {
                TaskQueue queue = (TaskQueue) super.getQueue();
                try {
                    if (!queue.force(command, timeout, unit)) {
                        submittedCount.decrementAndGet();
                        throw new RejectedExecutionException("threadPoolExecutor.queueFull");
                    }
                } catch (InterruptedException x) {
                    submittedCount.decrementAndGet();
                    throw new RejectedExecutionException(x);
                }
            } else {
                submittedCount.decrementAndGet();
                throw rx;
            }
        }
    }
}

Testing Scenarios

Sample configuration used for testing:

server:
  port: 8080
  tomcat:
    accept-count: 3
    max-connections: 6
    threads:
      min-spare: 2
      max: 3

Use ss -nltp to view the length of the full (accept) queue and ss -ntp to inspect individual connection states. The tests show how the server behaves as the number of concurrent connections exceeds the configured limits (6, 9, 10, 11, and 12 connections). Once the limit is surpassed, new connections remain in SYN_RECV on the server side and SYN_SENT on the client side, eventually timing out after the 20-second handshake timeout.
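The queue-overflow effect can be reproduced with plain sockets, no Tomcat required. In this hypothetical sketch, a server that never calls accept() stands in for a saturated Tomcat; clients that fit in the kernel's accept queue connect immediately, while surplus clients hang in the handshake until their connect timeout fires, matching the SYN_RECV / SYN_SENT states seen with ss:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.ServerSocket;
import java.net.Socket;

// Reproduce queue overflow: a never-accepting server with a tiny backlog
// lets only the first few clients through; the rest time out connecting.
public class OverflowDemo {
    public static int successfulConnects(int backlog, int attempts) throws IOException {
        int ok = 0;
        try (ServerSocket server = new ServerSocket(0, backlog)) { // accept-count analogue
            Socket[] clients = new Socket[attempts];
            for (int i = 0; i < attempts; i++) {
                clients[i] = new Socket();
                try {
                    // Short timeout so clients that don't fit in the queue give up fast
                    clients[i].connect(
                            new InetSocketAddress("127.0.0.1", server.getLocalPort()), 250);
                    ok++;
                } catch (IOException queueLikelyFull) {
                    // connect timed out: the kernel accept queue is full
                }
            }
            for (Socket c : clients) { c.close(); }
        }
        return ok;
    }

    public static void main(String[] args) throws IOException {
        System.out.println("connected: " + successfulConnects(3, 10) + " of 10");
    }
}
```

Exact counts are OS-dependent (Linux, for instance, typically admits backlog + 1 connections), so treat the numbers as illustrative rather than portable.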



Written by Top Architect

Top Architect focuses on sharing practical architecture knowledge, covering enterprise, system, website, large-scale distributed, and high-availability architectures, plus architecture adjustments using internet technologies. Idea-driven, sharing-oriented architects are welcome to exchange and learn together.