Understanding Tomcat Configuration in Spring Boot 2.7.10: Parameters, Thread Pools, and Connection Limits
This article explains the default Tomcat settings in Spring Boot 2.7.10, detailing connection queues, thread pool parameters, timeout configurations, and internal threading architecture, and provides practical examples and testing results for various concurrent connection scenarios.
Each Spring Boot version bundles a specific Tomcat version; this article uses Spring Boot 2.7.10 with its embedded Tomcat 9.0.73 as an example.
Overview
In Spring Boot 2.7.10 the default Tomcat settings are:
Connection waiting queue length (acceptCount): 100
Maximum connections (maxConnections): 8192
Minimum worker threads (minSpareThreads): 10
Maximum worker threads (maxThreads): 200
Connection timeout: 20 seconds
Relevant configuration keys and their default values are shown below:
server:
  tomcat:
    # Maximum queue length when all request handling threads are busy
    accept-count: 100
    # Maximum number of connections the server accepts and processes at any time
    max-connections: 8192
    threads:
      # Minimum number of worker threads kept alive at startup
      min-spare: 10
      # Maximum number of worker threads (IO-intensive workloads often use 10× CPU cores)
      max: 200
    # Time to wait for the request line after a connection is accepted
    connection-timeout: 20000
    # Time to wait for another HTTP request before closing a keep-alive connection
    keep-alive-timeout: 20000
    # Maximum number of HTTP requests that can be pipelined on a keep-alive connection
    max-keep-alive-requests: 100

Architecture Diagram
When the number of active connections exceeds maxConnections + acceptCount + 1, new requests are not refused immediately; instead the TCP three-way handshake never completes, and the request fails once the client's timeout (or Tomcat's 20 s) expires.
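This stalled-handshake behaviour is easy to reproduce outside Tomcat with plain java.net sockets. The sketch below is our own demo, not Tomcat code: it binds a ServerSocket with a backlog of 1 and never calls accept(), so once the accept queue fills, later connect() attempts hang in the handshake until the client-side timeout fires. All names and values here are illustrative.

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.net.SocketAddress;
import java.util.ArrayList;
import java.util.List;

public class BacklogDemo {

    // Connect up to 'attempts' sockets with the given client timeout and
    // return how many handshakes completed before one timed out.
    static int connectUntilTimeout(SocketAddress addr, int attempts, int timeoutMs) {
        List<Socket> held = new ArrayList<>();
        int completed = 0;
        for (int i = 0; i < attempts; i++) {
            Socket s = new Socket();
            try {
                s.connect(addr, timeoutMs);
                held.add(s); // keep the connection open so it occupies a queue slot
                completed++;
            } catch (IOException timedOut) {
                break; // the SYN went unanswered within timeoutMs
            }
        }
        for (Socket s : held) {
            try { s.close(); } catch (IOException ignored) {}
        }
        return completed;
    }

    static int runDemo() {
        // backlog of 1, and accept() is never called, so the queue fills fast
        try (ServerSocket server = new ServerSocket(0, 1)) {
            return connectUntilTimeout(server.getLocalSocketAddress(), 10, 500);
        } catch (IOException e) {
            return -1;
        }
    }

    public static void main(String[] args) {
        int ok = runDemo();
        System.out.println(ok + " of 10 connects completed; the rest stalled like SYN_SENT clients");
    }
}
```

On a typical Linux host only the first two or three connects complete; the rest behave exactly like the overloaded-Tomcat case described above.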
TCP Three‑Way Handshake and Four‑Way Close
Sequence Diagram
Key Parameters
AcceptCount
The size of the connection backlog queue, passed as the backlog argument when the server socket is bound; on Linux the effective value is the smaller of net.core.somaxconn and the configured value. Windows has no system-level counterpart.
serverSock = ServerSocketChannel.open();
socketProperties.setProperties(serverSock.socket());
InetSocketAddress addr = new InetSocketAddress(getAddress(), getPortWithOffset());
// bind with the configured acceptCount as backlog
serverSock.socket().bind(addr, getAcceptCount());

MaxConnections
// In the Acceptor thread run method
while (!stopCalled) {
// If we have reached the max connections, wait
connectionLimitLatch.countUpOrAwait();
// Accept the next incoming connection from the server socket
socket = endpoint.serverSocketAccept();
// socket.close will release and call connectionLimitLatch.countDown();
}

MinSpareThread / MaxThread
public void createExecutor() {
internalExecutor = true;
TaskQueue taskqueue = new TaskQueue();
TaskThreadFactory tf = new TaskThreadFactory(getName() + "-exec-", daemon, getThreadPriority());
// Tomcat‑enhanced thread pool
executor = new ThreadPoolExecutor(getMinSpareThreads(), getMaxThreads(), 60, TimeUnit.SECONDS, taskqueue, tf);
taskqueue.setParent((ThreadPoolExecutor) executor);
}

Key point: Tomcat extends the standard JDK thread pool to change the order of task handling from minThreads → queue → maxThreads → Exception (standard JDK) to minThreads → maxThreads → queue → Exception.
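The essence of that reordering can be shown with the JDK pool alone. The sketch below uses our own class names, not Tomcat's: a queue whose offer() reports "full" while the pool can still grow forces the JDK executor to create threads up to maxThreads before it starts queueing, which is the same trick Tomcat's TaskQueue plays (Tomcat's real implementation additionally tracks a submitted-task count).

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class EagerPoolDemo {

    static class EagerQueue extends LinkedBlockingQueue<Runnable> {
        volatile ThreadPoolExecutor parent;

        @Override
        public boolean offer(Runnable r) {
            // More workers can still be started: pretend the queue is full so
            // the executor spawns a new thread instead of queueing the task.
            if (parent != null && parent.getPoolSize() < parent.getMaximumPoolSize()) {
                return false;
            }
            return super.offer(r);
        }
    }

    // Submit 'tasks' blocking tasks and report the resulting pool size.
    static int poolSizeAfter(int core, int max, int tasks) throws InterruptedException {
        EagerQueue queue = new EagerQueue();
        ThreadPoolExecutor pool = new ThreadPoolExecutor(core, max, 60, TimeUnit.SECONDS, queue);
        queue.parent = pool;
        CountDownLatch release = new CountDownLatch(1);
        for (int i = 0; i < tasks; i++) {
            pool.execute(() -> {
                try { release.await(); } catch (InterruptedException ignored) {}
            });
        }
        int size = pool.getPoolSize();
        release.countDown();
        pool.shutdown();
        return size;
    }

    public static void main(String[] args) throws InterruptedException {
        // A plain LinkedBlockingQueue would leave the pool at core size (1)
        // and queue the other three tasks; the eager queue grows to 4 first.
        System.out.println("pool size after 4 blocking tasks: " + poolSizeAfter(1, 4, 4));
    }
}
```

With core = 1 and max = 4, four blocking tasks grow the pool to four threads before anything is queued; a fifth and sixth task would then land in the queue, matching the minThreads → maxThreads → queue order described above.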
MaxKeepAliveRequests
After the configured number of keep‑alive requests, the server will actively close the connection. Setting this value to 0 or 1 disables keep‑alive and pipelining; setting it to -1 allows unlimited keep‑alive requests.
// NioEndpoint.setSocketOptions
socketWrapper.setKeepAliveLeft(NioEndpoint.this.getMaxKeepAliveRequests());

// Http11Processor.service
public SocketState service(SocketWrapperBase<?> socketWrapper) {
keepAlive = true;
while (!getErrorState().isError() && keepAlive && !isAsync() && upgradeToken == null &&
sendfileState == SendfileState.DONE && !protocol.isPaused()) {
// default 100
int maxKeepAliveRequests = protocol.getMaxKeepAliveRequests();
if (maxKeepAliveRequests == 1) {
keepAlive = false;
} else if (maxKeepAliveRequests > 0 && socketWrapper.decrementKeepAlive() <= 0) {
keepAlive = false;
}
}
}

ConnectionTimeout
The time to wait for the request line once a connection has been established: if no request arrives within connectionTimeout (default 20 000 ms in Tomcat 9), the server closes the connection.
KeepAliveTimeout
Time to wait for another HTTP request before closing a keep‑alive connection. If not set, connectionTimeout is used; a value of -1 disables the timeout.
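The fallback rule just described can be captured in a few lines. The helper below is our own illustration, not Tomcat code; the UNSET sentinel is an assumption for the demo.

```java
public class KeepAliveTimeoutRule {
    static final int UNSET = Integer.MIN_VALUE; // sentinel for "not configured"

    // keepAliveTimeout falls back to connectionTimeout when unset;
    // a negative value (-1) means the connection never times out.
    static String effectiveKeepAlive(int keepAliveTimeout, int connectionTimeout) {
        int effective = (keepAliveTimeout == UNSET) ? connectionTimeout : keepAliveTimeout;
        return effective < 0 ? "disabled (wait forever)" : effective + " ms";
    }

    public static void main(String[] args) {
        System.out.println(effectiveKeepAlive(UNSET, 20000)); // falls back to connectionTimeout
        System.out.println(effectiveKeepAlive(-1, 20000));    // keep-alive never times out
    }
}
```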
Internal Threads
Acceptor
The Acceptor receives socket connections, wraps them into NioSocketWrapper , and registers them with the Poller.
public void run() {
while (!stopCalled) {
// Wait for the next request
socket = endpoint.serverSocketAccept();
// Register socket with Poller
endpoint.setSocketOptions(socket);
// Add to Poller event queue
poller.register(socketWrapper);
}
}

Poller
The Poller checks the NIO selector for ready events and dispatches them to the Executor thread pool.
public void run() {
while (true) {
Iterator<SelectionKey> iterator = keyCount > 0 ? selector.selectedKeys().iterator() : null;
while (iterator != null && iterator.hasNext()) {
SelectionKey sk = iterator.next();
iterator.remove();
NioSocketWrapper socketWrapper = (NioSocketWrapper) sk.attachment();
if (socketWrapper != null) {
processKey(sk, socketWrapper);
executor.execute(new SocketProcessor(socketWrapper, SocketEvent.OPEN_READ)); // or SocketEvent.OPEN_WRITE
}
}
}
}

TomcatThreadPoolExecutor
Tomcat’s custom thread pool extends java.util.concurrent.ThreadPoolExecutor with a more efficient getSubmittedCount() and a custom TaskQueue that can force tasks into the queue when the pool is saturated.
public class ThreadPoolExecutor extends java.util.concurrent.ThreadPoolExecutor {
private final AtomicInteger submittedCount = new AtomicInteger(0);
@Override
protected void afterExecute(Runnable r, Throwable t) {
if (!(t instanceof StopPooledThreadException)) {
submittedCount.decrementAndGet();
}
}
@Override
public void execute(Runnable command) {
submittedCount.incrementAndGet();
try {
super.execute(command);
} catch (RejectedExecutionException rx) {
if (super.getQueue() instanceof TaskQueue) {
TaskQueue queue = (TaskQueue) super.getQueue();
try {
if (!queue.force(command, timeout, unit)) { // timeout/unit come from the execute(Runnable, long, TimeUnit) overload
submittedCount.decrementAndGet();
throw new RejectedExecutionException("threadPoolExecutor.queueFull");
}
} catch (InterruptedException x) {
submittedCount.decrementAndGet();
throw new RejectedExecutionException(x);
}
} else {
submittedCount.decrementAndGet();
throw rx;
}
}
}
// TaskQueue implementation omitted for brevity
}

Testing Scenarios
Example configuration used for testing:
server:
  port: 8080
  tomcat:
    accept-count: 3
    max-connections: 6
    threads:
      min-spare: 2
      max: 3

Using ss -nltp to view the connection queue length and capacity, the following observations were made:
With 6 concurrent connections the system behaved as expected.
At 9, 10, and 11 concurrent connections the server started to leave connections in SYN_RECV state, indicating the accept queue was full.
Clients then remained in SYN_SENT state until the 20 s kernel timeout expired.
If the client sets its own timeout, the smaller of the client timeout and the server’s three‑way‑handshake timeout will determine when the request fails.