
Understanding How Many Concurrent TCP Connections a Server Can Actually Support

This article explains the true limits of concurrent TCP connections on a server, debunks common misconceptions about port numbers, details the TCP four‑tuple theory, outlines Linux file‑descriptor restrictions, shows how to tune kernel buffers with sysctl, and shares a real‑world test achieving one million active connections.

Architecture Digest

In network development, many developers are unsure how many simultaneous connections a single server can actually handle. The common belief that the limit is 65,535 (the number of TCP ports), as well as the theoretical ~2.8 × 10^14 connections derived from the TCP four-tuple, is examined and clarified below.

A TCP connection is identified by a four-tuple: source IP, source port, destination IP, and destination port. With the server's IP and port fixed (e.g., Nginx listening on port 80), only the client IP and client port can vary, giving a theoretical maximum of 2^32 × 2^16 = 2^48 ≈ 2.8 × 10^14 connections.
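That ceiling is simple arithmetic and can be checked in the shell:

```shell
# Four-tuple ceiling with a fixed server IP:port:
# 2^32 possible client IPs x 2^16 possible client ports
echo $(( 2**32 * 2**16 ))   # 281474976710656, i.e. ~2.8 x 10^14
```

In practice each individual client is further constrained by its ephemeral port range (net.ipv4.ip_local_port_range), so the four-tuple number is purely theoretical.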

However, every open file (and a socket is a file) consumes kernel memory. Linux limits the number of open file descriptors at three levels:

System-wide limit (fs.file-max)

User-level limit (configured in /etc/security/limits.conf)

Process-level limit (fs.nr_open)

These limits must be raised to allow a very large number of connections.
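A minimal sketch of inspecting and raising all three limits (the value 1,100,000 is an example, not a recommendation; writing sysctls requires root):

```shell
# Inspect the current limits
cat /proc/sys/fs/file-max   # system-wide cap on open files
cat /proc/sys/fs/nr_open    # per-process ceiling for RLIMIT_NOFILE
ulimit -n                   # soft limit of the current shell

# Raise the kernel-side limits (root required)
sysctl -w fs.file-max=1100000
sysctl -w fs.nr_open=1100000

# Raise the per-user limit in /etc/security/limits.conf:
#   *  soft  nofile  1000000
#   *  hard  nofile  1000000
```

Note that fs.nr_open must be at least as large as the nofile value, or setting the user limit will fail.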

The size of the TCP receive and send buffers can be inspected and tuned with sysctl:

$ sysctl -a | grep rmem
net.ipv4.tcp_rmem = 4096 87380 8388608
net.core.rmem_default = 212992
net.core.rmem_max = 8388608

The three values of tcp_rmem are the minimum, default, and maximum receive buffer per socket: here 4 KB, ~85 KB, and 8 MB. The send buffer is controlled analogously by net.ipv4.tcp_wmem:

$ sysctl -a | grep wmem
net.ipv4.tcp_wmem = 4096 65536 8388608
net.core.wmem_default = 212992
net.core.wmem_max = 8388608
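To fit more connections into the same amount of RAM, the per-socket minimum and default buffers can be shrunk. The values below are illustrative only; smaller buffers reduce per-connection memory at the cost of throughput, so measure before adopting them:

```shell
# Shrink min/default/max buffers per socket (illustrative values, root required)
sysctl -w net.ipv4.tcp_rmem='1024 1024 2097152'
sysctl -w net.ipv4.tcp_wmem='1024 1024 2097152'

# Persist across reboots by adding the same keys to /etc/sysctl.conf
```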

After adjusting these parameters and increasing the file‑descriptor limits, the author performed a test targeting 1,000,000 concurrent connections. The command $ ss -n | grep ESTAB | wc -l reported 1,000,024 established connections.

Memory usage showed that out of 3.9 GB total RAM, the kernel slab consumed about 3.2 GB, leaving only ~100 MB free, indicating the heavy cost of maintaining a large number of socket structures.
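From those figures, the per-connection kernel cost works out with simple shell arithmetic (using the article's own numbers):

```shell
# ~3.2 GB of slab memory divided across ~1,000,000 established sockets
echo $(( 3200 * 1024 * 1024 / 1000000 ))   # 3355, i.e. ~3.3 KB per idle connection
```

Idle connections stay this cheap because socket buffers are allocated on demand rather than reserved up front.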

Tools like slabtop revealed that kernel objects such as sock_inode_cache and TCP each held roughly one million entries.

In conclusion, while the theoretical limit of TCP connections is extremely high, practical limits are governed by Linux file‑descriptor caps, kernel memory consumption, and socket buffer settings. Proper tuning of these parameters enables a server to handle hundreds of thousands to millions of concurrent connections.

Tags: Concurrency, TCP, Linux, server performance, sysctl, file descriptors
Written by Architecture Digest

Focusing on Java backend development, covering application architecture from top-tier internet companies (high availability, high performance, high stability), big data, machine learning, Java architecture, and other popular fields.
