
Understanding the Maximum Number of TCP Connections a Server Can Support

This article explains the true limits of concurrent TCP connections on a server, debunks common misconceptions about port numbers and TCP four‑tuple space, and shows how Linux file‑descriptor and socket buffer settings affect real‑world scalability, illustrated with a million‑connection experiment.


Many developers are confused about how many network connections a single server can actually support. Some assume the limit is 65,535 because TCP port numbers are 16-bit, while others claim the TCP four‑tuple space allows over two hundred trillion (2^48) connections.

"The TCP four‑tuple consists of source IP, source port, destination IP, and destination port. Changing any element creates a completely different connection. Using Nginx on port 80 with a fixed IP, the only variables are source IP and source port, so theoretically the server can establish 2^32 (IP addresses) × 2^16 (ports) connections, which is more than two hundred trillion."
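The figure in that quote can be checked with simple arithmetic (a back-of-envelope calculation, not from the original text): with the server's IP and port fixed, a connection is distinguished only by the 32-bit client IP and 16-bit client port.

```shell
# Four-tuple space with server IP and port fixed:
# 2^32 client IPs x 2^16 client ports = 2^48 combinations
echo $(( 2**32 * 2**16 ))   # 281474976710656, i.e. ~281 trillion
```

So "more than two hundred trillion" is the size of the theoretical address space, not a practical limit.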

In Linux, each open file (and a socket is a file) consumes kernel memory. To prevent a runaway process from exhausting resources, the OS caps the number of open file descriptors at three levels:

System‑wide limit: configurable via the fs.file-max kernel parameter.

User‑level limit: set in /etc/security/limits.conf.

Process‑level limit: the soft limit is adjusted with ulimit -n, and the fs.nr_open parameter sets the hard ceiling it cannot exceed.
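All three levels can be inspected without changing anything (a sketch; the values printed on any given machine will differ):

```shell
# System-wide ceiling on open files across all processes
cat /proc/sys/fs/file-max

# Per-process hard ceiling that limits.conf / ulimit -n cannot exceed
cat /proc/sys/fs/nr_open

# Current soft limit for this shell's processes
ulimit -n
```

To reach a million connections, all three must be raised above the target, since the smallest of them wins.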

The receive buffer size for TCP sockets can be inspected and tuned with sysctl:

$ sysctl -a | grep rmem
net.ipv4.tcp_rmem = 4096 87380 8388608
net.core.rmem_default = 212992
net.core.rmem_max = 8388608

The three values of tcp_rmem are the minimum, default, and maximum receive buffer sizes per socket — here 4 KB, ~85 KB, and 8 MB. Similarly, the send buffer size is controlled by net.ipv4.tcp_wmem:

$ sysctl -a | grep wmem
net.ipv4.tcp_wmem = 4096 65536 8388608
net.core.wmem_default = 212992
net.core.wmem_max = 8388608
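These buffer settings matter because they are paid per socket once data flows. As a rough, hypothetical worst case (my arithmetic, not a measurement from the article): if every one of a million sockets filled its 87380-byte default receive buffer, the receive buffers alone would need:

```shell
# 1,000,000 sockets x 87380-byte default receive buffer, in GiB
echo $(( 1000000 * 87380 / 1024 / 1024 / 1024 ))   # ~81 GiB
```

This is why idle connections are cheap (near the 4 KB minimum) while connections actively carrying data are not.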

To demonstrate real‑world scalability, an experiment was performed to achieve one million established TCP connections. After raising the file‑descriptor limits to 1.1 million, the following command confirmed the count:

$ ss -n | grep ESTAB | wc -l
1000024

The host had 3.9 GB of RAM, with the kernel slab consuming about 3.2 GB, leaving only ~100 MB of free memory. Memory details were obtained via /proc/meminfo:

$ cat /proc/meminfo
MemTotal:        3922956 kB
MemFree:           96652 kB
MemAvailable:       6448 kB
Buffers:           44396 kB
... 
Slab:            3241244 kB

In slabtop, the kernel caches sock_inode_cache and TCP each showed around one million objects, confirming that the slab memory was going to the massive number of sockets.
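Dividing the observed slab usage by the connection count gives a rough per-connection cost (a back-of-envelope estimate from the numbers above, not a figure reported by the kernel):

```shell
# 3241244 kB of slab spread across 1000024 established connections,
# converted to bytes per connection
echo $(( 3241244 * 1024 / 1000024 ))   # ~3318 bytes, i.e. ~3.3 KB each
```

At roughly 3.3 KB per idle connection, a million established sockets fit in a few gigabytes of kernel memory, which matches the experiment.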

Conclusion

High concurrency is a hallmark of backend services. Understanding the theoretical limits, Linux file‑descriptor constraints, and socket buffer configurations is essential for building systems that can approach the maximum number of TCP connections a server can sustain.

Tags: Performance, Concurrency, TCP, Linux, Server, sysctl, file descriptors
Written by Refining Core Development Skills

Fei has over 10 years of development experience at Tencent and Sogou. Through this account, he shares his deep insights on performance.
