
How Many TCP Connections Can One Server Really Handle? A Deep Dive

This article demystifies the common confusion about a server’s maximum concurrent TCP connections, explaining the theoretical limits of the TCP four‑tuple, Linux file‑descriptor restrictions, kernel buffer settings, and demonstrates achieving one million active connections through careful configuration and tuning.


Why many people are confused about concurrency

In network development, many engineers are still unclear on a basic question: how many concurrent network connections can a single server support? This article works through the answer.

A chat about server‑side concurrency

“The TCP connection four‑tuple consists of source IP, source port, destination IP and destination port; changing any one element creates a different connection. For my Nginx, the listening port (80) and IP are fixed, so only the source IP and source port can vary. Theoretically Nginx could therefore accept 2^32 (possible IPv4 addresses) × 2^16 (possible ports) = 2^48 connections, a number in the hundreds of trillions.”
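The four‑tuple arithmetic above can be checked in a couple of lines; this is just the back‑of‑the‑envelope calculation, not a claim about any real deployment:

```python
# Count of distinct TCP four-tuples that can terminate at one listening
# socket (fixed destination IP and port): only the client side varies,
# 2^32 IPv4 addresses x 2^16 source ports.
client_ips = 2 ** 32
client_ports = 2 ** 16

theoretical_max = client_ips * client_ports
print(theoretical_max)                        # 281474976710656
print(f"{theoretical_max / 1e12:.0f} trillion")  # 281 trillion
```

In practice the limit is never this number; it is the file‑descriptor and memory limits discussed next.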
“Each opened file (in Linux everything is a file, including sockets) consumes memory. If a malicious process could open files without limit, it would exhaust the server’s memory. Therefore Linux caps the number of open file descriptors at the system, user and process levels.”

System level: maximum number of open files across the whole system, configurable via fs.file-max

User level: maximum open files per user, configurable in /etc/security/limits.conf

Process level: maximum open files per process, configurable via fs.nr_open
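A process can inspect and adjust its own slice of these limits without root. A minimal sketch using only the Python standard library (the soft limit is what actually caps open sockets; a process may raise it up to its hard limit):

```python
# Read the per-process file-descriptor limit (RLIMIT_NOFILE) and raise
# the soft limit to the hard limit -- no special privileges needed.
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"soft={soft} hard={hard}")

# Raising soft up to hard is always permitted for an unprivileged process.
resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"after: soft={soft}")
```

Raising the hard limit beyond fs.nr_open still requires the system‑level tuning described above.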
“The receive buffer size can be viewed and configured with the sysctl command.”
<code>$ sysctl -a | grep rmem
net.ipv4.tcp_rmem = 4096 87380 8388608
net.core.rmem_default = 212992
net.core.rmem_max = 8388608</code>
“In tcp_rmem the three values are the minimum, default and maximum receive buffer sizes for a TCP connection: 4 kB minimum, roughly 85 kB default, and up to 8 MB maximum.”
“The send buffer size is controlled by net.ipv4.tcp_wmem .”
<code>$ sysctl -a | grep wmem
net.ipv4.tcp_wmem = 4096 65536 8388608
net.core.wmem_default = 212992
net.core.wmem_max = 8388608</code>
“In tcp_wmem the first value is the minimum send buffer size, likewise 4 kB by default, with the same 8 MB maximum.”

Achieving a million connections on the server

“To run the experiment we raised the limits at the system, user and process levels to 1.1 million, slightly above the 1 million target, so that other commands such as ps and vi would still have file descriptors available.”
<code>$ ss -n | grep ESTAB | wc -l
1000024</code>

The machine has 3.9 GB of memory in total, of which the kernel slab uses about 3.2 GB. Free memory and buffers together come to only about 140 MB:

<code>$ cat /proc/meminfo
MemTotal:        3922956 kB
MemFree:           96652 kB
MemAvailable:       6448 kB
Buffers:           44396 kB
...
Slab:          3241244 kB</code>

Using slabtop we can see that the kernel object caches dentry, filp, sock_inode_cache and TCP each hold around one million objects.
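Dividing the slab total by the connection count gives the average kernel‑side cost per connection; the numbers below come straight from the article’s /proc/meminfo and ss output:

```python
# Average kernel memory per established (mostly idle) connection,
# computed from the article's measurements.
slab_kb = 3_241_244        # Slab: line from /proc/meminfo
connections = 1_000_000    # ESTAB count from ss

per_conn_kb = slab_kb / connections
print(f"~{per_conn_kb:.1f} kB of kernel memory per idle connection")
```

Roughly 3.2 kB per idle connection, spread across the dentry, filp, sock_inode_cache and TCP caches, which is why 1 million connections nearly fill a 3.9 GB machine.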

Conclusion

High concurrency is a key characteristic of internet backend services. After reading this article you should have a clear understanding of how many TCP connections a single server can support and how to tune the system to reach that limit.

Tags: Concurrency, TCP, Linux, sysctl, server tuning
Written by

Efficient Ops

This public account is maintained by Xiaotianguo and friends and regularly publishes widely read original technical articles. We focus on the evolution of operations work and aim to accompany you throughout your operations career.
