How Many Files and TCP Connections Can a Linux Server Actually Handle?
This article explains the Linux kernel parameters that limit the number of open files and TCP connections, shows how to adjust them, estimates realistic connection counts based on memory, and discusses client‑side port constraints and related practical issues.
In Linux, everything is treated as a file, so the maximum number of open files (including sockets) on a server is governed by three related limits: fs.file-max (the system-wide ceiling), fs.nr_open (the per-process ceiling), and the per-user soft and hard nofile limits. These values are coupled: a process's soft nofile cannot exceed its hard nofile, and the hard nofile cannot exceed fs.nr_open, so raising one often requires raising the one above it. Writing to fs.nr_open with echo takes effect immediately but is lost after a reboot; persistent changes belong in /etc/sysctl.conf.
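The three limits can be inspected with standard commands (a quick sketch; the /proc paths are the standard procfs locations on Linux):

```shell
# Per-process soft and hard nofile limits for the current shell
ulimit -Sn
ulimit -Hn

# System-wide and per-process kernel ceilings (Linux procfs)
[ -r /proc/sys/fs/file-max ] && cat /proc/sys/fs/file-max
[ -r /proc/sys/fs/nr_open ] && cat /proc/sys/fs/nr_open
true  # keep the exit status clean on systems without procfs
```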
Adjusting the Maximum Open Files
To allow a single process to open 1,000,000 file descriptors, add the following to /etc/sysctl.conf (the kernel ceilings are set slightly higher than the target to leave headroom):

fs.file-max=1100000
fs.nr_open=1100000

Apply the changes with sysctl -p. Then set the per-user limits in /etc/security/limits.conf (the leading * applies the limit to all users):

* soft nofile 1000000
* hard nofile 1000000

Maximum TCP Connections on a Server
A TCP connection is represented in the kernel by socket structures and is identified by the four‑tuple (source IP, source port, destination IP, destination port). For a server listening on a single port, the theoretical maximum is the number of distinct client (IP, port) pairs: 2³² × 2¹⁶, roughly 281 trillion connections. In practice, CPU and memory impose limits long before that.
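As a sanity check on the four‑tuple arithmetic, the theoretical connection space for one listening port is:

```shell
# 2^32 client IPs x 2^16 client ports = 2^48 distinct four-tuples
echo $(( (1 << 32) * (1 << 16) ))   # 281474976710656, roughly 281 trillion
```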
For a server with 4 GB RAM, an established idle connection consumes roughly 3.3 KB of memory, allowing about 1 million concurrent connections in the ESTABLISHED state. Real workloads that exchange data will reduce this number because of additional memory and CPU usage.
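The 4 GB estimate works out as a back‑of‑the‑envelope division, using the ~3.3 KB‑per‑connection figure above:

```shell
# 4 GB expressed in KB, divided by ~3.3 KB per idle ESTABLISHED connection
awk 'BEGIN { printf "%d\n", 4 * 1024 * 1024 / 3.3 }'   # about 1.27 million
```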
Maximum Connections a Client Can Initiate
Each outbound connection consumes a local port (0‑65535). After accounting for reserved ports, a single‑IP client can open ~64,000 connections to a single server port. If the client has multiple IPs, the limit scales by the number of IPs. If the server listens on multiple ports, the client limit multiplies by the number of server ports. The kernel parameter net.ipv4.ip_local_port_range can adjust the usable port range.
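Widening the ephemeral port range shows where the ~64,000 figure comes from (the 1024–65535 range below is an illustrative choice, not a recommendation):

```shell
# Inspect the current ephemeral range (two numbers: low high)
[ -r /proc/sys/net/ipv4/ip_local_port_range ] && cat /proc/sys/net/ipv4/ip_local_port_range

# Widening it to 1024-65535 yields roughly 64k usable client ports per IP
echo $(( 65535 - 1024 + 1 ))   # 64512

# To persist, add to /etc/sysctl.conf:
#   net.ipv4.ip_local_port_range = 1024 65535
```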
Additional Practical Considerations
The length of the accept (backlog) queue for incoming connections is capped by net.core.somaxconn (historically 128 by default; newer kernels default to 4096). Raising it, together with the application's listen() backlog, can prevent connection drops and reduce connection‑establishment latency under high concurrency.
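A sketch of raising the accept‑queue ceiling (the value 4096 is an illustrative choice; the application must also pass a larger backlog to listen() to benefit):

```shell
# Runtime change (requires root, lost on reboot)
sysctl -w net.core.somaxconn=4096

# Persistent change: add to /etc/sysctl.conf, then run `sysctl -p`
#   net.core.somaxconn = 4096
```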
After a process terminates, its ports may linger in the TIME_WAIT state for twice the maximum segment lifetime (about 60 seconds on Linux); waiting briefly, or setting SO_REUSEADDR before binding, resolves the issue.
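To see how many local sockets are lingering, the ss tool from the iproute2 package can count TIME_WAIT entries (a diagnostic sketch for Linux systems):

```shell
# Count sockets currently in TIME_WAIT
ss -tan state time-wait | wc -l

# On Linux, clients can also allow safe reuse of TIME_WAIT ports:
#   net.ipv4.tcp_tw_reuse = 1   (in /etc/sysctl.conf)
```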
Binding a client socket to a specific port overrides the kernel’s automatic port selection and is generally discouraged.
Linux locates sockets through kernel hash tables, and epoll keeps the file descriptors it monitors in a red‑black tree, which keeps lookups efficient even with very large numbers of connections.
Real‑World Scenarios
For a long‑connection push service targeting 100 million users, assume each idle connection uses ~3 KB. Five million connections then consume only about 15 GB, so a 128 GB server can comfortably hold roughly 5 million idle connections while leaving ample memory for buffers and real traffic. At that density, approximately 20 such servers would cover 100 million users.
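The server count follows directly from the arithmetic in the scenario above:

```shell
# Memory for 5 million idle connections at ~3 KB each, in GB
awk 'BEGIN { printf "%.1f\n", 5000000 * 3 / 1024 / 1024 }'   # ~14.3 GB of 128 GB

# Servers needed for 100 million users at 5 million connections each
echo $(( 100000000 / 5000000 ))   # 20
```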
Common Error: "too many open files"
This error occurs when a process exhausts a file‑descriptor limit. Resolving it means raising fs.file-max, the soft and hard nofile limits, and fs.nr_open while respecting their coupling: soft nofile ≤ hard nofile ≤ fs.nr_open.
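A quick session‑level mitigation while the persistent configuration is being rolled out (the new soft limit can be raised at most to the shell's hard limit):

```shell
# Raise the soft nofile limit for the current shell up to the hard limit
ulimit -Sn "$(ulimit -Hn)"
ulimit -Sn   # confirm the new soft limit
```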
This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact us and we will review it promptly.
Liangxu Linux
Liangxu, a self‑taught IT professional now working as a Linux development engineer at a Fortune 500 multinational, shares extensive Linux knowledge: fundamentals, applications, tools, plus Git, databases, Raspberry Pi, and more.