How Many Files and TCP Connections Can a Linux Server Actually Handle?
This article explains the Linux kernel parameters that limit the number of open files and TCP connections on a server, shows how to calculate practical limits based on memory and configuration, and provides step‑by‑step examples for adjusting those limits safely.
Maximum Files a Server Can Open
1. Limiting Parameters
In Linux, the maximum number of open files is controlled by three parameters:
fs.file-max (system-wide) – the total number of file handles the kernel will allocate across all processes. Processes running as root are not restricted by this value.
soft nofile / hard nofile (process-level) – limit the number of files a single process can open; they can be configured per user in /etc/security/limits.conf.
fs.nr_open (process-level) – the kernel ceiling on any per-process limit; unlike nofile, it is a single system-wide value and cannot vary per user.
These parameters are inter‑related, so when adjusting one, the others must be considered:
If you increase soft nofile, you must also raise hard nofile because the effective limit is the lower of the two.
If you raise hard nofile, fs.nr_open must be at least as large as the new hard nofile value, since the kernel rejects any hard limit above fs.nr_open.
Modifying fs.nr_open via an echo command is not persistent; the value will be lost after a reboot.
Do not use echo to change kernel parameters: writing to /proc/sys directly is easy to get wrong and never survives a reboot. Use sysctl and persist the setting in /etc/sysctl.conf instead.
2. Example: Raising the Maximum Open Files
To allow a process to open one million file descriptors, edit /etc/sysctl.conf:
# system-wide limit; leave some buffer above one million
fs.file-max=1100000
# per-process kernel ceiling; must be at least the new hard nofile
fs.nr_open=1100000

Apply the changes with sysctl -p. Then edit /etc/security/limits.conf:

# set user-level limits for all users
*  soft  nofile  1000000
*  hard  nofile  1000000

The new limits take effect for login sessions started after the change.

Maximum TCP Connections a Server Can Support
A TCP connection is identified by a four‑tuple: source IP, source port, destination IP, destination port. For a server listening on a fixed IP and port, the theoretical maximum number of connections is the number of distinct remote (IP, port) pairs: 2^32 × 2^16 = 2^48. Practical limits are imposed by CPU and memory long before that.
Assuming only idle ESTABLISHED connections, each consumes about 3.3 KB of kernel memory. On a server with 4 GB of memory, this yields roughly one million concurrent connections, though real workloads consume considerably more per connection.
In practice, the number of usable connections depends on the workload; heavy data processing can reduce the feasible count dramatically.
Maximum Connections a Client Machine Can Initiate
A client consumes one local port per connection. With a single local IP and a single destination (server IP and port), the theoretical limit is 65,535 connections. If the client has n IP addresses, the limit becomes n × 65,535; if the server listens on m ports, each local port can be reused against each distinct destination, giving 65,535 × m. The actual usable count is lower in practice because net.ipv4.ip_local_port_range restricts which ephemeral ports the kernel hands out.
Other Relevant Settings
The length of the TCP listen backlog is capped by net.core.somaxconn (default 128 on older kernels, 4096 since Linux 5.4); the effective backlog is the smaller of this value and the backlog argument passed to listen(). Increasing it can reduce connection drops under high concurrency.
When a process is terminated with Ctrl+C, its socket may linger in the TIME_WAIT state, causing an "Address already in use" error on restart. Waiting a short period (on Linux, TIME_WAIT lasts 60 seconds) resolves it; alternatively, a server can set SO_REUSEADDR on the listening socket before bind() to rebind immediately.
Binding a client socket to a specific local port bypasses the kernel's port-selection strategy and is generally discouraged, since it increases the risk of bind conflicts when many connections are opened.
Practical Scenarios
Typical “too many open files” errors occur when the process exceeds the configured limits. Resolving the issue involves increasing soft nofile, hard nofile, fs.nr_open, and fs.file-max while respecting the coupling rules described above.
For a long‑connection push service serving 100 million users, assume each idle connection uses ~3 KB of kernel memory. Five million connections then need only about 15 GB, so a 128 GB server can sustain roughly 5 million mostly idle connections with ample headroom for application state. Approximately 20 such servers would be sufficient.
Reference: Deep Understanding of Linux Networking