
Maximum Number of TCP Connections a Server Can Support and Related Linux Limits

This article explains how Linux kernel parameters, memory size, and file descriptor limits determine the maximum number of TCP connections a server or client can handle, provides configuration examples for increasing those limits, and discusses practical considerations such as port ranges and connection overhead.


Maximum Number of Files a Server Can Open

Limiting Parameters

In Linux, everything is a file, so the maximum number of files a server can open is limited by three parameters:

fs.file-max (system‑wide) : total number of file handles the whole system can allocate. Processes running as root are not limited by this value.

soft nofile / hard nofile (per‑process) : maximum number of files a single process can open. These can be set per user in /etc/security/limits.conf, but not per individual process.

fs.nr_open (per‑process) : the kernel's upper bound on any process's open‑file limit. It is a single system‑wide setting and cannot be varied per user; hard nofile may not exceed it.

When adjusting these values, keep the following points in mind:

If you increase soft nofile , you must also raise the hard nofile limit; the effective value is the lower of the two.

If you raise hard nofile , fs.nr_open must be increased accordingly; otherwise users may be unable to log in.

Modifying fs.nr_open with an echo "xxx" > /proc/sys/fs/nr_open command is not persistent—after a reboot the change is lost, potentially locking out users.
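To sanity-check the effective values from application code, the kernel settings can be read straight out of /proc on Linux. A minimal sketch (the class and helper names are mine; on non-Linux systems the method simply returns -1):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class ProcLimits {
    // Read a single numeric value from /proc/sys/fs; returns -1 if unreadable.
    static long readProcValue(String name) {
        Path p = Paths.get("/proc/sys/fs", name);
        try {
            return Long.parseLong(Files.readAllLines(p).get(0).trim());
        } catch (IOException | NumberFormatException | IndexOutOfBoundsException e) {
            return -1;
        }
    }

    public static void main(String[] args) {
        System.out.println("fs.file-max = " + readProcValue("file-max"));
        System.out.println("fs.nr_open  = " + readProcValue("nr_open"));
    }
}
```

This is equivalent to `cat /proc/sys/fs/file-max`, just done from the JVM.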

Example of Adjusting Maximum Open Files on Server

To allow a process to open 1,000,000 file descriptors, you can edit /etc/sysctl.conf as follows (remember to apply with sysctl -p after editing):

vim /etc/sysctl.conf

fs.file-max=1100000   # system-wide; leave some headroom above 1,000,000
fs.nr_open=1100000    # must be at least as large as hard nofile

Then edit /etc/security/limits.conf to set the per‑process limits:

vim /etc/security/limits.conf

# set both soft and hard limits for all users (the domain column is required)
*  soft  nofile  1000000
*  hard  nofile  1000000

Maximum Number of Connections a Server Can Support

A TCP connection is essentially a pair of kernel socket objects identified by the four‑tuple (source IP, source port, destination IP, destination port). For a server listening on a single IP and port, the theoretical maximum is 2^32 client IP addresses × 2^16 client ports = 2^48 ≈ 2.8 × 10^14 connections, but real servers are limited by CPU and memory long before that.

For connections that are only in the ESTABLISHED state and not transmitting data, memory is the primary constraint. On a 4 GB server, each ESTABLISHED connection consumes about 3.3 KB of RAM, allowing roughly 1 million concurrent idle connections. Actual numbers will be lower when data processing and CPU usage are considered.
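The estimate above is a simple division, reproduced here as a back-of-the-envelope check (a sketch; the ~3.3 KB figure is the per-socket cost cited above, and the class name is mine):

```java
public class ConnCapacity {
    // Rough capacity estimate: total RAM divided by per-connection kernel memory.
    static long estimateIdleConnections(long ramBytes, double bytesPerConn) {
        return (long) (ramBytes / bytesPerConn);
    }

    public static void main(String[] args) {
        long ram = 4L * 1024 * 1024 * 1024;  // 4 GB of RAM
        double perConn = 3.3 * 1024;         // ~3.3 KB per idle ESTABLISHED socket
        // Prints a value around 1.27 million idle connections
        System.out.println(estimateIdleConnections(ram, perConn));
    }
}
```

In practice application buffers and CPU cost push the real number well below this bound.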

Maximum Number of Connections a Client Machine Can Initiate

Clients consume a local port for each outgoing connection. Ports are 16‑bit values (0‑65535), but ports below 1024 are reserved, leaving roughly 64,000 usable ports per client IP address; the kernel further restricts ephemeral ports to net.ipv4.ip_local_port_range (often 32768‑60999 by default).

Scenarios:

Case 1: One client IP, one server IP, one server port → at most 65,535 connections (one per client port).

Case 2: Client has n IP addresses → up to n × 65,535 connections.

Case 3: Server listens on m ports → up to 65,535 × m connections.

The client port range can be expanded by adjusting the kernel parameter net.ipv4.ip_local_port_range . With proper configuration, a client can also initiate over a million connections.
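The arithmetic behind these cases can be sketched in a few lines (class and method names are mine; the 1024‑65535 range assumes a widened net.ipv4.ip_local_port_range):

```java
public class ClientPorts {
    // Usable local ports for one client IP, given net.ipv4.ip_local_port_range.
    static int usablePorts(int low, int high) {
        return high - low + 1;
    }

    // Max outgoing connections to one (serverIP, serverPort) pair:
    // ports per IP multiplied by the number of client IPs (Case 2 above).
    static long maxConnections(int low, int high, int clientIps) {
        return (long) usablePorts(low, high) * clientIps;
    }

    public static void main(String[] args) {
        // e.g. after: sysctl -w net.ipv4.ip_local_port_range="1024 65535"
        System.out.println(maxConnections(1024, 65535, 1));  // 64512 per client IP
        System.out.println(maxConnections(1024, 65535, 20)); // 1290240 -- over a million
    }
}
```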

Other Considerations

The length of the TCP accept queue is capped by net.core.somaxconn (default 128 on older kernels, 4096 since Linux 5.4); the effective backlog is the smaller of the value passed to listen() and somaxconn. Increasing it can reduce connection drops under high concurrency.

After terminating a process with Ctrl+C, the port may remain in TIME_WAIT; waiting a short period allows the OS to reclaim it.

Clients should generally avoid calling bind() to let the kernel choose an available port automatically.
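The backlog cap mentioned above can be requested per socket from Java: ServerSocketChannel.bind takes a backlog hint, which the kernel silently caps at net.core.somaxconn. A minimal sketch (the class name is mine; port 0 asks the kernel for an ephemeral port):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.ServerSocketChannel;

public class BacklogDemo {
    // Bind on an ephemeral port with a requested backlog; returns the bound port,
    // or -1 if binding failed.
    static int bindWithBacklog(int backlog) {
        try (ServerSocketChannel server = ServerSocketChannel.open()) {
            // Request a larger accept queue; capped by net.core.somaxconn.
            server.bind(new InetSocketAddress(0), backlog);
            return ((InetSocketAddress) server.getLocalAddress()).getPort();
        } catch (IOException e) {
            return -1;
        }
    }

    public static void main(String[] args) {
        System.out.println("bound to port " + bindWithBacklog(4096));
    }
}
```

Raising the backlog here only helps if somaxconn has been raised as well.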

Related Practical Issues

"Too many open files" errors occur when a process exceeds the allowed number of file descriptors. The solution is to increase fs.file-max , soft nofile , and fs.nr_open , keeping their inter‑dependencies in mind.
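When diagnosing these errors, the JVM can report its own descriptor usage on Unix-like systems via the HotSpot-specific com.sun.management.UnixOperatingSystemMXBean. A sketch with a fallback for other platforms (the class and method names are mine):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.OperatingSystemMXBean;
import com.sun.management.UnixOperatingSystemMXBean;

public class FdUsage {
    // Max fd count for this process, or -1 when the platform bean is unavailable.
    static long maxFds() {
        OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
        if (os instanceof UnixOperatingSystemMXBean) {
            return ((UnixOperatingSystemMXBean) os).getMaxFileDescriptorCount();
        }
        return -1;
    }

    public static void main(String[] args) {
        // Compare this against the soft nofile limit configured earlier
        System.out.println("max open files for this process: " + maxFds());
    }
}
```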

When estimating capacity for a large‑scale push service (e.g., 100 million idle long‑lived connections), memory is the dominant factor. The ~3 KB per connection covers only the kernel socket objects; once application‑layer state and buffers are added, a server with 128 GB RAM can realistically hold about 5 million connections, so roughly 20 servers would suffice for 100 million users.
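The fleet sizing above is just a ceiling division over the per-server capacity (a sketch; the class name is mine, and 5 million per server is the estimate from the text):

```java
public class PushSizing {
    // Servers needed for a target connection count, given per-server capacity.
    static long serversNeeded(long totalConns, long perServer) {
        return (totalConns + perServer - 1) / perServer; // ceiling division
    }

    public static void main(String[] args) {
        // 100 million connections at ~5 million per server
        System.out.println(serversNeeded(100_000_000L, 5_000_000L)); // 20
    }
}
```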

Code Samples

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.SocketChannel;

public class ClientBindDemo {
    public static void main(String[] args) throws IOException {
        SocketChannel sc = SocketChannel.open();
        // Optional: pin the local port; normally omit this and let the kernel pick
        sc.bind(new InetSocketAddress("localhost", 9999));
        sc.connect(new InetSocketAddress("localhost", 8080));
        System.out.println("waiting..........");
    }
}
Written by

Architecture Digest

Focusing on Java backend development, covering application architecture from top-tier internet companies (high availability, high performance, high stability), big data, machine learning, Java architecture, and other popular fields.
