
How Many Files and TCP Connections a Server Can Support and How to Tune Linux Limits

This article explains the Linux parameters that limit the maximum number of open files and TCP connections on a server, shows how to adjust those limits with configuration examples, and discusses practical constraints such as memory, port ranges, and real‑world scaling scenarios.


During a technical interview a candidate was asked how many TCP connections a single server can support, prompting a detailed discussion of Linux limits on open files and sockets.

Maximum Number of Open Files on a Server

Limiting Parameters

Linux limits the total number of open files through three key parameters:

fs.file-max – system‑wide maximum number of file descriptors (root is exempt).

soft nofile – per‑process soft limit.

fs.nr_open – per‑process hard limit.

These parameters are coupled: soft nofile cannot exceed hard nofile, and hard nofile cannot exceed fs.nr_open, so raising one often means raising the others in step. Modifying kernel parameters by echoing values into /proc is discouraged because such changes are lost after a reboot; persistent changes belong in /etc/sysctl.conf.

Example: Raising the Maximum Open Files

To allow a process to open one million file descriptors, edit /etc/sysctl.conf and /etc/security/limits.conf as follows:

vim /etc/sysctl.conf

fs.file-max=1100000   # system-wide ceiling, set with headroom above the per-process limit
fs.nr_open=1100000    # per-process ceiling; must be >= hard nofile

# Apply changes
sysctl -p

vim /etc/security/limits.conf
# Set per-user limits; the leading * applies them to all users
* soft nofile 1000000
* hard nofile 1000000
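After a re-login (limits.conf is read at session start), the effective values can be checked; a quick sanity-check sketch, with output values depending on the host:

```shell
# Check the effective limits (the /proc paths exist on Linux only)
ulimit -Sn   # soft nofile for the current shell
ulimit -Hn   # hard nofile for the current shell
cat /proc/sys/fs/file-max /proc/sys/fs/nr_open 2>/dev/null || true   # system-wide and per-process ceilings
```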

Maximum Number of TCP Connections a Server Can Support

A TCP connection is uniquely identified by its four-tuple: source IP, source port, destination IP, and destination port. For a server listening on a single IP and port, the theoretical maximum is therefore 2^32 client IPs × 2^16 client ports per IP (over two hundred trillion connections), but the real limits are imposed by CPU and memory.
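That theoretical bound is easy to verify with shell arithmetic (a throwaway calculation, assuming one listening IP and port on the server side):

```shell
# 2^32 possible client IPs x 2^16 possible client ports per IP
echo $((4294967296 * 65536))   # 281474976710656, about 281 trillion
```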

For idle connections in the ESTABLISHED state, memory is the dominant factor. Each connection consumes roughly 3.3 KB of kernel memory, so a server with 4 GB of RAM can hold about one million concurrent connections, assuming no data traffic.
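The estimate can be reproduced with integer arithmetic (a rough sketch; the 3.3 KB figure is the per-connection estimate used above and ignores memory needed by the kernel and applications themselves):

```shell
# 4 GB of RAM at ~3.3 KB per idle connection (scaled by 10 to keep integers)
ram_kb=$((4 * 1024 * 1024))    # 4 GB expressed in KB
echo $((ram_kb * 10 / 33))     # 1271001, i.e. roughly 1.27 million, hence "about one million"
```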

In practice, active traffic, encryption, and application processing increase both memory and CPU usage, so achieving 1 M connections is optimistic; many deployments consider a few thousand as a realistic upper bound.

Maximum Connections a Client Machine Can Initiate

Clients consume a local port for each outbound connection to the same destination. With a 16-bit port range (0–65535) and many ports reserved or already in use, a single-IP client can open roughly 65,000 connections to a given server IP and port.

Scenarios:

Case 1: One client IP, one server IP, one server port → up to ~65,000 connections.

Case 2: Client has n IP addresses → up to n × 65,535 connections.

Case 3: Server listens on m ports → up to 65,535 × m connections per client IP.

The kernel parameter net.ipv4.ip_local_port_range controls the usable port range and can be tuned to increase the client‑side limit.
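If the default range is too narrow, it can be widened persistently in /etc/sysctl.conf and applied with sysctl -p; the range below is illustrative, not a recommendation:

```
net.ipv4.ip_local_port_range = 1024 65535
```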

Other Practical Considerations

The socket listen queue length is capped by net.core.somaxconn (default 128 on older kernels, 4096 since Linux 5.4); increasing it, together with the backlog the application passes to listen(), reduces dropped connections during high-concurrency bursts.
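A larger cap can likewise be made persistent via /etc/sysctl.conf (the value below is illustrative; it only takes effect for sockets whose application also requests a matching backlog in listen()):

```
net.core.somaxconn = 1024
```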

Port-reuse issues can cause “address already in use” errors when a process restarts while its old sockets linger in TIME_WAIT; enabling SO_REUSEADDR on the listening socket, waiting briefly, or tuning the TIME_WAIT-related settings helps.

Binding a client socket to a specific port overrides the kernel’s automatic port selection and is generally discouraged.

Related Real‑World Problems

"Too many open files" error: occurs when a process exceeds its file-descriptor limits. Resolve it by raising fs.file-max, soft nofile, hard nofile, and fs.nr_open while respecting their coupling relationships.
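The error is easy to reproduce in a throwaway subshell by lowering the soft limit and then opening one descriptor too many (a sketch; the exact error wording printed by the shell may vary):

```shell
# Cap the subshell at 5 descriptors (0-2 are stdin/stdout/stderr),
# then open descriptors 3 and 4; the third open needs fd 5 and fails
bash -c 'ulimit -n 5; exec 3</dev/null 4</dev/null 5</dev/null'
# stderr mentions "Too many open files" once the limit is hit
```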

Estimating server capacity: for a 4 GB server, roughly 1 M idle TCP connections are possible; a 128 GB server can maintain about 5 M, so a push service with 100 M users needs on the order of 20 servers.
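The 20-server figure follows directly from the numbers above (a sketch using the 5 M-connections-per-128 GB-server estimate, which already builds in headroom below the raw memory limit):

```shell
users=$((100 * 1000 * 1000))      # 100 M concurrently connected users
per_server=$((5 * 1000 * 1000))   # ~5 M idle connections per 128 GB server
echo $((users / per_server))      # 20 servers
```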

Code Example: Client Socket Creation

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.SocketChannel;

public class ClientSocketDemo {
    public static void main(String[] args) throws IOException {
        SocketChannel sc = SocketChannel.open();
        // Optional explicit bind of the local port (generally not recommended;
        // without it the kernel picks a free ephemeral port automatically)
        sc.bind(new InetSocketAddress("localhost", 9999));
        sc.connect(new InetSocketAddress("localhost", 8080));
        System.out.println("waiting..........");
    }
}

Linux treats sockets as files; each opened socket consumes a file descriptor and memory, which is why the kernel enforces limits at multiple levels.

Tags: TCP, Linux, network performance, system tuning, file descriptors, server limits
Written by Architect's Guide
Dedicated to sharing programmer-architect skills—Java backend, system, microservice, and distributed architectures—to help you become a senior architect.
