
Boost Nginx Performance: Essential Linux Kernel Tweaks for High Concurrency

This guide explains why default Linux kernel settings are insufficient for high‑traffic Nginx servers and provides a curated list of sysctl parameters—such as file‑max, tcp_tw_reuse, and net.core buffers—along with explanations and tuning tips to maximize concurrent connections and overall performance.

Raymond Ops

Default Linux kernel parameters are chosen for the most generic scenarios and do not suit the demands of high‑concurrency web servers, so they must be adjusted to let Nginx achieve higher performance.

When tuning the kernel there are many possible changes, but we usually tailor them to the specific role of Nginx—whether it serves static content, acts as a reverse proxy, or provides real‑time image thumbnailing. This article focuses on the most common TCP network parameters that enable Nginx to handle more concurrent requests.

First, edit /etc/sysctl.conf and add the following widely used settings:

<code># Kernel parameters tuned for a high-concurrency Nginx server
net.ipv4.tcp_syncookies = 1
fs.file-max = 999999
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_fin_timeout = 30
net.ipv4.tcp_max_tw_buckets = 5000
net.ipv4.ip_local_port_range = 1024 61000
net.ipv4.ip_local_reserved_ports = 34733,35738,45487,46520,57557,53207,53478
net.ipv4.tcp_rmem = 10240 87380 12582912
net.ipv4.tcp_wmem = 10240 87380 12582912
net.core.netdev_max_backlog = 8096
net.core.rmem_default = 6291456
net.core.wmem_default = 6291456
net.core.rmem_max = 12582912
net.core.wmem_max = 12582912
net.ipv4.tcp_max_syn_backlog = 1024</code>

After adding the parameters, run sysctl -p to apply them.
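Applying the file requires root, but any individual setting can be verified without root by reading procfs directly; a sysctl key maps to a path under /proc/sys by replacing dots with slashes. A minimal sketch (the key chosen here is just an example):

```shell
# Apply every setting from /etc/sysctl.conf (requires root):
#   sudo sysctl -p

# Verify a single setting by reading procfs directly; a sysctl key maps
# to a /proc/sys path by replacing dots with slashes:
key="net.ipv4.tcp_fin_timeout"
path="/proc/sys/$(echo "$key" | tr . /)"
if [ -r "$path" ]; then
    cat "$path"    # prints the currently active value
fi
```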

fs.file-max = 999999 : sets the system‑wide maximum number of open file handles (per‑process limits are controlled separately, e.g. via ulimit -n and Nginx's worker_rlimit_nofile directive). Because each connection consumes a descriptor, this directly affects the maximum concurrent connections; adjust according to actual workload.
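The system‑wide ceiling and the per‑process limit can be inspected side by side; on Linux the former lives in procfs, the latter is a shell resource limit:

```shell
# System-wide ceiling on open file handles (what fs.file-max controls):
if [ -r /proc/sys/fs/file-max ]; then
    cat /proc/sys/fs/file-max
fi
# Per-process descriptor limit for the current shell (what ulimit controls):
ulimit -n
```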

net.ipv4.tcp_tw_reuse = 1 : allows sockets in TIME‑WAIT state to be reused for new outbound TCP connections, which is valuable for reverse proxies that open many upstream connections and therefore accumulate TIME‑WAIT sockets.

net.ipv4.tcp_keepalive_time = 600 : sets how long (in seconds) a connection must stay idle before TCP keepalive probes are sent; the default is 2 hours (7200 s), and reducing it speeds up cleanup of dead connections.

net.ipv4.tcp_fin_timeout = 30 : defines the maximum time a socket stays in FIN‑WAIT‑2 after the server actively closes a connection.

net.ipv4.tcp_max_tw_buckets = 5000 : caps the number of TIME‑WAIT sockets the OS will keep; exceeding this limit causes immediate removal and a warning. The default (180 000) can degrade server performance if too many sockets remain.

net.ipv4.tcp_max_syn_backlog = 1024 : sets the maximum length of the SYN queue during the TCP three‑way handshake; increasing it helps prevent dropped connection attempts when Nginx is busy.
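Note that tcp_max_syn_backlog sizes only the half‑open (SYN) queue; the separate accept queue for completed handshakes is capped by net.core.somaxconn, and Nginx's own listen backlog (511 by default on Linux) is clipped to that value as well. The current cap can be read from procfs:

```shell
# The SYN (half-open) queue is sized by tcp_max_syn_backlog; the accept
# queue for completed handshakes is capped separately by net.core.somaxconn.
if [ -r /proc/sys/net/core/somaxconn ]; then
    cat /proc/sys/net/core/somaxconn
fi
```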

net.ipv4.ip_local_port_range = 1024 61000 : defines the range of local ports used for UDP and TCP connections.
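This range bounds how many outbound connections the machine can hold open to a single remote address and port at once, since each needs a distinct local port. With the values above:

```shell
# With ip_local_port_range = 1024 61000, the pool of ephemeral ports
# available for outbound connections is:
echo $((61000 - 1024 + 1))   # 59977 ports
```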

net.ipv4.tcp_rmem = 10240 87380 12582912 and net.ipv4.tcp_wmem = 10240 87380 12582912 : specify the minimum, default, and maximum sizes of the TCP receive and send buffers (sliding window).

net.core.netdev_max_backlog = 8096 : sets the maximum size of the packet queue when the NIC receives data faster than the kernel can process it.

net.core.rmem_default = 6291456 and net.core.wmem_default = 6291456 : define the default sizes of the socket receive and send buffers.

net.core.rmem_max = 12582912 and net.core.wmem_max = 12582912 : define the maximum sizes of the socket receive and send buffers; these values should be balanced against total physical memory and the expected number of concurrent Nginx connections.

net.ipv4.tcp_syncookies = 1 : not a performance setting as such; it enables SYN cookies, which help mitigate SYN flood attacks.

Note: the size of the sliding window and socket buffers influences the number of concurrent connections because each TCP connection consumes memory for its buffers, which expand or shrink based on server load.
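A back‑of‑envelope calculation makes the memory cost concrete. If every connection actually held buffers at the default sizes from the tcp_rmem/tcp_wmem settings above (87380 bytes each way), the figures would be (real usage is normally far lower, since buffers start at the minimum and only grow under load):

```shell
# Worst-case estimate: 100,000 connections each holding default-size
# receive and send buffers (87380 bytes each, per tcp_rmem/tcp_wmem):
echo $(( (87380 + 87380) * 100000 / 1024 / 1024 ))   # MB (about 16 GB)
```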

The maximum number of concurrent Nginx connections is ultimately determined by the worker_processes and worker_connections directives in nginx.conf.
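The theoretical ceiling is simply the product of the two directives; the figures below are hypothetical. Note that when Nginx acts as a reverse proxy, each client connection also consumes an upstream connection, roughly halving the effective capacity:

```shell
# Hypothetical figures: 4 worker processes, 10240 connections per worker.
# The theoretical ceiling on simultaneous connections is their product:
echo $((4 * 10240))   # 40960
```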

Nginx is a performance‑oriented HTTP server that uses an asynchronous, event‑driven architecture (epoll on Linux, kqueue on BSD) instead of a one‑thread‑per‑connection model, resulting in lower memory usage and higher stability compared to Apache or lighttpd. Its modular design and rich ecosystem of core and third‑party modules make it highly configurable.

Tags: performance, operations, Linux, Nginx, sysctl, kernel-tuning
Written by

Raymond Ops

Linux ops automation, cloud-native, Kubernetes, SRE, DevOps, Python, Golang and related tech discussions.
