Boost Nginx Throughput 10×: Key Settings for High‑Concurrency
This guide explains how to achieve up to ten‑fold performance gains in Nginx by aligning worker processes with CPU cores, raising file‑descriptor limits, extending keep‑alive settings, and enabling zero‑copy transmission features such as sendfile, tcp_nopush, and tcp_nodelay.
Nginx is a cornerstone of large‑scale architectures. To unlock its full potential in high‑concurrency scenarios, several configuration knobs must be tuned so that the server can fully utilize multi‑core CPUs, handle millions of simultaneous connections, and minimize network overhead.
1. worker_processes – Upper bound of CPU utilization
Each Nginx worker runs a single‑threaded event loop. To saturate all CPU cores, the number of workers should match the number of cores. The simplest way is to let Nginx detect the core count automatically.
worker_processes auto;
worker_cpu_affinity auto; # bind each worker to a specific CPU
Setting auto makes Nginx start one worker per core, ensuring maximum CPU usage.
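As a quick sanity check, you can compare the core count the OS reports against the number of running workers; with worker_processes auto the two should match. This is a sketch that assumes a Linux host, and prints 0 workers if Nginx is not running:

```shell
# Number of CPU cores visible to the OS.
cores=$(nproc)

# Number of Nginx worker processes currently running
# (prints 0 when Nginx is not running).
workers=$(pgrep -c -f "nginx: worker" || true)

echo "cores=${cores} workers=${workers}"
```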
2. worker_connections – Core of concurrent connection capacity
In Linux, every network connection consumes a file descriptor. The total number of simultaneous connections is limited by both the per‑process descriptor limit and the worker_connections directive.
worker_rlimit_nofile: the maximum number of open files per worker process (commonly 65535 or higher).
worker_connections: the maximum number of connections a single worker can handle.
Maximum concurrent connections = worker_processes * worker_connections.
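To see how the formula plays out, here is a small shell sketch that reads the shell's current open-file limit and works through the arithmetic for the example values used in this section (a 4-core machine is an assumption, not a measurement):

```shell
# Current per-process open-file limit for this shell.
# worker_rlimit_nofile cannot exceed the OS hard limit (check with `ulimit -Hn`).
ulimit -n

# Theoretical ceiling: worker_processes * worker_connections.
workers=4          # assumed 4-core machine
connections=10240  # matches the worker_connections example in this section
echo $(( workers * connections ))   # → 40960
```

If the computed ceiling exceeds the open-file limit, the descriptor limit wins, which is why worker_rlimit_nofile is raised alongside worker_connections.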
worker_rlimit_nofile 65535; # raise the default 1024 open-file limit
events {
worker_connections 10240; # each worker supports ten‑thousand connections
use epoll; # efficient I/O multiplexing on Linux
multi_accept on; # accept many new connections at once
}

3. Reduce handshake overhead – keepalive settings
Frequent TCP handshakes (three‑way handshake and four‑way termination) consume CPU and can cause many sockets to linger in TIME_WAIT. Extending keep‑alive keeps connections open longer and reduces the handshake frequency.
http {
keepalive_timeout 65; # keep the connection alive for 65 seconds
keepalive_requests 10000; # allow up to 10,000 requests per connection
}

Increasing keepalive_requests from the default ~100 to thousands dramatically improves throughput under load.
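To check whether connection churn is actually a problem on a given box, you can count sockets lingering in TIME_WAIT; a consistently large number under load suggests connections are being torn down instead of reused. A sketch assuming a Linux host with iproute2's ss:

```shell
# Count TCP sockets currently in TIME_WAIT (tail skips the header line).
ss -tan state time-wait | tail -n +2 | wc -l
```

Run it before and after extending keep-alive; the count should drop noticeably once clients reuse connections.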
4. Zero‑copy and network transmission optimizations
Enabling zero‑copy avoids copying data between kernel and user space, letting the kernel stream data directly from disk to the network interface.
sendfile on; # activate zero‑copy file transmission
tcp_nopush on; # send HTTP headers and body in one packet
tcp_nodelay on; # disable Nagle's algorithm for low-latency small packets
These settings lower CPU usage and improve network efficiency, especially for static file serving.
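In context, the three directives typically sit together in the http block. A minimal sketch for a static-file server follows; the server_name and root values are placeholders, not recommendations:

```nginx
http {
    sendfile    on;  # kernel streams the file, no user-space copy
    tcp_nopush  on;  # only takes effect with sendfile: fill packets before sending
    tcp_nodelay on;  # flush small writes immediately on keep-alive connections

    server {
        listen      80;
        server_name example.com;      # placeholder
        root        /var/www/static;  # placeholder

        location / {
            try_files $uri $uri/ =404;
        }
    }
}
```

Note that tcp_nopush applies only when sendfile is on; Nginx resolves the apparent conflict with tcp_nodelay itself, so enabling all three together is the usual pattern.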
By applying the above four groups of directives—matching workers to CPUs, raising file‑descriptor limits, extending keep‑alive, and enabling zero‑copy—you can achieve roughly a ten‑fold performance increase for Nginx in high‑traffic environments.
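Putting the four groups together, a minimal nginx.conf sketch using the example values from this article (tune them to your own hardware and traffic, they are not universal defaults):

```nginx
worker_processes     auto;
worker_cpu_affinity  auto;
worker_rlimit_nofile 65535;

events {
    use epoll;
    multi_accept       on;
    worker_connections 10240;
}

http {
    keepalive_timeout  65;
    keepalive_requests 10000;

    sendfile    on;
    tcp_nopush  on;
    tcp_nodelay on;
}
```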
Architect Chen
Sharing over a decade of architecture experience from Baidu, Alibaba, and Tencent.
