How to Boost Nginx Concurrency from 5K to 50K: Key Config Tweaks
This guide explains how to dramatically increase Nginx's concurrent handling capacity by tuning worker processes, connections, keep‑alive settings, and high‑performance I/O options, providing concrete configuration examples and practical advice for high‑traffic deployments.
Worker Processes and Connections
worker_processes defines how many worker processes the master process spawns; pinning them to CPU cores (with auto or an explicit affinity mask) reduces context switches and cache misses.
worker_processes auto;
worker_cpu_affinity auto;

worker_connections sets the maximum number of simultaneous connections each worker can handle. The default of 1024 is far too low for high‑traffic scenarios; raising it to a large value (e.g., 65535) lifts Nginx's concurrency ceiling, which is bounded by roughly worker_processes × worker_connections. Remember to raise the matching file‑descriptor limits (worker_rlimit_nofile and the system's ulimit -n), or workers will hit the OS limit first.
events {
    use epoll;  # Linux high‑performance I/O event model
    worker_connections 65535;
}

keepalive_timeout and keepalive_requests control HTTP keep‑alive behavior. A timeout of around 65 seconds balances held resources against reconnection latency, while allowing up to 10 000 requests per persistent connection avoids the cost of repeated TCP (and TLS) handshakes.
http {
    keepalive_timeout 65;
    keepalive_requests 10000; # up to 10k requests per connection
}

High‑Performance I/O Settings
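Before moving on, the keep‑alive behavior configured above can be exercised end‑to‑end. A minimal Python sketch, using a local stand‑in server rather than Nginx itself (the handler, host, and port here are illustrative assumptions): if keep‑alive is working, every request leaves through the same client‑side port.

```python
# Verify that HTTP keep-alive reuses a single TCP connection.
import http.client
import threading
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # HTTP/1.1 enables keep-alive by default
    def do_GET(self):
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):  # silence per-request logging
        pass

server = ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
ports = set()
for _ in range(3):
    conn.request("GET", "/")
    resp = conn.getresponse()
    resp.read()  # drain the body so the connection can be reused
    ports.add(conn.sock.getsockname()[1])  # local port of the client socket

print(len(ports))  # 1 -> all three requests shared one TCP connection
conn.close()
server.shutdown()
```

Had the server closed the connection after each response (Connection: close), each request would typically show a fresh ephemeral port instead.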
The trio of sendfile, tcp_nopush, and tcp_nodelay forms the core of Nginx's efficient data transmission.
sendfile on enables zero‑copy file transfer, moving data directly from disk to the kernel socket buffer, eliminating user‑space copying and reducing CPU load.
tcp_nopush on (TCP_NOPUSH on FreeBSD, TCP_CORK on Linux) holds back partial frames until a full TCP segment is ready, decreasing packet count and increasing throughput. It takes effect only when sendfile is enabled.
tcp_nodelay on disables Nagle’s algorithm, ensuring that small packets are sent immediately, complementing tcp_nopush for low‑latency communication.
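At the socket layer, tcp_nodelay maps directly onto the TCP_NODELAY socket option. A small Python sketch of the same toggle (purely illustrative; Nginx sets this on its own sockets):

```python
import socket

# Create a TCP socket and disable Nagle's algorithm,
# as `tcp_nodelay on` does for Nginx's connections.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

# A non-zero value confirms small writes will be sent immediately
# rather than coalesced by Nagle's algorithm.
nodelay = s.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY)
print(nodelay != 0)  # True
s.close()
```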
sendfile on;
# Must be enabled for tcp_nopush
tcp_nopush on;
# Disable Nagle’s algorithm
tcp_nodelay on;

By combining the above settings (a tuned worker count, a higher connection ceiling, sensible keep‑alive parameters, and zero‑copy plus TCP optimizations), Nginx can scale from a few thousand to tens of thousands of concurrent connections while keeping CPU usage and latency low.
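The zero‑copy path that sendfile enables can be seen in miniature with Python's os.sendfile wrapper, which invokes the same syscall Nginx uses on Linux. In this sketch a temporary file and a socketpair stand in for a static file and an accepted client connection:

```python
import os
import socket
import tempfile

# A small "static file" to serve.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"hello from the page cache\n")
    path = f.name

src = open(path, "rb")
server_side, client_side = socket.socketpair()  # stand-in for a client connection

# The kernel copies bytes from the file straight into the socket buffer;
# the data never passes through this process's user-space memory.
sent = os.sendfile(server_side.fileno(), src.fileno(), 0, os.path.getsize(path))
server_side.close()  # EOF for the reader

received = client_side.recv(1024)
print(sent, received)

src.close()
client_side.close()
os.unlink(path)
```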
Architect Chen
Sharing over a decade of architecture experience from Baidu, Alibaba, and Tencent.
