Boost Nginx Concurrency: Master the 4 Core Parameters for 100k+ Connections
This guide explains how to unleash Nginx’s full concurrency potential by configuring worker_processes, worker_connections, system limits (ulimit and sysctl), and multi_accept, providing practical code snippets, verification commands, and a ready‑to‑use high‑traffic configuration example.
1. worker_processes (Number of worker processes)
This setting determines how many CPU cores Nginx can utilize. Set it to the number of CPU cores, or use auto (supported since Nginx 1.3.8/1.2.5), which resolves to the number of available cores automatically. Example:
<ol><li><code># Simplest: let Nginx match the CPU core count</code></li><li><code>worker_processes auto;</code></li><li><code># Or pin it manually (common in production, where the core count is known)</code></li><li><code>worker_processes 8; # adjust to your CPU core count</code></li></ol>Best practice: worker_processes = physical CPU cores × 1–2, using the 2× factor on hyper‑threaded CPUs.
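To see what value auto will resolve to on your machine, you can query the core count from the shell. A quick sketch assuming a standard Linux environment; note that both commands report logical (not physical) cores:

```shell
# Logical CPU count – this is what `worker_processes auto` resolves to
nproc

# Cross-check against /proc/cpuinfo (also counts logical processors)
grep -c ^processor /proc/cpuinfo
```

On hyper‑threaded machines these already include the 2× factor, so avoid doubling again. In containers, nproc respects CPU affinity and cgroup limits while /proc/cpuinfo shows all host CPUs, so the two can differ.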
2. worker_connections (Maximum connections per worker)
Defines the maximum concurrent connections each worker can handle.
<ol><li><code>events {</code></li><li><code> # default is 1024 – too low for high traffic</code></li><li><code> worker_connections 10240; # common range: 10240–65535</code></li><li><code> # or higher if system limits allow, e.g.:</code></li><li><code> # worker_connections 32768;</code></li><li><code>}</code></li></ol>Note that the directive may appear only once per events block, so keep the alternative commented out. The theoretical maximum concurrency equals worker_processes × worker_connections. For example, 8 processes × 10240 connections = 81,920 concurrent connections.
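That multiplication, and the file-descriptor budget it implies, is easy to sanity-check in the shell. A sketch using the example numbers above; the ×2 factor is an assumption for proxying workloads, where each client connection also opens an upstream connection:

```shell
workers=8
connections=10240

# Theoretical concurrency ceiling = worker_processes * worker_connections
echo $(( workers * connections ))        # 81920

# Rough descriptor budget when proxying:
# each proxied client connection consumes ~2 file descriptors
echo $(( workers * connections * 2 ))    # 163840 – keep nofile limits above this
```

When Nginx also serves static files, descriptors for open files come on top of this, so leave generous headroom in the limits configured below.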
3. System‑level connection limits (ulimit and sysctl)
Nginx is also constrained by the operating system’s file‑descriptor and network limits.
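Before raising anything, it is worth recording the current values. A sketch for a typical Linux box; all of these are readable without root:

```shell
# Per-session soft and hard open-file limits for this shell
ulimit -Sn
ulimit -Hn

# Kernel-wide maximum number of open file descriptors
cat /proc/sys/fs/file-max
```

If ulimit -Sn still shows 1024 after you edit limits.conf, remember that the change only applies to new login sessions.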
Modify user‑level limits (ulimit):
<ol><li><code># Edit /etc/security/limits.conf (persists across reboots)</code></li><li><code>* soft nofile 1048576</code></li><li><code>* hard nofile 1048576</code></li><li><code>root soft nofile 1048576</code></li><li><code>root hard nofile 1048576</code></li></ol>Note that limits.conf only applies to PAM login sessions; for an Nginx managed by systemd, set LimitNOFILE= in the unit file or use Nginx's worker_rlimit_nofile directive instead. Modify kernel parameters (sysctl):
<ol><li><code># Add to /etc/sysctl.conf, then apply with sysctl -p</code></li><li><code>fs.file-max = 1048576</code></li><li><code>net.core.somaxconn = 32768</code></li><li><code>net.ipv4.tcp_max_syn_backlog = 16384</code></li><li><code>net.core.netdev_max_backlog = 16384</code></li><li><code># TCP reuse optimizations (recommended)</code></li><li><code>net.ipv4.tcp_tw_reuse = 1</code></li><li><code>net.ipv4.tcp_tw_recycle = 0 # removed in Linux 4.12 – do not enable</code></li><li><code>net.ipv4.tcp_fin_timeout = 15</code></li><li><code>net.ipv4.tcp_keepalive_time = 300</code></li></ol>
4. multi_accept (Accept multiple connections at once)
Enables each worker to accept many new connections in a single batch instead of one‑by‑one.
<ol><li><code>events {</code></li><li><code> use epoll; # recommended on Linux</code></li><li><code> worker_connections 10240;</code></li><li><code> multi_accept on; # default is off</code></li><li><code>}</code></li></ol>
Final high‑concurrency configuration (typical for 100k+ connections)
worker_processes auto;          # or CPU cores ×2
worker_rlimit_nofile 1048576;   # raises the open-file limit for worker processes

events {
    use epoll;
    worker_connections 32768;
    multi_accept on;
}

http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    keepalive_requests 100000;
    limit_conn_zone $binary_remote_addr zone=addr:10m;
    limit_conn addr 50;
    # additional optimizations can be added here
}

To verify the effective limits, check Nginx's maximum open files and current connections:
# Show Nginx max open files
cat /proc/$(cat /run/nginx.pid)/limits | grep "Max open files"
# Show current system connections
ss -s
netstat -n | awk '/^tcp/ {++S[$NF]} END {for(a in S) print a, S[a]}'

Mike Chen's Internet Architecture
