Mastering Nginx High-Concurrency: Architecture, Configs, and Best Practices
This article explains how Nginx handles massive traffic through its multi‑process, event‑driven design, asynchronous non‑blocking I/O, and practical configuration tweaks such as worker settings, keepalive, caching, rate limiting, file transfer optimizations, and Gzip compression.
Nginx High-Concurrency Overview
In modern internet applications, high traffic is common in flash sales, live streaming, and large e‑commerce events, posing significant technical challenges. Nginx, beyond being a web server, acts as the traffic manager of modern systems and plays a crucial role in high‑concurrency architectures.
Multi‑Process + Event‑Driven Architecture
Nginx uses a master‑worker model. The master process manages worker processes—starting, stopping, and reloading them—and loads the configuration file. Each worker process handles client requests, performs network I/O, and operates independently, so a failure in one worker does not affect others.
Asynchronous Non‑Blocking Mechanism
By combining epoll with non‑blocking I/O, Nginx can efficiently handle a large number of concurrent connections within a single thread, avoiding the overhead of multithread context switches.
Each worker is single‑threaded, using epoll (on Linux) for I/O multiplexing. Epoll monitors many file descriptors and notifies the thread only when events occur, offering better performance than select or poll under heavy loads.
With non‑blocking I/O, system calls return immediately instead of waiting for data; the worker keeps processing other connections while the kernel signals readiness through events.
Practical Configuration Examples
1. Enable multi‑process and efficient event model
worker_processes auto;
worker_cpu_affinity auto;
events {
    use epoll;                 # Recommended on Linux
    worker_connections 10240;
    multi_accept on;
}

Key directives:
use epoll: high‑performance event model on Linux.
worker_connections: maximum simultaneous connections per worker (theoretical ceiling = workers × connections; when proxying, each client request also holds an upstream connection, so effective capacity is roughly half).
multi_accept on: allows a worker to accept multiple connections at once.
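One practical caveat to the settings above: each connection consumes a file descriptor, so a high worker_connections value only takes effect if the worker's descriptor limit allows it. A sketch (the value 20480 is illustrative, not from the config above):

```nginx
# Raise the per-worker file-descriptor limit to at least worker_connections,
# otherwise connections fail once the OS default limit (often 1024) is hit.
worker_rlimit_nofile 20480;
```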
2. Improve TCP connection efficiency (keepalive & connection reuse)
keepalive_timeout 65;
keepalive_requests 1000;
tcp_nodelay on;
tcp_nopush on;

keepalive_timeout: keeps connections alive to reduce handshake overhead.
keepalive_requests: max requests per keepalive connection.
tcp_nodelay: disables Nagle’s algorithm so small packets are sent immediately instead of being buffered.
tcp_nopush: coalesces response headers and the start of the body into full packets (effective together with sendfile), reducing the number of packets sent.
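The directives above cover client-side keepalive; when Nginx acts as a reverse proxy, upstream connections can be reused as well. A hedged sketch, assuming an upstream group named backend_servers (the backend address is illustrative):

```nginx
upstream backend_servers {
    server 10.0.0.1:8080;        # illustrative backend address
    keepalive 32;                # idle upstream connections kept open per worker
}

server {
    location / {
        proxy_pass http://backend_servers;
        proxy_http_version 1.1;          # keepalive requires HTTP/1.1
        proxy_set_header Connection "";  # clear "Connection: close" so reuse works
    }
}
```

Without the last two lines, Nginx speaks HTTP/1.0 to the backend and closes each upstream connection, and the keepalive pool is never used.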
3. Enable caching to relieve backend pressure
proxy_cache_path /data/nginx/cache levels=1:2 keys_zone=static_cache:50m inactive=1h max_size=2g;
server {
    location /static/ {
        proxy_cache static_cache;
        proxy_pass http://backend_servers;
        proxy_cache_valid 200 1h;
        proxy_cache_use_stale error timeout updating;
    }
}

proxy_cache_path: defines cache storage, hierarchy, and size.
proxy_cache_valid: sets cache duration per status code.
proxy_cache_use_stale: serves stale content when the backend fails.
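When tuning the cache it helps to see whether a given response was a HIT or a MISS. A minimal sketch of two debugging aids, assuming they are added inside the /static/ location above:

```nginx
# Expose cache status (HIT, MISS, EXPIRED, STALE, ...) in a response header.
add_header X-Cache-Status $upstream_cache_status;

# Make the cache key explicit rather than relying on the default.
proxy_cache_key $scheme$proxy_host$request_uri;
```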
4. Rate limiting and protection
limit_conn_zone $binary_remote_addr zone=addr_limit:10m;
limit_conn addr_limit 20;
limit_req_zone $binary_remote_addr zone=req_limit:10m rate=10r/s;
limit_req zone=req_limit burst=20 nodelay;

limit_conn: caps maximum concurrent connections per IP.
limit_req: caps the request rate per IP; burst allows short spikes above the rate, and nodelay serves burst requests immediately rather than queuing them.
Typical scenarios include preventing malicious crawlers, limiting high‑frequency API calls, and smoothing traffic spikes.
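By default Nginx rejects limited requests with 503; for API clients, 429 Too Many Requests is usually clearer. A sketch of the relevant directives:

```nginx
limit_conn_status 429;      # status code for connection-limit rejections
limit_req_status  429;      # status code for rate-limit rejections
limit_req_log_level warn;   # log rejected requests at "warn" for monitoring
```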
5. Optimize file transfer performance
sendfile on;
aio on;
output_buffers 1 512k;

sendfile: enables zero‑copy to reduce CPU usage.
aio: activates asynchronous file I/O for large files (on Linux it takes effect together with direct I/O or a thread pool).
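For a large-file download location, a common pattern is to pair AIO with a threshold for bypassing the page cache. A sketch; the path and size threshold are illustrative:

```nginx
location /downloads/ {
    sendfile on;
    aio threads;     # offload blocking file reads to a thread pool
    directio 8m;     # reads of files larger than 8 MB bypass the page cache
}
```

Small files still go through sendfile and the page cache, while very large files avoid evicting hot cache entries.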
6. Enable Gzip compression to save bandwidth
gzip on;
gzip_types text/plain application/json application/javascript text/css;
gzip_comp_level 5;

Suitable for static assets and JSON responses; avoid compressing already compressed formats such as .zip or .jpg.
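Two directives commonly paired with the above, sketched here: skip tiny responses (compression overhead outweighs the savings) and tell intermediary caches that the body varies by encoding:

```nginx
gzip_min_length 1k;   # don't compress responses smaller than 1 KB
gzip_vary on;         # add "Vary: Accept-Encoding" for intermediary caches
```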
Mike Chen's Internet Architecture