
How to Boost Nginx Concurrency to 100k+ Connections: Practical Tuning Guide

This guide shows how to raise Nginx's concurrent connection capacity by tuning worker_processes, worker_connections, the event model, OS resource limits, and I/O settings, with concrete configuration snippets and kernel parameters for reaching 100,000+ simultaneous connections.

Architect Chen

worker_processes: Setting the Concurrency Ceiling

Nginx uses a multi‑process, event‑driven model; each worker is a separate OS process that can fully utilize one CPU core. Setting worker_processes to the number of CPU cores (or 1.5‑2× the core count for I/O‑heavy workloads) maximizes CPU usage.

worker_processes auto; # recommended, automatically equals CPU core count
# or explicit: worker_processes 8; # if the server has 8 cores

Too few workers (e.g., the default 1) limit CPU utilization and concurrency, while too many workers cause excessive context‑switch overhead and degrade performance. In practice, match the worker count to the core count, adjusting upward for I/O‑bound workloads and keeping it near the core count for CPU‑bound tasks.
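On many-core machines, pinning each worker to its own core further reduces cache churn from workers migrating between cores. A minimal sketch (worker_cpu_affinity auto requires nginx 1.9.10 or later):

```nginx
worker_processes auto;     # one worker per CPU core
worker_cpu_affinity auto;  # pin each worker to its own core (nginx >= 1.9.10)
```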

worker_connections and the Event Model

The worker_connections directive defines the maximum number of simultaneous connections a single worker can handle. Nginx’s event‑driven architecture allows a single process to manage many connections efficiently.

events {
    worker_connections 65535; # maximum connections per worker
    multi_accept on;        # accept multiple connections at once
    use epoll;             # Linux‑recommended event method (use kqueue on BSD/macOS)
}

The total theoretical concurrency is roughly worker_processes × worker_connections, minus a few internal connections. This value is bounded by the operating system’s file‑descriptor limit, so the system’s ulimit -n must be increased accordingly.
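Putting the numbers together: 8 workers × 65,535 connections ≈ 524,000 theoretical connections, but each worker can only open as many file descriptors as it is allowed. Nginx can raise its own per‑worker descriptor limit directly, so the worker does not silently hit the inherited ulimit; a sketch:

```nginx
# main (top-level) context
worker_processes auto;
worker_rlimit_nofile 65535;    # per-worker FD limit; keep >= worker_connections

events {
    worker_connections 65535;  # note: a proxied request consumes 2 FDs
                               # (one client-side, one upstream-side)
}
```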

System Resource Limits (ulimit + Kernel Parameters)

Beyond Nginx configuration, the OS limits on open file descriptors and TCP parameters must be raised to avoid hitting hard caps.

# Increase the maximum number of file descriptors
fs.file-max = 2097152

# Increase TCP connection backlog
net.core.somaxconn = 65535
net.ipv4.tcp_max_syn_backlog = 8192

# Reuse TIME_WAIT sockets for new outbound connections; shorten FIN-WAIT-2 timeout
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_fin_timeout = 15

# Expand the local port range for outbound connections
net.ipv4.ip_local_port_range = 1024 65535

# Disable TCP slow‑start after idle (helps with burst traffic)
net.ipv4.tcp_slow_start_after_idle = 0

These settings are typically placed in /etc/sysctl.conf and applied with sysctl -p. Additionally, increase the per‑user limits in /etc/security/limits.conf (e.g., * soft nofile 65535 and * hard nofile 65535) and ensure the shell’s ulimit -n reflects the new values.
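The limits.conf entries look like the fragment below. One caveat worth stating as an assumption about typical setups: if nginx runs as a systemd service, systemd applies its own LimitNOFILE setting and ignores the PAM limits, so a drop-in override may be needed as well (the override path here assumes a standard systemd layout):

```
# /etc/security/limits.conf
*    soft    nofile    65535
*    hard    nofile    65535

# /etc/systemd/system/nginx.service.d/override.conf
[Service]
LimitNOFILE=65535
```

After adding the drop-in, reload with systemctl daemon-reload and restart nginx for the new limit to take effect.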

Memory and Network I/O Optimizations

When serving large static files or proxying high‑traffic streams, enabling zero‑copy transmission reduces CPU overhead.

sendfile on;   # enable zero‑copy file transmission
tcp_nopush on; # combine TCP packets for efficient network I/O

With sendfile enabled, data moves directly from the kernel’s page cache to the network socket without copying to user space, improving throughput for static content.
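These directives belong in the http context; a minimal sketch tying them to the earlier kernel tuning (the backlog parameter on listen is an addition here, used so the per-socket accept queue actually matches the raised net.core.somaxconn):

```nginx
http {
    sendfile   on;    # zero-copy: kernel page cache -> socket
    tcp_nopush on;    # fill full packets before sending (works with sendfile)

    server {
        listen 80 backlog=65535;  # match net.core.somaxconn
        # site configuration here
    }
}
```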

Tags: performance, operations, Nginx, Sysctl, Tuning
Written by Architect Chen

Sharing over a decade of architecture experience from Baidu, Alibaba, and Tencent.