How to Tune Nginx for Million‑Level Concurrency: Practical Configurations

This guide explains how to configure Nginx and Linux kernel parameters to support up to a million concurrent connections, covering worker processes, connection limits, system file‑descriptor settings, caching strategies, static file handling, and key reverse‑proxy directives with concrete code examples.

Nginx Concurrency Basics

In high‑traffic architectures, Nginx is typically the first entry point. The theoretical connection ceiling is roughly worker_processes × worker_connections (note that when proxying, each client request also consumes an upstream connection, so effective client capacity is closer to half that). For a 16‑core CPU, a common configuration is:

worker_processes  auto;

events {
    use epoll;
    worker_connections  100000;  # 100k per worker
    multi_accept        on;
}

This yields 16 × 100,000 = 1,600,000 possible connections, theoretically enough for a million‑level load, provided the OS and Nginx itself permit that many file descriptors (see the sketch below).
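
Nginx enforces its own per‑process descriptor limit on top of the OS limit; it can be raised in the main context of nginx.conf. A minimal sketch, with an illustrative value sized above worker_connections:

# A proxied request consumes two descriptors (client side + upstream side),
# so give each worker headroom beyond worker_connections.
worker_rlimit_nofile 200000;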

System‑Level Optimizations

Linux defaults often become bottlenecks. Adjust the kernel to allow more open files and faster socket recycling:

# Core kernel tweaks (/etc/sysctl.conf)
net.ipv4.tcp_tw_reuse = 1                  # reuse TIME_WAIT sockets for new outbound connections
net.ipv4.ip_local_port_range = 1024 65535  # enlarge the ephemeral port range
net.core.somaxconn = 65535                 # increase the accept backlog queue
net.core.netdev_max_backlog = 65535        # enlarge the NIC receive backlog
fs.file-max = 2097152                      # raise the system‑wide file‑descriptor limit
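
These values typically live in /etc/sysctl.conf (or a drop‑in under /etc/sysctl.d/); the per‑process open‑file limit for the nginx user must be raised separately. A sketch, with illustrative values:

# Apply the kernel settings without rebooting
sysctl -p

# /etc/security/limits.conf — per-process fd limit for the nginx user
nginx  soft  nofile  1048576
nginx  hard  nofile  1048576

On systemd‑managed hosts, the unit's LimitNOFILE= setting takes precedence over limits.conf, so set it in the nginx service unit (or a drop‑in) as well.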

Ensure sufficient RAM: each connection consumes tens to hundreds of kilobytes (Nginx buffers plus kernel socket buffers), so a million connections can require tens of gigabytes of memory, e.g. 1,000,000 × 50 KB ≈ 48 GB.

Cache and Static Content Strategies

To keep CPU usage low, offload static resources and cache dynamic responses. Static files can be served directly with sendfile and the kernel page cache. For dynamic content, place an Nginx cache in front of the backend so that only cache misses reach the application.

proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=STATIC:1g
                 max_size=20g inactive=1h use_temp_path=off;

server {
    location /api/ {
        proxy_pass http://backend;
        proxy_cache STATIC;
        proxy_cache_valid 200 302 10m;
        proxy_cache_valid 404 1m;
        proxy_cache_use_stale error timeout updating;
        add_header X-Cache $upstream_cache_status;
    }

    location /static/ {
        root /data/www;
        expires 30d;
    }
}
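
The X-Cache header added above makes hit/miss behavior easy to verify with a plain curl -I request. The sendfile path mentioned earlier is enabled at the http level; a sketch of commonly paired directives (cache sizes are illustrative):

http {
    sendfile    on;   # let the kernel copy static files, skipping userspace buffers
    tcp_nopush  on;   # with sendfile, send full packets rather than partial ones
    tcp_nodelay on;   # do not delay small responses on keepalive connections

    # Cache descriptors and metadata for frequently served static files
    open_file_cache          max=100000 inactive=20s;
    open_file_cache_valid    30s;
    open_file_cache_min_uses 2;
    open_file_cache_errors   on;
}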

Key HTTP / Reverse‑Proxy Settings

Make each connection lightweight by tightening timeouts, right‑sizing buffers, and buffering log writes:

# Connection handling
keepalive_timeout       60;
keepalive_requests      10000;
client_body_timeout     10;
client_header_timeout   10;
send_timeout            10;
client_max_body_size    10m;
client_body_buffer_size 64k;
client_header_buffer_size 4k;
large_client_header_buffers 4 16k;

# Logging (buffered writes reduce disk I/O; assumes a "main" log_format is defined)
access_log /var/log/nginx/access.log main buffer=64k flush=1s;
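
The upstream side deserves the same treatment: reusing backend connections avoids a TCP handshake per proxied request. A sketch assuming the http://backend upstream referenced earlier (addresses and pool size are illustrative):

upstream backend {
    server 10.0.0.1:8080;  # illustrative backend addresses
    server 10.0.0.2:8080;
    keepalive 512;         # idle upstream connections cached per worker
}

# Inside the /api/ location from the caching example:
proxy_http_version 1.1;            # upstream keepalive requires HTTP/1.1
proxy_set_header   Connection "";  # clear Connection so "close" is not forwarded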

These directives balance performance and resource consumption, allowing Nginx to sustain high request rates while minimizing CPU and I/O load.

Summary

By aligning Nginx worker settings with kernel limits, expanding file‑descriptor capacity, caching aggressively, and fine‑tuning HTTP timeouts and buffers, a single Nginx instance can sustain on the order of a million concurrent connections. The configuration snippets above are a practical starting point for production deployments.

Tags: Caching, Performance Tuning, High Concurrency, Nginx, Reverse Proxy, Linux Kernel