Unlock Nginx’s Full Potential: High‑Performance Reverse Proxy, Load Balancing & Cache Tuning

This guide walks through the latest Nginx 1.26.x features, environment prerequisites, compilation options, worker and kernel tuning, reverse‑proxy setup, load‑balancing algorithms, advanced caching strategies, TLS hardening, high‑availability with Keepalived, common pitfalls, monitoring, and troubleshooting techniques for production‑grade deployments.


Overview

Nginx 1.26.x (the current stable branch) adds HTTP/3, dynamic module loading and improved memory handling over the older 1.24.x branch. In micro‑service architectures Nginx is the front door for TLS termination, load balancing, caching and request forwarding. A mis‑configured instance becomes a bottleneck regardless of backend performance.

Key technical traits

Event‑driven, non‑blocking I/O – uses epoll/kqueue; optional aio threads offloads file I/O.

Low memory per connection – ~10‑20 KB, so 10 000 connections need ~200 MB.

Zero‑copy transfer – sendfile moves data in kernel space.

Modular core – functionality added via compiled‑in or dynamic modules.
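The per‑connection memory figure can be sanity‑checked with quick shell arithmetic; this is a rough upper‑bound estimate, not a measurement:

```shell
# rough upper-bound estimate: 10 000 connections at ~20 KB each
CONNS=10000
PER_CONN_KB=20
TOTAL_MB=$(( CONNS * PER_CONN_KB / 1024 ))
echo "${TOTAL_MB} MB"    # roughly the ~200 MB quoted above
```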

Environment Requirements

OS: Ubuntu 22.04+ or CentOS Stream 8+ (CentOS 7 EOL).

Nginx: 1.26.x stable (the older 1.24.x branch is still usable).

OpenSSL: 3.0+ (TLS 1.3 support).

CPU: 4+ cores (worker processes should match physical cores).

Memory: 4 GB+ (8 GB+ recommended for caching).

Disk: SSD, 50 GB+; cache directory on a dedicated mount.

Detailed Steps

Compile Nginx from source

# /opt/scripts/build-nginx.sh
#!/bin/bash
set -euo pipefail

NGINX_VERSION="1.26.2"
BUILD_DIR="/tmp/nginx-build"
INSTALL_PREFIX="/etc/nginx"

# Install build dependencies (Debian/Ubuntu example)
apt-get update
apt-get install -y \
    build-essential \
    git \
    libpcre3-dev \
    libssl-dev \
    zlib1g-dev \
    libgd-dev \
    libgeoip-dev

mkdir -p "${BUILD_DIR}"
cd "${BUILD_DIR}"

wget "https://nginx.org/download/nginx-${NGINX_VERSION}.tar.gz"
tar -xzf "nginx-${NGINX_VERSION}.tar.gz"
cd "nginx-${NGINX_VERSION}"

# ngx_cache_purge is a third-party module; fetch it before configuring
git clone --depth 1 https://github.com/FRiCKLE/ngx_cache_purge.git "${BUILD_DIR}/ngx_cache_purge"

./configure \
    --prefix=/etc/nginx \
    --sbin-path=/usr/sbin/nginx \
    --modules-path=/usr/lib64/nginx/modules \
    --conf-path=/etc/nginx/nginx.conf \
    --error-log-path=/var/log/nginx/error.log \
    --http-log-path=/var/log/nginx/access.log \
    --pid-path=/var/run/nginx.pid \
    --with-threads \
    --with-file-aio \
    --with-http_v2_module \
    --with-http_v3_module \
    --with-http_ssl_module \
    --with-http_realip_module \
    --with-http_stub_status_module \
    --with-http_gzip_static_module \
    --add-module="${BUILD_DIR}/ngx_cache_purge" \
    --with-stream \
    --with-stream_ssl_module \
    --with-pcre-jit \
    --with-ld-opt="-Wl,-rpath,/usr/local/lib"

make -j "$(nproc)"
make install

Global worker and connection tuning

# /etc/nginx/nginx.conf (global section)
user nginx;
worker_processes auto;            # match CPU cores
worker_cpu_affinity auto;          # bind workers to CPUs
worker_rlimit_nofile 65536;       # must be >= worker_connections * 2
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 16384;    # total connections per worker
    use epoll;                   # most efficient on Linux
    multi_accept on;            # accept many connections per syscall
}
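A rough capacity check for the settings above. Each proxied client consumes one downstream and one upstream socket, so the effective client capacity is about half the raw connection count (the 4‑core box is an assumption for illustration):

```shell
# effective concurrent proxied clients ≈ workers × worker_connections / 2
WORKERS=4                 # worker_processes auto on an assumed 4-core box
WORKER_CONNECTIONS=16384  # matches the events block above
CLIENTS=$(( WORKERS * WORKER_CONNECTIONS / 2 ))
echo "${CLIENTS} concurrent proxied clients"   # → 32768
```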

Kernel parameter tuning (sysctl)

# /etc/sysctl.d/99-nginx-tuning.conf
net.core.somaxconn = 65535
net.ipv4.tcp_max_syn_backlog = 65535
net.ipv4.tcp_tw_reuse = 1
fs.file-max = 1000000
net.core.netdev_max_backlog = 65535
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.ipv4.ip_local_port_range = 10000 65535

# Apply
sysctl -p /etc/sysctl.d/99-nginx-tuning.conf

Reverse proxy – upstream definition

# /etc/nginx/conf.d/upstream.conf
upstream backend_api {
    keepalive 64;                     # idle TCP connections to backends
    keepalive_requests 1000;          # max requests per keepalive connection
    keepalive_timeout 60s;
    server 10.0.1.11:8080 weight=3 max_fails=3 fail_timeout=30s;
    server 10.0.1.12:8080 weight=3 max_fails=3 fail_timeout=30s;
    server 10.0.1.13:8080 weight=2 max_fails=3 fail_timeout=30s;
    server 10.0.1.14:8080 backup;    # used only when all primary servers fail
}

upstream backend_static {
    least_conn;                       # send request to server with fewest active connections
    keepalive 32;
    server 10.0.1.21:80 weight=1;
    server 10.0.1.22:80 weight=1;
}

Proxy_pass details

The trailing slash in proxy_pass changes URI rewriting: with proxy_pass http://backend/; the matched location prefix is replaced by the URI part of proxy_pass (effectively stripping the prefix); with proxy_pass http://backend; the full original request URI is passed through unchanged.

# /etc/nginx/conf.d/proxy-detail.conf
server {
    listen 80;
    server_name api.example.com;

    location /api/ {
        proxy_pass http://backend_api/;   # trailing slash – strip "/api"
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_http_version 1.1;
        proxy_set_header Connection "";   # prevent upstream from closing keepalive
        proxy_connect_timeout 5s;
        proxy_read_timeout 60s;
        proxy_send_timeout 60s;
        proxy_buffering on;
        proxy_buffer_size 16k;
        proxy_buffers 8 32k;
        proxy_busy_buffers_size 64k;
    }
}
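The prefix‑stripping behaviour can be modelled with plain shell string handling — a toy illustration of the rule, not nginx's actual parser:

```shell
# models: location /api/ { proxy_pass http://backend_api/; }
rewrite_uri() {
    local uri="$1" prefix="/api/"
    # the matched prefix is replaced by the URI part of proxy_pass ("/")
    printf '%s\n' "/${uri#"$prefix"}"
}
rewrite_uri /api/users/42    # → /users/42
```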

WebSocket proxy

# /etc/nginx/conf.d/websocket.conf
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

upstream backend_ws {
    server 10.0.1.15:8080;    # example backends, adjust to your environment
    server 10.0.1.16:8080;
    keepalive 32;
}

server {
    listen 443 ssl;
    server_name ws.example.com;
    ssl_certificate /etc/nginx/ssl/ws.example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/ws.example.com.key;

    location /ws/ {
        proxy_pass http://backend_ws;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_read_timeout 3600s;   # long‑lived connections
        proxy_buffering off;       # real‑time forwarding
    }
}

gRPC proxy

# /etc/nginx/conf.d/grpc.conf
upstream grpc_backend {
    server 10.0.1.11:50051;
    server 10.0.1.12:50051;
    keepalive 32;
}

server {
    listen 443 ssl http2;
    server_name grpc.example.com;
    ssl_certificate /etc/nginx/ssl/grpc.example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/grpc.example.com.key;
    include /etc/nginx/conf.d/ssl-base.conf;
    include /etc/nginx/conf.d/ssl-performance.conf;

    location / {
        grpc_pass grpc://grpc_backend;
        error_page 502 = /error502grpc;
    }

    location = /error502grpc {
        internal;
        default_type application/grpc;
        add_header grpc-status 14;   # UNAVAILABLE
        add_header content-length 0;
        return 204;
    }
}

Load‑Balancing Strategies

Round‑Robin, Weighted, IP‑hash

# /etc/nginx/conf.d/lb-strategies.conf
upstream rr_backend {
    server 10.0.1.11:8080;
    server 10.0.1.12:8080;
    server 10.0.1.13:8080;
}

upstream weighted_backend {
    server 10.0.1.11:8080 weight=2;   # 50 % traffic
    server 10.0.1.12:8080 weight=1;   # 25 % traffic
    server 10.0.1.13:8080 weight=1;   # 25 % traffic
}

upstream iphash_backend {
    ip_hash;
    server 10.0.1.11:8080;
    server 10.0.1.12:8080;
    server 10.0.1.13:8080;
}
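The stickiness of ip_hash can be illustrated with a toy hash in shell. Nginx's real implementation also hashes only the first three octets of an IPv4 address, but uses a different hash function, so this is an analogy, not the actual algorithm:

```shell
servers=(10.0.1.11 10.0.1.12 10.0.1.13)
pick() {
    local net="${1%.*}" sum=0 part   # keep only the first three octets
    for part in ${net//./ }; do sum=$(( sum + part )); done
    echo "${servers[sum % ${#servers[@]}]}"
}
pick 203.0.113.7     # two clients in the same /24 ...
pick 203.0.113.99    # ... always land on the same backend
```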

Least‑conn and Consistent Hash

# /etc/nginx/conf.d/lb-advanced.conf
upstream leastconn_backend {
    least_conn;
    server 10.0.1.11:8080;
    server 10.0.1.12:8080;
    server 10.0.1.13:8080;
    keepalive 32;
}

upstream hash_backend {
    hash $request_uri consistent;   # consistent hashing for cache affinity
    server 10.0.1.11:8080;
    server 10.0.1.12:8080;
    server 10.0.1.13:8080;
}

upstream user_hash_backend {
    hash $http_x_user_id consistent;   # route by custom header (e.g., user ID)
    server 10.0.1.11:8080;
    server 10.0.1.12:8080;
}

Health checks

Open‑source Nginx only provides passive health checks (mark a server down after max_fails failures). Active checks require Nginx Plus or the third‑party nginx_upstream_check_module.

# /etc/nginx/conf.d/health-check.conf
upstream backend_with_check {
    server 10.0.1.11:8080 max_fails=3 fail_timeout=30s;
    server 10.0.1.12:8080 max_fails=3 fail_timeout=30s;
    server 10.0.1.13:8080 max_fails=3 fail_timeout=30s;
    keepalive 32;
}
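If nginx is compiled with the third‑party nginx_upstream_check_module, an active check could look like the sketch below. Directive names follow that module's documentation; the /health endpoint is an assumption about the backend:

```nginx
upstream backend_active_check {
    server 10.0.1.11:8080;
    server 10.0.1.12:8080;
    # probe every 3 s; 2 successes mark a server up, 3 failures mark it down
    check interval=3000 rise=2 fall=3 timeout=1000 type=http;
    check_http_send "HEAD /health HTTP/1.0\r\n\r\n";
    check_http_expect_alive http_2xx http_3xx;
}
```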

Cache Optimization

Cache zones

# /etc/nginx/nginx.conf (http block)
proxy_cache_path /data/nginx/cache \
    levels=1:2 \
    keys_zone=cache_main:100m \
    max_size=10g \
    inactive=60m \
    use_temp_path=off;

proxy_cache_path /data/nginx/cache_static \
    levels=1:2 \
    keys_zone=cache_static:50m \
    max_size=20g \
    inactive=7d \
    use_temp_path=off;
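With levels=1:2, nginx names each cache file after the MD5 of its cache key: the last hex character becomes the first directory level and the two preceding characters the second. The on‑disk path of a cached response can therefore be computed by hand (the key below is an example matching the $scheme$host$request_uri key used later):

```shell
key='httpwww.example.com/api/users'          # example $scheme$host$request_uri
hash=$(printf '%s' "$key" | md5sum | awk '{print $1}')
l1="${hash:31:1}"   # last character        → first directory level
l2="${hash:29:2}"   # two preceding chars   → second directory level
echo "/data/nginx/cache/${l1}/${l2}/${hash}"
```

This is handy when you need to verify that a specific URL is cached, or to delete a single entry without the purge module.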

API cache policy

# /etc/nginx/conf.d/cache-policy.conf
server {
    listen 80;
    server_name www.example.com;

    location /api/ {
        proxy_pass http://backend_api/;
        proxy_cache cache_main;
        proxy_cache_key "$scheme$host$request_uri";
        proxy_cache_valid 200 302 10m;
        proxy_cache_valid 404 1m;
        proxy_cache_valid any 30s;
        proxy_cache_bypass $cookie_session $http_authorization;
        proxy_no_cache $cookie_session $http_authorization;
        proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;
        proxy_cache_lock on;
        proxy_cache_lock_timeout 5s;
        add_header X-Cache-Status $upstream_cache_status;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_set_header Host $host;
    }
}

Static asset caching

# /etc/nginx/conf.d/static-cache.conf
server {
    listen 80;
    server_name static.example.com;
    root /var/www/static;

    location ~* \.(js|css|png|jpg|jpeg|gif|ico|svg|woff|woff2|ttf|eot)$ {
        expires 1y;
        add_header Cache-Control "public, immutable";
        gzip_static on;
        sendfile on;
        tcp_nopush on;
    }

    location ~* \.(html)$ {
        expires -1;
        add_header Cache-Control "no-cache, no-store, must-revalidate";
    }
}

Cache warm‑up and purge

# /opt/scripts/cache-warm.sh
#!/bin/bash
set -euo pipefail
NGINX_HOST="http://127.0.0.1"
URL_LIST="/opt/scripts/warm-urls.txt"
CONCURRENCY=10

warm_cache() {
    local url="$1"
    curl -s -o /dev/null -w '%{http_code} %{url_effective}\n' \
        -H "Host: www.example.com" \
        "${NGINX_HOST}${url}"
}
export -f warm_cache
xargs -P "${CONCURRENCY}" -I{} bash -c 'warm_cache "$@"' _ {} < "${URL_LIST}"

echo "Cache warm‑up completed"
# /etc/nginx/conf.d/cache-purge.conf (requires ngx_cache_purge module)
server {
    listen 80;
    server_name cache-admin.internal;
    allow 10.0.0.0/8;
    allow 192.168.0.0/16;
    deny all;

    location ~ /purge(/.*) {
        proxy_cache_purge cache_main "$scheme$host$1";
    }
}

SSL/TLS Optimization

TLS 1.3 and cipher suite

# /etc/nginx/conf.d/ssl-base.conf
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384;
ssl_prefer_server_ciphers off;
ssl_dhparam /etc/nginx/ssl/dhparam.pem;   # generate with: openssl dhparam -out /etc/nginx/ssl/dhparam.pem 2048
ssl_ecdh_curve X25519:prime256v1:secp384r1;

Session cache, tickets and OCSP stapling

# /etc/nginx/conf.d/ssl-performance.conf
ssl_session_cache shared:SSL:10m;   # ~40 000 sessions
ssl_session_timeout 1d;
ssl_session_tickets off;          # enable only for single‑node deployments
ssl_stapling on;
ssl_stapling_verify on;
ssl_trusted_certificate /etc/nginx/ssl/chain.pem;
resolver 8.8.8.8 8.8.4.4 valid=300s;
resolver_timeout 5s;

HTTP/2 and HTTP/3

# /etc/nginx/conf.d/http2-http3.conf
server {
    listen 443 ssl;
    http2 on;                     # Nginx 1.25.1+ syntax
    listen 443 quic reuseport;    # HTTP/3 (requires --with-http_v3_module)
    server_name www.example.com;
    ssl_certificate /etc/nginx/ssl/www.example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/www.example.com.key;
    include /etc/nginx/conf.d/ssl-base.conf;
    include /etc/nginx/conf.d/ssl-performance.conf;
    add_header Alt-Svc "h3=\":443\"; ma=86400";   # advertise HTTP/3 support
    add_header Strict-Transport-Security "max-age=15768000; includeSubDomains" always;
    location / { proxy_pass http://backend_api; }
}

# HTTP → HTTPS redirect
server {
    listen 80;
    server_name www.example.com;
    return 301 https://$host$request_uri;
}

Performance & Security Hardening

Performance configuration

# /etc/nginx/conf.d/performance.conf
http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    aio threads=default;
    aio_write on;
    open_file_cache max=10000 inactive=30s;
    open_file_cache_valid 60s;
    open_file_cache_min_uses 2;
    open_file_cache_errors on;
    gzip on;
    gzip_comp_level 6;
    gzip_min_length 1024;
    gzip_vary on;
    gzip_proxied any;
    gzip_types text/plain text/css text/xml text/javascript application/json application/javascript application/xml application/rss+xml application/atom+xml image/svg+xml font/truetype font/opentype application/vnd.ms-fontobject;
}
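gzip_min_length exists because very small responses can actually grow under gzip — the format adds roughly 18 bytes of header and trailer. A quick demonstration:

```shell
# compress a 10-byte body and compare sizes
raw=$(printf 'abcdefghij' | wc -c)
packed=$(printf 'abcdefghij' | gzip -6 | wc -c)
echo "raw=${raw}B gzipped=${packed}B"   # the gzipped output is larger
```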

Security hardening

# /etc/nginx/conf.d/security.conf
# limit_req_zone / limit_conn_zone must be defined at http level;
# conf.d files are included in the http context, so keep them outside server{}
limit_req_zone $binary_remote_addr zone=login_limit:10m rate=5r/m;
limit_conn_zone $binary_remote_addr zone=conn_limit:10m;

server {
    server_tokens off;                               # hide version
    if ($request_method !~ ^(GET|POST|HEAD|PUT|DELETE|PATCH|OPTIONS)$) { return 405; }
    add_header X-Frame-Options SAMEORIGIN always;
    add_header X-Content-Type-Options nosniff always;
    add_header X-XSS-Protection "1; mode=block" always;
    client_max_body_size 10m;
    large_client_header_buffers 4 8k;
    location /login {
        limit_req zone=login_limit burst=10 nodelay;
        limit_conn conn_limit 5;
        proxy_pass http://backend_api;
    }
    location ~ /\. { deny all; access_log off; log_not_found off; }
    location ~* \.(bak|conf|dist|fla|inc|ini|log|psd|sh|sql|swp)$ { deny all; }
}

High Availability (Keepalived)

# /etc/keepalived/keepalived.conf (master node example)
vrrp_script check_nginx {
    script "/opt/scripts/check-nginx.sh"
    interval 2
    weight -20
    fall 2
    rise 2
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass nginx_ha_2024
    }
    virtual_ipaddress {
        192.168.1.100/24
    }
    track_script {
        check_nginx
    }
}
# /opt/scripts/check-nginx.sh – health‑check script
#!/bin/bash
set -euo pipefail

if ! pgrep -x nginx > /dev/null; then
    systemctl start nginx
    sleep 2
    pgrep -x nginx > /dev/null || exit 1
fi

http_code=$(curl -s -o /dev/null --max-time 3 -w "%{http_code}" http://127.0.0.1/health || echo 000)
if [ "$http_code" != "200" ]; then
    exit 1
fi
exit 0
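The master config above needs a counterpart on the standby node; a sketch mirroring the master except for state and priority (values are illustrative):

```
# /etc/keepalived/keepalived.conf (backup node)
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 51        # must match the master
    priority 90                 # lower than the master's 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass nginx_ha_2024
    }
    virtual_ipaddress {
        192.168.1.100/24
    }
    track_script {
        check_nginx
    }
}
```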

Best Practices & Caveats

End proxy_pass with a trailing slash only when you want the location prefix stripped. worker_connections counts both client and upstream sockets, so effective concurrent clients ≈ worker_connections / 2.

A proxy_cache_path keys_zone of 100 MiB stores roughly 800 k keys; the actual cached content lives on disk.

Common error patterns

502 Bad Gateway – upstream unavailable or connection timeout; increase proxy_connect_timeout and verify upstream health.

504 Gateway Timeout – upstream response too slow; raise proxy_read_timeout or optimise backend.

413 Request Entity Too Large – request exceeds client_max_body_size; increase the limit.

worker_connections are not enough – raise worker_connections and worker_rlimit_nofile.

open() failed (24: Too many open files) – increase the system-wide fs.file-max and the per-process limit (worker_rlimit_nofile / systemd LimitNOFILE).

Monitoring & Troubleshooting

Detailed log format

# /etc/nginx/nginx.conf – log format
log_format detailed '$remote_addr - $remote_user [$time_local] "$request" '
    '$status $body_bytes_sent "$http_referer" "$http_user_agent" '
    'rt=$request_time uct=$upstream_connect_time uht=$upstream_header_time urt=$upstream_response_time cs=$upstream_cache_status ua=$upstream_addr';
access_log /var/log/nginx/access.log detailed buffer=32k flush=5s;
error_log /var/log/nginx/error.log warn;
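The rt= field makes slow-request hunting a one-liner with awk. The sample lines below are fabricated to match the format above:

```shell
cat > /tmp/sample-access.log <<'EOF'
203.0.113.7 - - [01/Jan/2025:12:00:00 +0000] "GET /api/users HTTP/1.1" 200 512 "-" "curl/8.0" rt=0.042 uct=0.001 uht=0.040 urt=0.041 cs=HIT ua=10.0.1.11:8080
203.0.113.8 - - [01/Jan/2025:12:00:01 +0000] "GET /api/report HTTP/1.1" 200 2048 "-" "curl/8.0" rt=1.870 uct=0.002 uht=1.860 urt=1.865 cs=MISS ua=10.0.1.12:8080
EOF

# print request time and URI for requests slower than 1 s
awk '{ for (i = 1; i <= NF; i++) if ($i ~ /^rt=/) { split($i, a, "="); if (a[2] + 0 > 1) print a[2], $7 } }' /tmp/sample-access.log
# → 1.870 /api/report
```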

stub_status endpoint

# /etc/nginx/conf.d/status.conf
server {
    listen 127.0.0.1:8080;
    server_name localhost;
    location /nginx_status {
        stub_status;
        access_log off;
        allow 127.0.0.1;
        allow 10.0.1.0/24;
        deny all;
    }
}

Typical output shows active connections, accepts, handled, requests and the breakdown of reading/writing/waiting.
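The counters can be post-processed directly; for instance, requests divided by handled connections shows how well keep-alive is working (the sample numbers below are illustrative):

```shell
cat > /tmp/status.txt <<'EOF'
Active connections: 291
server accepts handled requests
 16630948 16630948 31070465
Reading: 6 Writing: 179 Waiting: 106
EOF

# requests per handled connection; values near 1.0 mean keep-alive is unused
awk 'NR == 3 { printf "%.1f requests/connection\n", $3 / $2 }' /tmp/status.txt
```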

Prometheus exporter

# Install nginx‑prometheus‑exporter
wget https://github.com/nginxinc/nginx-prometheus-exporter/releases/download/v1.3.0/nginx-prometheus-exporter_1.3.0_linux_amd64.tar.gz
 tar -xzf nginx-prometheus-exporter_1.3.0_linux_amd64.tar.gz
./nginx-prometheus-exporter \
    -nginx.scrape-uri=http://127.0.0.1:8080/nginx_status \
    -web.listen-address=:9113
# Prometheus alert rules (nginx.yml)
groups:
- name: nginx
  rules:
  - alert: NginxHighActiveConnections
    expr: nginx_connections_active > 10000
    for: 5m
    labels:
      severity: warning
    annotations:
      summary: "Nginx active connections too high"
      description: "Current active connections {{ $value }} exceed 10 000"
  # per-status request counts are not exposed by stub_status; this rule
  # assumes a log- or VTS-based exporter that provides a status label
  - alert: NginxHighErrorRate
    expr: rate(nginx_http_requests_total{status=~"5.."}[5m]) / rate(nginx_http_requests_total[5m]) > 0.05
    for: 2m
    labels:
      severity: critical
    annotations:
      summary: "Nginx 5xx error rate high"
      description: "5xx error rate {{ $value }} > 5%"

Key metrics & thresholds

Active connections – normal < 5 000, alert > 10 000.

Request latency – normal < 200 ms, alert > 1 s (P99).

5xx error rate – normal < 0.1 %, alert > 1 %.

Upstream response time – normal < 100 ms, alert > 500 ms.

Cache hit rate – normal > 80 %, alert < 50 %.

Waiting (keep‑alive) connections – normal < 1 000, alert > 5 000.

Configuration backup script

# /opt/scripts/nginx-backup.sh
#!/bin/bash
set -euo pipefail

BACKUP_DIR="/opt/backups/nginx"
NGINX_CONF_DIR="/etc/nginx"
RETENTION_DAYS=30
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
BACKUP_FILE="${BACKUP_DIR}/nginx_conf_${TIMESTAMP}.tar.gz"

mkdir -p "${BACKUP_DIR}"
# Exclude private keys from the regular backup
tar -czf "${BACKUP_FILE}" \
    --exclude="${NGINX_CONF_DIR}/ssl/*.key" \
    "${NGINX_CONF_DIR}"

echo "Backup saved to ${BACKUP_FILE}"
# Cleanup old backups
find "${BACKUP_DIR}" -name "nginx_conf_*.tar.gz" -mtime +${RETENTION_DAYS} -delete

Advanced Learning Paths

OpenResty / njs – embed Lua or JavaScript for JWT validation, dynamic routing, A/B testing.

Nginx Unit – a dynamic application server for Python, PHP, Node.js without reloads.

eBPF + Nginx – kernel‑level tracing for zero‑overhead performance and security monitoring.

References

Nginx official documentation – authoritative command reference.

Nginx source code – deepest insight into internal mechanisms.

nginx‑prometheus‑exporter – official Prometheus integration.

Mozilla SSL Configuration Generator – up‑to‑date TLS recommendations.

Tags: Caching, Nginx, reverse proxy, load balancing, performance tuning
Written by

MaGe Linux Operations

Founded in 2009, MaGe Education is a top Chinese high‑end IT training brand. Its graduates earn 12K+ RMB salaries, and the school has trained tens of thousands of students. It offers high‑pay courses in Linux cloud operations, Python full‑stack, automation, data analysis, AI, and Go high‑concurrency architecture. Thanks to quality courses and a solid reputation, it has talent partnerships with numerous internet firms.

Thoughtful readers leave field notes, pushback, and hard-won operational detail here.