
Configuring Nginx Reverse Proxy for Persistent (Keep‑Alive) Connections and Performance Optimization

This article explains how to configure Nginx as a reverse proxy to maintain long‑lived HTTP/1.1 keep‑alive connections between client and Nginx and between Nginx and upstream servers, covering required directives, upstream and location settings, performance implications for high QPS workloads, and advanced WebSocket handling.

JD Tech

Starting with HTTP/1.1, the protocol supports persistent (keep‑alive) connections, allowing multiple requests and responses to share a single TCP connection. When Nginx acts as a reverse proxy or load balancer, however, long‑lived client‑side connections are often turned into short‑lived connections toward the backend, so specific Nginx settings are needed to preserve keep‑alive behavior end to end.

Requirements: (i) the client‑to‑Nginx connection must stay alive, which requires the client to send a Connection: keep-alive header and Nginx to have keep‑alive enabled; (ii) the Nginx‑to‑upstream connection must also stay alive, with Nginx acting as an HTTP client toward the upstream.

HTTP configuration: Nginx enables client‑side keep‑alive by default. For special scenarios you can tune parameters such as:

http {
    keepalive_timeout 120s;     # client‑side idle timeout; 0 disables keep‑alive
    keepalive_requests 10000;   # max requests served over one connection
}

Increasing keepalive_requests is useful for high QPS workloads to avoid frequent connection creation and the resulting TIME_WAIT sockets.
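The mechanism being preserved here can be demonstrated with Python's standard library alone: an HTTP/1.1 server keeps the TCP connection open after a response, and the client reuses the same socket for the next request. This is a minimal sketch of keep‑alive reuse in general, not a test of Nginx itself; the local server and handler are stand‑ins.

```python
# Sketch: observe HTTP/1.1 keep-alive reuse with stdlib only.
import http.client
import threading
from http.server import HTTPServer, BaseHTTPRequestHandler

class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"   # HTTP/1.1 implies keep-alive by default
    def do_GET(self):
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):   # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("GET", "/")
conn.getresponse().read()        # must drain the body before reusing
first_sock = conn.sock           # socket after the first request
conn.request("GET", "/")
conn.getresponse().read()
reused = conn.sock is first_sock # True when the TCP link was kept alive
print(reused)
server.shutdown()
```

If the server sent Connection: close (or spoke HTTP/1.0 without keep‑alive), the client would have to open a fresh socket per request, which at high QPS is exactly what produces the TIME_WAIT pile‑up described above.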

Upstream configuration: The keepalive directive inside an upstream block sets the maximum number of idle keep‑alive connections to upstream servers that each worker process keeps in its cache; when this number is exceeded, the least recently used connections are closed. Example:

http {
    upstream backend {
        server 192.168.0.1:8080 weight=1 max_fails=2 fail_timeout=30s;
        server 192.168.0.2:8080 weight=1 max_fails=2 fail_timeout=30s;
        keepalive 300;   # important: number of idle connections
    }
    server {
        listen 8080 default_server;
        location / {
            proxy_pass http://backend;
            proxy_http_version 1.1;               # upstream keep‑alive requires HTTP/1.1
            proxy_set_header Connection "";       # clear the default "Connection: close"
        }
    }
}

The keepalive value should be sized from expected QPS and average response time; a common rule of thumb is 10‑30 % of the estimated number of concurrent upstream connections. Remember that the limit applies per worker process, not globally.
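That rule of thumb reduces to quick arithmetic: by Little's law, the number of in‑flight requests is roughly QPS times average response time, and the pool is sized as a fraction of that. The QPS and latency figures below are made‑up assumptions for illustration.

```python
# Estimate an upstream keepalive pool size from expected load.
qps = 10_000            # assumed peak requests per second
avg_response_s = 0.05   # assumed average upstream response time (50 ms)

# Little's law: concurrent in-flight requests ~= arrival rate x service time
concurrent = qps * avg_response_s

pool_low  = round(concurrent * 0.10)   # 10 % of estimated concurrency
pool_high = round(concurrent * 0.30)   # 30 % of estimated concurrency

print(concurrent, pool_low, pool_high)  # 500.0 50 150
```

Since the keepalive cache is maintained per worker process, a per‑worker figure would further divide this by worker_processes.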

Location configuration: Use proxy_http_version 1.1 and clear (or explicitly set) the Connection header so that Nginx forwards requests over a persistent connection.

Advanced method (WebSocket / Upgrade handling):

http {
    map $http_upgrade $connection_upgrade {
        default upgrade;
        ''      close;
    }
    upstream backend {
        server 192.168.0.1:8080 weight=1 max_fails=2 fail_timeout=30s;
        server 192.168.0.2:8080 weight=1 max_fails=2 fail_timeout=30s;
        keepalive 300;
    }
    server {
        listen 8080 default_server;
        location / {
            proxy_pass http://backend;
            proxy_connect_timeout 15s;
            proxy_read_timeout    60s;
            proxy_send_timeout    12s;
            proxy_http_version    1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $connection_upgrade;
        }
    }
}

This map makes the Connection header follow the client’s Upgrade header: requests carrying Upgrade are forwarded with Connection: upgrade, so Nginx correctly proxies WebSocket handshakes, while requests without it fall through to Connection: close.

Notes: proxy_set_header directives are inherited from the enclosing level (http → server → location) only when the current level defines none of its own; a single proxy_set_header at a lower level cancels all inherited headers, so keep related settings together to avoid surprises.
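The pitfall is easiest to see side by side. In this hypothetical server block (backend name and headers are illustrative), /a/ inherits both headers from the server level, while /b/ inherits nothing the moment it defines its own proxy_set_header:

```nginx
server {
    proxy_set_header Host      $host;
    proxy_set_header X-Real-IP $remote_addr;

    location /a/ {
        proxy_pass http://backend;
        # No proxy_set_header here: Host and X-Real-IP are inherited.
    }

    location /b/ {
        proxy_pass http://backend;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        # Defining any proxy_set_header cancels ALL inheritance,
        # so Host and X-Real-IP must be repeated if still needed:
        proxy_set_header Host      $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```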

References: the official Nginx documentation (Chinese translation), various performance‑tuning articles, and keep‑alive‑specific guides.

Tags: performance, load balancing, configuration, HTTP, nginx, reverse-proxy, keepalive
Written by

JD Tech

Official JD technology sharing platform. All the cutting‑edge JD tech, innovative insights, and open‑source solutions you’re looking for, all in one place.
