How Nginx Keepalive Works: Experiments on HTTP and TCP Timeouts
This article experimentally explores Nginx's keepalive settings—HTTP keepalive_timeout, proxy_read_timeout, and TCP so_keepalive—showing how different browsers, upstream delays, and TCP probes affect connection lifetimes and what practical steps administrators can take to troubleshoot unexpected disconnects.
For IT professionals, keepalive is a familiar network concept: keeping a connection alive. It appears in operating systems, web servers, load balancers, firewalls, and more.
Correctly managing connection state—when to keep it alive and when to close it—is essential for administrators. Keepalive includes HTTP Keepalive and TCP Keepalive, both crucial for connection persistence and timeout handling.
This article uses experiments to demonstrate Nginx keepalive-related parameters in various scenarios, helping readers gain a deeper understanding and learn how to handle abnormal disconnections.
Scenario 1: Nginx HTTP Keepalive
The keepalive_timeout parameter defines the maximum idle time after the last HTTP response before the server actively closes the connection. If no new request arrives within this period, Nginx closes the HTTP connection.
Experiment setup: Nginx serves static files with the default configuration, which sets keepalive_timeout 65; (65 seconds).
<code>http {
    include       mime.types;
    default_type  application/octet-stream;
    sendfile      on;
    #tcp_nopush   on;
    keepalive_timeout  65;
    server {
        listen       8000;
        server_name  localhost;
        location / {
            root   html;
            index  index.html index.htm;
        }
    }
}</code>
Using IE to access the URL, the page loads successfully, and it is refreshed again less than 65 seconds later. A tcpdump capture of port 8000 shows two HTTP requests, at 11:13:02 and 11:13:17, followed by Nginx sending a FIN packet at 11:14:22, exactly 65 seconds after the last request, closing the connection.
When the browser is changed to Firefox, the capture differs: Firefox sends TCP Keep‑Alive probe packets to the server. After the keepalive_timeout expires, Nginx still closes the TCP connection, demonstrating that Nginx does not process TCP Keep‑Alive probes; the operating system handles them.
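The server-side close seen in this capture can be reproduced without Nginx. The sketch below is a minimal stand-in, not Nginx itself: plain Python sockets, with a 1-second idle timeout playing the role of keepalive_timeout 65. The client makes one keep-alive request, stays idle past the timeout, and its next recv() returns an empty read, which is how the server's FIN appears at the socket level:

```python
import socket
import threading
import time

KEEPALIVE_TIMEOUT = 1.0  # stand-in for Nginx's keepalive_timeout 65

def serve_once(listener):
    """Answer requests on one connection, then close it after an idle period."""
    conn, _ = listener.accept()
    conn.settimeout(KEEPALIVE_TIMEOUT)
    try:
        while True:
            if not conn.recv(4096):          # wait for the next request
                break
            conn.sendall(b"HTTP/1.1 200 OK\r\n"
                         b"Content-Length: 2\r\n"
                         b"Connection: keep-alive\r\n\r\nok")
    except socket.timeout:
        pass                                 # idle too long: close, like Nginx's FIN
    finally:
        conn.close()

listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
threading.Thread(target=serve_once, args=(listener,), daemon=True).start()

client = socket.create_connection(listener.getsockname())
client.sendall(b"GET / HTTP/1.1\r\nHost: localhost\r\nConnection: keep-alive\r\n\r\n")
first = client.recv(4096)        # normal keep-alive response
time.sleep(1.5)                  # stay idle past the keepalive timeout
after_idle = client.recv(4096)   # b'': the server has already sent FIN
client.close()
```

On a real capture, this empty read corresponds to the FIN packet tcpdump showed at 11:14:22.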
Scenario 2: Abnormal Closure – Client Cannot Receive Response
When Nginx acts as a proxy and the upstream server responds slowly, the client may never receive a response. In this case, Nginx's proxy_read_timeout can expire before the upstream finishes, causing Nginx to close the upstream connection and return a 504 Gateway Timeout to the client.
Experiment setup: Browser → Nginx (proxy) → upstream service that sleeps 60 seconds before responding. proxy_read_timeout is set to 30 seconds, while keepalive_timeout remains at the default 65 seconds.
<code>upstream WLS {
    server 10.10.10.106:17101;
}
server {
    server_name testWLS;
    listen      9000;
    error_log   logs/testWLS_debug.log;
    location /testWLS {
        proxy_pass http://WLS;
        proxy_http_version 1.1;
        proxy_method POST;
        proxy_read_timeout 30s;
        proxy_set_header Connection "";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}</code>
The tcpdump of the Nginx-upstream link shows no response within 30 seconds; Nginx then sends a FIN packet, closing the connection. After another 30 seconds the upstream finally returns its data, but the connection is already closed, so the client never receives the response.
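The timeout mechanics on the Nginx-upstream link can be sketched at the socket level, again with Python as a stand-in rather than Nginx itself, and with the intervals scaled down: a 2-second upstream delay playing the 60-second sleep, and a 0.5-second read timeout playing proxy_read_timeout 30s. The "proxy" side gives up waiting and closes the connection, just as Nginx did in the capture:

```python
import socket
import threading
import time

UPSTREAM_DELAY = 2.0   # stand-in for the upstream's 60-second sleep
READ_TIMEOUT = 0.5     # stand-in for proxy_read_timeout 30s

def slow_upstream(listener):
    """Accept one request, then respond far too late."""
    conn, _ = listener.accept()
    conn.recv(4096)
    time.sleep(UPSTREAM_DELAY)
    try:
        conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 4\r\n\r\nlate")
    except OSError:
        pass               # the proxy side is already gone
    conn.close()

listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
threading.Thread(target=slow_upstream, args=(listener,), daemon=True).start()

proxy_side = socket.create_connection(listener.getsockname())
proxy_side.settimeout(READ_TIMEOUT)   # plays the role of proxy_read_timeout
proxy_side.sendall(b"POST /testWLS HTTP/1.1\r\nHost: testWLS\r\nContent-Length: 0\r\n\r\n")
try:
    proxy_side.recv(4096)
    timed_out = False
except socket.timeout:
    timed_out = True      # Nginx would now send FIN upstream and return 504 to the client
proxy_side.close()        # close the upstream connection, as Nginx does
```

The upstream's late sendall lands on a connection that no longer exists on the proxy side, which is exactly why the client never sees the data.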
Subsequent client-Nginx traffic shows that after the 30-second timeout Nginx returns a 504 Gateway Timeout to the client, but the client-side TCP connection remains open until keepalive_timeout (65 seconds) expires; in the meantime the browser keeps sending TCP Keep-Alive probes.
Scenario 3: Nginx TCP Keepalive
The so_keepalive directive enables TCP keepalive on the listening socket, overriding the system-wide TCP keepalive settings (tcp_keepalive_time, tcp_keepalive_intvl, tcp_keepalive_probes).
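On Linux, so_keepalive=15s:20s: maps to per-socket options. The sketch below uses Python only to illustrate those options (TCP_KEEPIDLE and TCP_KEEPINTVL are Linux-specific constants); the empty third field of the directive leaves the probe count at the OS default:

```python
import socket

# Per-socket TCP keepalive, roughly what `listen 8011 so_keepalive=15s:20s:;`
# sets up on Linux. The empty third field leaves the probe count at the OS default.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)     # enable keepalive probes
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 15)   # idle seconds before first probe
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 20)  # seconds between probes

keep_idle = sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE)
keep_intvl = sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL)
sock.close()
```

Because these are per-socket options, they take effect only for connections accepted on that listener, leaving the system-wide sysctl values untouched.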
Experiment setup: Browser → Nginx (proxy) → upstream Nginx with so_keepalive=15s:20s:. The proxy's keepalive_timeout is set to 60 seconds.
<code>upstream rrups {
    server 1.1.1.106:8011;
    keepalive_timeout 60;
}
server {
    server_name rrups.test;
    listen      8000;
    #error_log  logs/rrups.log debug;
    location / {
        proxy_pass http://rrups;
        proxy_http_version 1.1;
        proxy_method POST;
        proxy_set_header Connection "";
    }
}</code>
<code>server {
    listen 8011 so_keepalive=15s:20s:;
    default_type text/plain;
    return 200 "106 server port 8011 response!";
}</code>
Tcpdump of the proxy-upstream link shows the upstream sending a TCP keepalive probe after 15 seconds of inactivity, then one every 20 seconds. These probes do not reset the proxy's keepalive_timeout of 60 seconds: the proxy closes the connection after exactly 60 seconds, even though the keepalive probes continue.
This confirms that TCP Keep-Alive probes do not reset Nginx's keepalive_timeout.
Reflection
HTTP keepalive_timeout applies to layer‑7 traffic and does not handle layer‑4 TCP probes. In real production environments, intermediate devices such as LVS or firewalls may have their own idle‑connection timeouts, which can cause unexpected disconnects when upstream response times are long.
These devices can process TCP probes, so configuring Nginx's so_keepalive or adjusting the OS TCP keepalive parameters can keep the connection alive on the device side until the application finishes.
Conclusion
Keepalive parameters have different contexts and usage scenarios. Setting them only guarantees that the side configuring them will not close the connection within the specified conditions; upstream components may still close it. When connections drop unexpectedly, check logs, review timeout settings across the request chain, capture tcpdump to identify which side closed the connection, and adjust parameters or architecture accordingly.
Efficient Ops
This public account is maintained by Xiaotianguo and friends, regularly publishing widely-read original technical articles. We focus on operations transformation and accompany you throughout your operations career, growing together happily.