Mastering Nginx Reverse Proxy, Load Balancing, and Caching
This article explains how to configure Nginx as a reverse proxy, implement load‑balancing strategies, separate static and dynamic content, set up proxy caching with various directives, purge caches, and enable gzip compression, providing complete code examples and practical testing results.
Reverse Proxy
A forward proxy sits between the client and the target server: the client asks the proxy for a resource, and the proxy forwards the request to the target server and relays the response back. A reverse proxy works the other way around: it accepts connections from the Internet, forwards them to internal servers, and returns the internal server's response to the client, so the whole deployment appears to the outside world as a single public server.
Functions of Reverse Proxy
Protects internal networks; large sites expose the reverse proxy to the public while web servers remain behind the firewall.
Enables load balancing by distributing requests across multiple backend servers.
Reverse Proxy Example
Environment
<code>192.168.0.168 proxy server (nginx)
192.168.0.52 backend server (httpd)</code>
Modify the nginx proxy configuration:
<code>location /wanger {
proxy_pass http://192.168.0.52;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}</code>
Add an index page on the backend:
<code>mkdir /var/www/html/wanger
echo 192.168.0.52 > /var/www/html/wanger/index.html</code>
Test locally on the proxy server:
When a client accesses http://192.168.0.168/wanger/, the request is forwarded to http://192.168.0.52/wanger/index.html.
<code># curl 127.0.0.1/wanger/
192.168.0.52</code>
The reverse-proxy test succeeds.
proxy_set_header Directive
This directive sets the headers that nginx sends to the backend. In the example, the Host header is set to $host, while X-Real-IP and X-Forwarded-For are filled from $remote_addr and $proxy_add_x_forwarded_for respectively.
proxy_add_x_forwarded_for Variable
This variable appends the connecting client's address ($remote_addr) to the incoming X-Forwarded-For header, separated by a comma; when the request carries no X-Forwarded-For header, it is simply equal to $remote_addr. It therefore only differs from $remote_addr when the request has already passed through at least one other proxy.
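The accumulation behaviour can be sketched in a few lines of Python (a hypothetical helper for illustration, not nginx code):

```python
def proxy_add_x_forwarded_for(incoming_xff, remote_addr):
    """Mimic nginx's $proxy_add_x_forwarded_for: append the directly
    connected client's address to any X-Forwarded-For value already
    present, or start the list with it."""
    if incoming_xff:
        return incoming_xff + ", " + remote_addr
    return remote_addr

# First proxy in the chain: no header yet, so the list starts fresh.
print(proxy_add_x_forwarded_for("", "203.0.113.7"))  # 203.0.113.7
# Second proxy: the previous hop's value is extended after a comma.
print(proxy_add_x_forwarded_for("203.0.113.7", "10.0.0.1"))  # 203.0.113.7, 10.0.0.1
```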
Load Balancing
Load balancing distributes incoming requests across multiple backend nodes, improving system responsiveness and processing capacity.
Environment
<code>192.168.0.168 load‑balancer
192.168.0.52 upstream node 1
192.168.0.84 upstream node 2</code>
Scheduling Strategies
Weighted Round-Robin (default)
<code>upstream read {
server 192.168.0.52 weight=2 max_fails=3 fail_timeout=20s;
server 192.168.0.84:8080 weight=1 max_fails=3 fail_timeout=20s;
server 192.168.0.96 down;
server 192.168.0.168 backup;
}</code>
max_fails – number of failed attempts after which the server is marked unavailable.
fail_timeout – the window in which max_fails failures must occur, and also how long the server then stays marked unavailable before being retried.
down – permanently marks a server as unavailable.
backup – the server receives requests only when all non-backup servers are unavailable.
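As a rough illustration of how the weights play out, here is a minimal Python sketch of the smooth weighted round-robin idea (health checking, max_fails/fail_timeout, and backup handling are all omitted; this is not nginx's actual implementation):

```python
def smooth_wrr(servers, n):
    """servers: mapping of server name -> weight.
    Returns n picks, interleaving servers in proportion to weight."""
    current = {name: 0 for name in servers}
    total = sum(servers.values())
    picks = []
    for _ in range(n):
        # Every server earns its weight each round...
        for name, weight in servers.items():
            current[name] += weight
        # ...then the richest server is picked and pays back the total.
        best = max(current, key=current.get)
        current[best] -= total
        picks.append(best)
    return picks

# weight=2 vs weight=1, as in the upstream block above:
# over 6 requests the first server receives 4, the second 2.
print(smooth_wrr({"192.168.0.52": 2, "192.168.0.84": 1}, 6))
```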
ip_hash
Distributes requests based on the hash of the client IP, ensuring the same client consistently reaches the same backend, which helps with session persistence.
<code>upstream read {
ip_hash;
server 192.168.0.52;
server 192.168.0.84:8080;
}</code>
least_conn
Routes the request to the backend with the fewest active connections.
<code>upstream read {
least_conn;
server 192.168.0.52;
server 192.168.0.84:8080;
}</code>
fair
Chooses the backend with the shortest response time. Note that fair is provided by the third-party nginx-upstream-fair module and is not part of stock nginx.
<code>upstream read {
fair;
server 192.168.0.52;
server 192.168.0.84:8080;
}</code>
url_hash
Hashes the request URI so that each URL is consistently routed to the same backend, which improves backend cache hit rates. In stock nginx this is expressed with the hash directive (available since 1.7.2):
<code>upstream read {
hash $request_uri;
server 192.168.0.52;
server 192.168.0.84:8080;
}</code>
Static-Dynamic Separation
Why Separate
Nginx excels at serving static files but handles dynamic content less efficiently. Separating static and dynamic traffic allows caching of static resources and improves overall response speed.
Configuration
<code>upstream static {
server 192.168.0.52 weight=2 max_fails=3 fail_timeout=20s;
}
upstream dynamic {
server 192.168.0.168:9200 weight=2 max_fails=3 fail_timeout=20s;
}
server {
listen 80;
server_name localhost;
location ~* \.php$ {
fastcgi_pass dynamic;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME /usr/share/nginx/html$fastcgi_script_name;
include fastcgi_params;
}
location ~* \.(jpg|gif|png|css|html|htm|js)$ {
proxy_pass http://static;
expires 12h;
}
}</code>
Nginx Proxy Cache
The ngx_http_proxy_module provides built-in caching. Key directives include proxy_cache_path, proxy_cache, proxy_cache_key, proxy_cache_valid, proxy_no_cache, and proxy_cache_bypass, along with the $upstream_cache_status variable.
proxy_cache_path
Defines the cache storage location, levels, key zone, size limits, and inactivity timeout.
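As a concrete reading, the parameters of the directive used later in this article break down as follows (the comments are explanatory only):

```nginx
# /data/cache         – on-disk directory that holds the cached responses
# levels=1:2          – two-level subdirectory layout instead of one flat directory
# keys_zone=cache:10m – 10 MB shared-memory zone named "cache" for keys and metadata
# max_size=100m       – evict least-recently-used files once disk use exceeds 100 MB
# inactive=1m         – drop entries that have not been requested for one minute
# use_temp_path=off   – write cache files directly into /data/cache
proxy_cache_path /data/cache levels=1:2 keys_zone=cache:10m
                 max_size=100m inactive=1m use_temp_path=off;
```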
proxy_cache
Enables caching for a specific zone.
proxy_cache_key
Specifies the cache key; the default is $scheme$proxy_host$request_uri.
proxy_cache_valid
Sets cache lifetimes for different response codes, e.g.:
<code>proxy_cache_valid 200 302 10m;
proxy_cache_valid 404 1m;
proxy_cache_valid any 1m;</code>
$upstream_cache_status
Shows cache status: MISS, HIT, EXPIRED, STALE, UPDATING, REVALIDATED, or BYPASS.
Cache Configuration Example
<code>proxy_cache_path /data/cache levels=1:2 keys_zone=cache:10m max_size=100m inactive=1m use_temp_path=off;
server {
listen 80;
server_name wanger.com;
location /wanger {
proxy_pass http://192.168.0.52;
proxy_cache cache;
proxy_cache_valid 200 301 1m;
add_header X-Cache $upstream_cache_status;
proxy_cache_key $host$uri;
}
}</code>
The first request misses the cache (X-Cache: MISS); subsequent requests within the validity period hit it (X-Cache: HIT).
Cache Purge
The third-party ngx_cache_purge module allows manual cache removal. Example location:
<code>location ~ /purge(/.*) {
proxy_cache_purge cache $host$1;
}
location /wanger {
proxy_pass http://192.168.0.52;
proxy_cache cache;
proxy_cache_valid 200 301 1m;
add_header X-Cache $upstream_cache_status;
proxy_cache_key $host$uri;
}</code>
A request whose URI begins with /purge/ removes the matching cache entry immediately; otherwise an entry simply expires on its own after its 1m validity period.
gzip Compression
Enabling gzip reduces the amount of data transferred between server and browser, improving client response time at the cost of additional CPU usage.
Key Directives
gzip on|off – enable or disable compression.
gzip_buffers – number and size of buffers.
gzip_comp_level – compression level (1‑9).
gzip_disable – regex to disable compression for certain browsers.
gzip_min_length – minimum response size to compress.
gzip_types – MIME types to compress.
gzip_vary – add the Vary: Accept-Encoding response header.
Configuration Example
<code>server {
listen 80;
server_name 192.168.0.168;
gzip on;
gzip_types image/jpeg;
gzip_buffers 32 4K;
gzip_min_length 100;
gzip_comp_level 6;
gzip_vary on;
}</code>
In testing, the image was delivered at 75.9 KB with Content-Encoding: gzip, versus 76.4 KB with gzip disabled. The gain is marginal because JPEG data is already compressed; gzip pays off mainly on text-based MIME types such as HTML, CSS, and JavaScript.
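The marginal result above is easy to reproduce: gzip shrinks repetitive text dramatically but cannot shrink already-compressed, high-entropy data, which is what JPEG bytes look like. A quick Python check (illustrative only, not the article's original test):

```python
import gzip
import os

text = b"nginx reverse proxy " * 3_000   # repetitive, text-like payload
noise = os.urandom(60_000)               # stands in for already-compressed JPEG bytes

# Repetitive text compresses to a small fraction of its size...
print(len(text), "->", len(gzip.compress(text)))
# ...while high-entropy data barely changes at all (gzip framing adds a little).
print(len(noise), "->", len(gzip.compress(noise)))
```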
Feel free to discuss and suggest improvements.
Ops Development Stories
Maintained by a like‑minded team, covering both operations and development. Topics span Linux ops, DevOps toolchain, Kubernetes containerization, monitoring, log collection, network security, and Python or Go development. Team members: Qiao Ke, wanger, Dong Ge, Su Xin, Hua Zai, Zheng Ge, Teacher Xia.