Master Nginx: Essential Configurations, Reverse Proxy, Load Balancing & More
Learn how to install, configure, and optimize Nginx for high‑performance web serving, covering core settings, reverse proxy, load balancing, caching, SSL, gzip compression, and advanced modules, with practical examples and step‑by‑step commands for real‑world deployment.
Preface
As a front‑end developer you may often be asked to modify the Nginx configuration on a server. This guide helps you move from "I'm a front‑end developer, I don't know Nginx" to configuring it with the confidence of a seasoned programmer.
Nginx Overview
Nginx is an open‑source, high‑performance, high‑reliability web and reverse‑proxy server. It supports hot deployment, can run 24/7 for months without restart, and is free for commercial use.
Key Features
High concurrency, high performance
Modular architecture, easy to extend
Asynchronous, event‑driven model (similar to Node.js)
Long‑running without restart, high reliability
Hot deployment, smooth upgrades
Fully open source, thriving ecosystem
Typical Use Cases
Static file serving
Reverse proxy (including caching and load balancing)
API services (e.g., OpenResty)
For front‑end developers, Nginx and Node.js share many concepts (HTTP server, event‑driven, async). Nginx excels at low‑level resource handling, while Node.js focuses on business logic.
Installation (CentOS 7.x)
yum install nginx -y

After installation, view the installed files with:
# Nginx configuration files
/etc/nginx/nginx.conf
/etc/nginx/nginx.conf.default
/etc/nginx/conf.d/*.conf
/usr/sbin/nginx
/usr/bin/nginx-upgrade
/usr/lib/systemd/system/nginx.service
/usr/lib64/nginx/modules
/usr/share/doc/nginx-1.16.1/*
/usr/share/nginx/html/*
/var/log/nginx/*

Two important directories:
/etc/nginx/conf.d/ – sub‑configuration files
/usr/share/nginx/html/ – static files
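To illustrate how sub‑configuration files work, here is a minimal sketch (the filename and port are assumptions, not part of the default install). Any file dropped into /etc/nginx/conf.d/ is picked up automatically by the include directive in nginx.conf:

```nginx
# /etc/nginx/conf.d/static.conf — hypothetical sub-configuration file
server {
    listen 8000;                   # assumed port for this example
    root   /usr/share/nginx/html;  # static directory installed by the package
    index  index.html;
}
```

After adding the file, run nginx -t and systemctl reload nginx to apply it.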
Common Commands
# Enable at boot
systemctl enable nginx
# Disable at boot
systemctl disable nginx
# Start Nginx
systemctl start nginx
# Stop Nginx
systemctl stop nginx
# Restart Nginx
systemctl restart nginx
# Reload configuration (no downtime)
systemctl reload nginx
# Check status
systemctl status nginx
# View processes
ps -ef | grep nginx
# Force kill
kill -9 <pid>
# Nginx command‑line options
nginx -s reload # reload config
nginx -s reopen # reopen logs
nginx -s stop # fast stop
nginx -s quit # graceful quit
nginx -T # show final config
nginx -t # test config syntax

Core Configuration
Configuration File Structure
# main block
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
use epoll;
worker_connections 1024;
}
http {
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
include /etc/nginx/mime.types;
default_type application/octet-stream;
include /etc/nginx/conf.d/*.conf;
server {
listen 80;
server_name localhost;
location / {
root /usr/share/nginx/html;
index index.html index.htm;
deny 172.168.22.11;
allow 172.168.33.44;
}
error_page 500 502 503 504 /50x.html;
error_page 400 404 /error.html;
}
}

Main Parameters
user – run worker processes as this user
pid – location of the master process PID file
worker_rlimit_nofile – max open file descriptors per worker
worker_processes – number of worker processes (auto = one per CPU core)
worker_cpu_affinity – bind workers to specific CPU cores
worker_priority – nice value for worker processes
daemon – run in the background (off = foreground)
Events Parameters
use – event model (epoll, kqueue, etc.)
worker_connections – max simultaneous connections per worker
accept_mutex – enable/disable the accept mutex used to balance new connections across workers
Server Name Directive
server_name accepts an exact name, a leading wildcard (*.example.com), a trailing wildcard (www.example.*), or a regex (prefixed with ~). Matching priority: exact > leading wildcard > trailing wildcard > regex.
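A sketch of how the priority plays out (all names here are hypothetical): a request with Host: www.example.com is answered by the first block, even though every server_name below also matches it.

```nginx
server { listen 80; server_name www.example.com;         return 200 "exact\n"; }
server { listen 80; server_name *.example.com;           return 200 "leading wildcard\n"; }
server { listen 80; server_name www.example.*;           return 200 "trailing wildcard\n"; }
server { listen 80; server_name ~^www\d+\.example\.com$; return 200 "regex\n"; }
```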
Root & Alias
root appends the full request URI to the configured path;
alias replaces the matched location prefix with the given path (the value should end with a slash).
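The difference is easiest to see side by side. The paths below are hypothetical, and the two location blocks are shown together only for comparison (they share a prefix, so they would live in different server blocks):

```nginx
# root: the full URI is appended to the path
location /static/ {
    root /data/www;             # GET /static/a.png -> /data/www/static/a.png
}

# alias: the matched prefix is replaced by the path
location /static/ {
    alias /data/www/assets/;    # GET /static/a.png -> /data/www/assets/a.png
}
```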
Location Matching
Syntax: location [=|~|~*|^~] uri { ... }. Matching priority: = (exact) > ^~ (preferential prefix) > ~ (case‑sensitive regex) > ~* (case‑insensitive regex) > plain prefix with no modifier.
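A sketch showing that priority in action (the response bodies are illustrative): a request for /images/a.jpg is answered by the ^~ block, because a preferential prefix match suppresses the regex check even though the ~* pattern also matches.

```nginx
location = / { return 200 "exact match\n"; }              # only the bare "/"
location ^~ /images/ { return 200 "prefix, no regex\n"; } # wins for /images/a.jpg
location ~* \.(jpg|png)$ { return 200 "regex\n"; }        # matches other image URIs
location / { return 200 "default\n"; }                    # catch-all prefix
```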
Return Directive
# Return status only
return 404;
# Return status with text
return 404 "page not found";
# Redirect
return 302 /newpath;
# External redirect
return https://www.example.com; # a bare URL defaults to a 302 redirect

Rewrite Directive
rewrite ^/images/(.*\.jpg)$ /pic/$1 last;
# Flags: last, break, redirect (302), permanent (301)

If Directive
if ($http_user_agent ~ Chrome) {
rewrite ^/(.*)$ /browser/$1 break;
}

Autoindex
When a URI ends with /, Nginx can list the directory contents, enabled with autoindex on; (off by default).
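A minimal sketch of a directory-listing location (the path is an assumption):

```nginx
location /downloads/ {
    root /usr/share/nginx/html;
    autoindex on;               # generate an HTML directory listing
    autoindex_exact_size off;   # show human-readable sizes instead of bytes
    autoindex_localtime on;     # show local time instead of GMT
}
```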
Variables
Common variables include $remote_addr, $server_name, $request_uri, $http_user_agent, $request_time, etc. An example location that echoes them:
location /test {
return 200 "remote_addr: $remote_addr ...";
}

Nginx Application Core Concepts
Forward Proxy
A forward proxy sits between the client and the origin server; the client tells the proxy which server to fetch, and the proxy forwards the request.
Reverse Proxy
A reverse proxy receives client requests, forwards them to backend servers, and returns the responses. It is transparent to the client but visible to the backend.
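A minimal reverse‑proxy sketch; the upstream address is an assumption (e.g., a local Node.js app):

```nginx
server {
    listen 80;
    location / {
        proxy_pass http://127.0.0.1:3000;         # forward to the backend
        proxy_set_header Host $host;               # preserve the original Host header
        proxy_set_header X-Real-IP $remote_addr;   # pass the real client IP along
    }
}
```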
Static/Dynamic Separation
Serve static assets directly with Nginx, while dynamic requests are proxied to application servers. This improves performance and reliability.
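One way to sketch this (the paths and backend address are assumptions): static assets are matched by extension and served from disk, while everything under /api/ is proxied:

```nginx
server {
    listen 80;
    # static assets served directly by Nginx, with browser caching
    location ~* \.(css|js|png|jpg|gif|ico)$ {
        root    /usr/share/nginx/html;
        expires 7d;
    }
    # dynamic requests handed to the application server
    location /api/ {
        proxy_pass http://127.0.0.1:8080;
    }
}
```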
Load Balancing
Distribute client requests across multiple backend servers to avoid overloading a single machine. Nginx supports round‑robin (default), least connections, IP hash, and more.
Round‑Robin (default)
upstream backend {
server 192.168.100.33:8081;
server 192.168.100.34:8081;
}

Hash
upstream backend {
hash $request_uri;
server 192.168.100.33:8081;
server 192.168.100.34:8081;
}

IP Hash
upstream backend {
ip_hash;
server 192.168.100.33:8081;
server 192.168.100.34:8081;
}

Least Connections
upstream backend {
least_conn;
server 192.168.100.33:8081;
server 192.168.100.34:8081;
}

Practical Configurations
Reverse Proxy Example
Assume two cloud servers: 121.42.11.34 (backend) and 121.5.180.193 (proxy).
Backend (121.42.11.34)
server {
listen 8080;
location /proxy/ {
root /usr/share/nginx/html/proxy;
index index.html;
}
}

Proxy (121.5.180.193)
upstream back_end {
server 121.42.11.34:8080;
}
server {
listen 80;
server_name proxy.lion.club;
location /proxy {
proxy_pass http://back_end/proxy;
}
}

Add 121.5.180.193 proxy.lion.club to the local /etc/hosts and access http://proxy.lion.club/proxy.
Load Balancing Example
# Backend servers on 121.42.11.34
server { listen 8020; location / { return 200 "return 8020\n"; } }
server { listen 8030; location / { return 200 "return 8030\n"; } }
server { listen 8040; location / { return 200 "return 8040\n"; } }
# Proxy server (121.5.180.193)
upstream demo_server {
server 121.42.11.34:8020;
server 121.42.11.34:8030;
server 121.42.11.34:8040;
}
server {
listen 80;
server_name balance.lion.club;
location /balance/ { proxy_pass http://demo_server; }
}

Requests to http://balance.lion.club/balance/ are distributed round‑robin among the three backends.
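The default round‑robin can be tuned per server; a sketch using the same three backends with standard upstream parameters:

```nginx
upstream demo_server {
    server 121.42.11.34:8020 weight=3;                     # receives roughly 3x the requests
    server 121.42.11.34:8030 max_fails=2 fail_timeout=30s; # marked down after 2 failed attempts
    server 121.42.11.34:8040 backup;                       # used only when the others are down
}
```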
Caching
# Define cache zone
proxy_cache_path /etc/nginx/cache_temp levels=2:2 keys_zone=cache_zone:30m max_size=2g inactive=60m use_temp_path=off;
upstream cache_server {
server 121.42.11.34:1010;
server 121.42.11.34:1020;
}
server {
listen 80;
server_name cache.lion.club;
location / {
proxy_cache cache_zone;
proxy_cache_valid 200 5m;
proxy_cache_key $request_uri;
add_header Nginx-Cache-Status $upstream_cache_status;
proxy_pass http://cache_server;
}
}

Cache files are stored under /etc/nginx/cache_temp. The response header Nginx-Cache-Status shows MISS, HIT, EXPIRED, etc.
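Assuming the hosts entry from the earlier examples, the cache behavior can be checked from the command line; the first request should report MISS and a repeat within the 5‑minute validity window should report HIT:

```shell
# inspect only the cache-status header added by the proxy
curl -sI http://cache.lion.club/ | grep Nginx-Cache-Status
curl -sI http://cache.lion.club/ | grep Nginx-Cache-Status
```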
HTTPS
server {
listen 443 ssl http2 default_server;
server_name lion.club;
ssl_certificate /etc/nginx/https/lion.club_bundle.crt;
ssl_certificate_key /etc/nginx/https/lion.club.key;
ssl_session_timeout 10m;
ssl_protocols TLSv1.2 TLSv1.3; # TLSv1 and TLSv1.1 are deprecated and should be avoided
location / {
root /usr/share/nginx/html;
index index.html index.htm;
}
}

CORS (Cross‑Origin Resource Sharing)
server {
listen 80;
server_name cors.lion.club;
location /api/ {
add_header Access-Control-Allow-Origin "*";
add_header Access-Control-Allow-Methods "GET,POST,OPTIONS";
add_header Access-Control-Allow-Headers "Authorization,Content-Type";
if ($request_method = OPTIONS) { return 204; }
proxy_pass http://backend/api/;
}
}

Gzip Compression
# Enable gzip
gzip on;
# Compress these MIME types
gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
# Use pre‑compressed .gz files if they exist
gzip_static on;
# Enable gzip for proxied responses
gzip_proxied any;
# Add Vary header
gzip_vary on;
# Compression level (1‑9)
gzip_comp_level 6;
# Buffer size
gzip_buffers 16 8k;
# Minimum length to compress
gzip_min_length 1024;
# Only for HTTP/1.1 and above
gzip_http_version 1.1;

Architecture
Process Model
Nginx runs a master process that spawns multiple worker processes. Workers handle client connections; the master monitors workers, reloads configuration, and restarts failed workers.
Configuration Reload
Send a HUP signal to the master process (e.g., nginx -s reload).
Master checks syntax.
Master opens new listening sockets.
Master starts new workers with the new configuration.
Master tells old workers to quit gracefully.
Old workers finish current requests and exit.
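In practice the reload is usually chained after a syntax check, so a broken configuration never reaches the master process:

```shell
# reload only if the configuration parses cleanly
nginx -t && nginx -s reload
```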
Modular Design
Nginx core plus a set of independent modules (http, stream, mail, etc.). Modules are loosely coupled, making it easy to add functionality without affecting the core.
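Which modules were compiled in can be inspected from the command line; nginx -V prints its configure arguments to stderr:

```shell
nginx -V 2>&1 | tr ' ' '\n' | grep -- '--with'
```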
Conclusion
This guide covered Nginx installation, core directives, reverse proxy, load balancing, caching, HTTPS, CORS, gzip, and its internal architecture. With these fundamentals you can confidently configure Nginx for production workloads and extend it as needed.