Mastering Nginx: Reverse Proxy, Load Balancing, and High Availability Guide
This comprehensive guide explains Nginx's core concepts—including reverse proxy, load balancing, static‑dynamic separation, common commands, configuration blocks, and high‑availability setup with Keepalived—providing practical examples, diagrams, and code snippets for reliable server deployment.
Nginx Overview
Nginx is a high‑performance HTTP and reverse‑proxy server known for low memory usage and strong concurrency, capable of handling up to 50,000 simultaneous connections.
Reverse Proxy
Forward proxy: clients inside a LAN explicitly configure a proxy server in order to reach the Internet; the proxy acts on behalf of the client.
Reverse proxy: clients send requests to the proxy without any client-side configuration; the proxy forwards them to the target servers and returns the responses, hiding the real server IPs.
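As a minimal sketch (the server name and the backend address 127.0.0.1:8080 are assumptions, not from the article), a reverse proxy needs little more than a proxy_pass directive:

```nginx
server {
    listen 80;
    server_name example.com;  # hypothetical name

    location / {
        # Forward every request to the hidden backend;
        # the client only ever sees this Nginx server.
        proxy_pass http://127.0.0.1:8080;
        # Preserve the original Host header and client IP for the backend.
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```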
Load Balancing
When request volume grows, a single server cannot meet demand; adding more servers and distributing requests achieves load balancing.
Example: 15 requests are evenly split among three servers, each handling five requests.
Static‑Dynamic Separation
Separating static and dynamic content speeds up page rendering and reduces server load.
Before separation, every request, static or dynamic, hits the application server. After separation, Nginx serves static assets (HTML, CSS, JS, images) directly and forwards only dynamic requests to the backend. (The original before/after diagrams are omitted.)
Installation Reference
https://blog.csdn.net/yujing1314/article/details/97267369

Nginx Common Commands
Run these from the Nginx sbin directory:

./nginx -v — show the version.
./nginx — start the server.
./nginx -s stop — stop immediately.
./nginx -s quit — stop gracefully after in-flight requests finish.
./nginx -s reload — reload the configuration without restarting.

Configuration File Structure
The configuration file consists of three main blocks:
Global block: settings that affect the Nginx process as a whole, such as the number of worker processes.
Events block: controls network connection handling, such as connection serialization and the maximum number of connections per worker.
HTTP block: contains directives for reverse proxy, load balancing, virtual hosts (server blocks), and so on.
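The three blocks nest as follows (a stripped-down sketch; the paths and values are illustrative, not from the original article):

```nginx
# ---- Global block ----
worker_processes 1;          # number of worker processes

# ---- Events block ----
events {
    worker_connections 1024; # max connections per worker
}

# ---- HTTP block ----
http {
    server {
        listen      80;
        server_name localhost;

        location / {
            root  html;
            index index.html;
        }
    }
}
```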
Location directive syntax:
location [ = | ~ | ~* | ^~ ] uri { }

= : exact match; stop searching on a hit.
~ : case‑sensitive regex match.
~* : case‑insensitive regex match.
^~ : prefix match; if it is the best prefix match, regex locations are not checked.
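To illustrate the four modifiers side by side (a sketch; the paths are made up for this example):

```nginx
server {
    listen 80;

    # Exact match: only the URI "/" itself.
    location = / {
        return 200 "exact\n";
    }

    # Prefix match that suppresses regex checking for /images/...
    location ^~ /images/ {
        root /data;
    }

    # Case-insensitive regex: any URI ending in .gif/.jpg/.png.
    location ~* \.(gif|jpg|png)$ {
        root /data/media;
    }

    # Plain prefix match: fallback for everything else.
    location / {
        proxy_pass http://127.0.0.1:8080;
    }
}
```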
Reverse Proxy Practical Example
1. Configure Nginx so that requests for www.123.com are forwarded to a local Tomcat on port 8080.
2. Example configuration (the original screenshots are omitted for brevity).
3. Additional scenario: route http://192.168.25.132:9001/edu/ to 192.168.25.132:8080 and /vod/ to 192.168.25.132:8081 using regex location blocks.
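A sketch of the server blocks for both scenarios (reconstructed from the steps above; the original article showed them only as screenshots):

```nginx
# Scenario 1: forward www.123.com to the local Tomcat on 8080.
server {
    listen 80;
    server_name www.123.com;

    location / {
        proxy_pass http://127.0.0.1:8080;
    }
}

# Scenario 3: route /edu/ and /vod/ to different Tomcat instances.
server {
    listen 9001;
    server_name 192.168.25.132;

    location ~ /edu/ {
        proxy_pass http://192.168.25.132:8080;
    }

    location ~ /vod/ {
        proxy_pass http://192.168.25.132:8081;
    }
}
```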
Load Balancing Practical Example
Modify nginx.conf to define an upstream group and enable round‑robin distribution, then reload Nginx with ./nginx -s reload.

Load‑balancing methods:
Round‑robin (default).
Weight – higher weight gets more requests.
Fair (third‑party module) – requests go to the backend with the shortest response time.
ip_hash – same client IP always reaches the same backend (useful for session persistence).
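A sketch of an upstream group (the addresses follow the article's 192.168.25.132 example; the group name is made up, and only one balancing directive should be active at a time):

```nginx
http {
    upstream myserver {
        # Round-robin is the default when no directive is given.
        # With weights, the 8081 backend receives twice as many requests.
        server 192.168.25.132:8080 weight=1;
        server 192.168.25.132:8081 weight=2;

        # ip_hash;   # uncomment (and drop weights) for session persistence
        # fair;      # requires the third-party fair module
    }

    server {
        listen 80;

        location / {
            proxy_pass http://myserver;
        }
    }
}
```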
Static‑Dynamic Separation Practical Setup
Configure Nginx to serve static files directly while proxying dynamic requests to Tomcat.
Key steps: place static assets in a separate directory, add a block such as location /static/ { root /path/to/static; } (note that with root, a request for /static/a.png is read from /path/to/static/static/a.png), and proxy all other requests to the application server.
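Putting the steps together (a sketch; the /data directory and the backend port are assumptions):

```nginx
server {
    listen 80;

    # Static content: served straight from disk.
    # With "root /data", /static/a.png is read from /data/static/a.png.
    location /static/ {
        root /data;
        expires 3d;          # let browsers cache static assets for 3 days
    }

    # Dynamic content: everything else goes to Tomcat.
    location / {
        proxy_pass http://127.0.0.1:8080;
    }
}
```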
High Availability with Keepalived
Install Keepalived on two Nginx nodes and configure a virtual IP for failover.
[root@192 usr]# yum install keepalived -y
[root@192 usr]# rpm -q -a keepalived
keepalived-1.3.5-16.el7.x86_64

Sample keepalived.conf (excerpt):
global_defs {
notification_email { [email protected] [email protected] [email protected] }
notification_email_from [email protected]
smtp_server 192.168.25.147
smtp_connect_timeout 30
router_id LVS_DEVEL
}
vrrp_script chk_nginx {
script "/usr/local/src/nginx_check.sh"
interval 2
weight 2
}
vrrp_instance VI_1 {
state BACKUP
interface ens33
virtual_router_id 51
priority 90
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress { 192.168.25.50 }
}

Start the service:
systemctl start keepalived.service

When the primary node fails, the backup takes over the virtual IP, ensuring continuous service.
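The vrrp_script above references /usr/local/src/nginx_check.sh, which the article does not show. A typical version (a sketch, assuming Nginx is installed under /usr/local/nginx) tries to restart Nginx once and, if that fails, kills Keepalived so the virtual IP fails over to the backup node:

```shell
#!/bin/bash
# Count running nginx processes.
A=$(ps -C nginx --no-header | wc -l)
if [ "$A" -eq 0 ]; then
    # Nginx is down: try to restart it once.
    /usr/local/nginx/sbin/nginx
    sleep 2
    if [ "$(ps -C nginx --no-header | wc -l)" -eq 0 ]; then
        # Restart failed: stop Keepalived so the backup node
        # takes over the virtual IP.
        killall keepalived
    fi
fi
```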
Conclusion
Typical Nginx deployment uses one master process and multiple worker processes; the number of workers should match CPU cores. Workers operate independently, so a failure in one does not affect the others, enabling hot deployment and high reliability.
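The worker count described above is set in the global block; recent Nginx versions also accept auto to match the CPU core count (a sketch):

```nginx
# Global block: one worker per CPU core.
worker_processes auto;   # or an explicit number, e.g. 4
```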