Backend Development · 13 min read

Master Nginx: Reverse Proxy, Load Balancing, and High‑Availability Made Simple

This guide walks you through Nginx's high‑performance architecture, explains forward and reverse proxy concepts, demonstrates load‑balancing and static‑dynamic separation techniques, shows step‑by‑step installation and common commands, and details a high‑availability setup with Keepalived, all illustrated with practical examples and diagrams.


What Is Nginx?

Nginx is a high‑performance HTTP and reverse‑proxy server known for low memory usage and strong concurrency; benchmark reports show a single instance handling up to about 50,000 concurrent connections.

(Figure: Nginx knowledge map)

01 Reverse Proxy

Forward Proxy

In a LAN, clients cannot reach the Internet directly; they send requests to a forward proxy, which accesses the Internet on their behalf. The proxy is configured on, and therefore known to, the client.

Reverse Proxy

The client is unaware of the proxy; it sends requests to the reverse‑proxy server, which selects a target server, fetches the data, and returns it to the client.

Thus the external address is the proxy’s IP, hiding the real server IP.
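As a minimal sketch (hostname and backend IP are hypothetical), a reverse‑proxy server block looks like this:

```nginx
server {
    listen      80;
    server_name example.com;   # hypothetical public hostname

    location / {
        # Forward every request to the hidden backend; clients
        # only ever see this proxy's address.
        proxy_pass http://192.168.0.10:8080;
        proxy_set_header Host      $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```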

02 Load Balancing

When request volume and data grow, a single server cannot meet demand. Adding more servers and distributing requests across them is called load balancing.

Typical request flow: client → Nginx → one of several application servers → (optional) database → response.

Example: 15 requests are evenly split among three servers, each handling 5 requests.

03 Static‑Dynamic Separation

To speed up page parsing, static pages are served by one server and dynamic pages by another, reducing load on a single machine.

After separation, static files are handled independently from dynamic content.

04 Nginx Installation on Linux

Download and extract Nginx, then compile and install. (Specific steps omitted for brevity.)

Common Nginx Commands

./nginx -v          # show the Nginx version
./nginx             # start Nginx
./nginx -s stop     # stop immediately (fast shutdown)
./nginx -s quit     # stop gracefully after in-flight requests finish
./nginx -s reload   # reload the configuration without dropping connections

Configuration File Structure

The main configuration consists of three blocks:

① Global Block

Settings that affect the whole Nginx server (e.g., worker_processes, error_log, pid).

② Events Block

Controls network connection handling, such as connection serialization and maximum connections.

③ HTTP Block

Contains directives for reverse proxy, load balancing, etc.
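A stripped‑down nginx.conf showing the three blocks (values shown are the common defaults):

```nginx
# Global block: settings for the whole server
worker_processes  1;

# Events block: connection handling
events {
    worker_connections  1024;
}

# HTTP block: reverse proxy, load balancing, virtual hosts, ...
http {
    include       mime.types;
    default_type  application/octet-stream;

    server {
        listen       80;
        server_name  localhost;
    }
}
```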

location [ = | ~ | ~* | ^~ ] uri { ... }

= : exact match, stop searching.

~ : regex match, case‑sensitive.

~* : regex match, case‑insensitive.

^~ : highest‑priority prefix match, stop further regex checks.
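The four modifiers in context (paths and bodies are illustrative only):

```nginx
location = / {
    # exact match: only "/" itself
    root /data/www;
}
location ^~ /static/ {
    # top-priority prefix match: the regex locations below are skipped
    root /data;
}
location ~ \.php$ {
    # case-sensitive regex match
    proxy_pass http://127.0.0.1:8080;
}
location ~* \.(gif|jpg|png)$ {
    # case-insensitive regex match
    expires 30d;
}
```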

05 Reverse Proxy Practical Example

Goal: accessing www.123.com in a browser should forward the request to a Tomcat server running on the Linux host.

Configure two Tomcat instances (ports 8080 and 8081). Nginx listens on port 80 and proxies to the appropriate Tomcat based on the URL path.

After configuration, www.123.com resolves to the server's IP via the hosts file, and Nginx proxies the request to localhost:8080.
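The corresponding server block might look like this (the hosts file on the client maps www.123.com to the Linux server's IP):

```nginx
server {
    listen       80;
    server_name  www.123.com;   # resolved via the client's hosts file

    location / {
        proxy_pass http://127.0.0.1:8080;   # the Tomcat on port 8080
    }
}
```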

Another example uses regex location blocks: requests to http://192.168.25.132:9001/edu/ are routed to 192.168.25.132:8080, while requests whose path contains /vod/ go to 192.168.25.132:8081.

Target 1: http://192.168.25.132:9001/edu/ → 192.168.25.132:8080

Target 2: http://192.168.25.132:9001/vod/ → 192.168.25.132:8081
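A sketch of the server block that achieves both targets:

```nginx
server {
    listen       9001;
    server_name  192.168.25.132;

    location ~ /edu/ {
        # paths containing /edu/ go to the first Tomcat
        proxy_pass http://192.168.25.132:8080;
    }
    location ~ /vod/ {
        # paths containing /vod/ go to the second Tomcat
        proxy_pass http://192.168.25.132:8081;
    }
}
```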

06 Load Balancing Practical Example

Modify nginx.conf to define an upstream server group and point the proxy_pass directive at it, then reload the configuration:

./nginx -s reload

Load‑balancing methods include round‑robin (default), weight, fair, and ip_hash.

Round‑robin: distribute requests evenly.

Weight: higher weight gets more requests.

Fair: requests go to the backend with the shortest response time (requires a third‑party module).

ip_hash: same client IP always goes to the same backend (helps with session persistence).
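Putting this together, an upstream group for the two Tomcats above, with the strategies as comments (uncomment at most one; the group name myserver is arbitrary):

```nginx
http {
    upstream myserver {
        # ip_hash;   # pin each client IP to one backend
        # fair;      # shortest response time (third-party module)
        server 192.168.25.132:8080 weight=1;
        server 192.168.25.132:8081 weight=2;   # receives ~2x the traffic
    }

    server {
        listen 80;
        location / {
            proxy_pass http://myserver;
        }
    }
}
```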

07 Static‑Dynamic Separation Practical Example

Configure Nginx to serve static files directly while forwarding dynamic requests to Tomcat.

After setup, static assets are delivered by Nginx, reducing load on the application server.
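A minimal sketch of such a configuration (the /data root and /static/ prefix are assumptions):

```nginx
server {
    listen 80;

    # Static: served straight from disk by Nginx
    location /static/ {
        root    /data;   # files live under /data/static/
        expires 3d;      # let browsers cache static assets
    }

    # Dynamic: forwarded to Tomcat
    location / {
        proxy_pass http://127.0.0.1:8080;
    }
}
```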

08 High‑Availability with Keepalived

Deploy two Nginx nodes and install Keepalived to provide a virtual IP. The master node runs Keepalived in MASTER state; the backup runs in BACKUP state. If the master fails, the virtual IP moves to the backup.

# yum install keepalived -y
# systemctl start keepalived.service

Sample keepalived.conf (virtual IP 192.168.25.50), shown for the backup node; on the master, set state MASTER and a higher priority:

global_defs {
    notification_email {
        [email protected]
        [email protected]
        [email protected]
    }
    notification_email_from [email protected]
    smtp_server 192.168.25.147
    router_id LVS_DEVEL
}

vrrp_script chk_nginx {
    script "/usr/local/src/nginx_check.sh"
    interval 2              # run the health check every 2 seconds
    weight 2
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 51    # must match on master and backup
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.25.50       # the virtual IP clients connect to
    }
}
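The vrrp_script block above points at /usr/local/src/nginx_check.sh. A minimal sketch of that check script (written to /tmp here for illustration; the Nginx binary path is an assumption):

```shell
# Write the health-check script that Keepalived runs every 2 seconds.
# Assumes Nginx was compiled into /usr/local/nginx (adjust as needed).
cat > /tmp/nginx_check.sh <<'EOF'
#!/bin/bash
# Count running nginx processes.
A=$(ps -C nginx --no-header | wc -l)
if [ "$A" -eq 0 ]; then
    # Nginx is down: try to restart it once.
    /usr/local/nginx/sbin/nginx
    sleep 2
    if [ "$(ps -C nginx --no-header | wc -l)" -eq 0 ]; then
        # Restart failed: kill keepalived so the VIP fails over to the backup.
        killall keepalived
    fi
fi
EOF
chmod +x /tmp/nginx_check.sh
bash -n /tmp/nginx_check.sh && echo "syntax OK"
```

In production the script lives at the exact path named in vrrp_script and runs as root.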

After starting Keepalived, the virtual IP can be accessed even if the primary Nginx node goes down.

09 Principle Analysis

Nginx runs a master process that manages multiple worker processes (usually one per CPU core). Workers handle client connections independently; if one worker crashes, others continue serving.

Workers accept connections, parse requests, and forward them to upstream servers or serve static content.
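The relevant directives, as a sketch (worker_processes sits in the global block, worker_connections in the events block):

```nginx
# One worker per CPU core; "auto" matches the core count
worker_processes  auto;

events {
    worker_connections  1024;   # per-worker connection cap
}
```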

Conclusion

For optimal performance, set the number of worker processes equal to the number of CPU cores. The master‑worker design lets Nginx reload its configuration without downtime (./nginx -s reload replaces workers one at a time) and isolates failures: if one worker crashes, the others keep serving.

Tags: high availability, load balancing, Linux, nginx, reverse-proxy, web server
Written by

Open Source Linux

Focused on sharing Linux/Unix content, covering fundamentals, system development, network programming, automation/operations, cloud computing, and related professional knowledge.
