
Master Nginx: Reverse Proxy, Load Balancing, and High‑Availability Essentials

This guide explains Nginx's core concepts, including reverse proxying, load balancing, static-dynamic separation, common commands, and the configuration file structure, and finishes with a high-availability setup using Keepalived, with step-by-step examples and practical diagrams for reliable web service deployment.


Nginx Overview

Nginx is a high‑performance HTTP and reverse‑proxy server noted for low memory consumption and strong concurrency, capable of supporting up to 50,000 simultaneous connections.

Knowledge Map

The following diagram illustrates the overall Nginx knowledge structure.

Reverse Proxy

Forward proxy: Clients inside a LAN cannot reach the Internet directly, so they send requests through a proxy server that accesses the Internet on their behalf. Because this proxy is configured on the client side, it is called a forward proxy.

Reverse proxy: Clients are unaware of the proxy because no client‑side configuration is needed. Requests are sent to the reverse‑proxy server, which selects a target server, fetches the data, and returns it to the client, hiding the real server IP.

Load Balancing

When request volume grows, a single server cannot meet demand. Adding more servers and distributing requests among them constitutes load balancing.

Typical request‑response flow:

Load‑balancing diagram:

Static‑Dynamic Separation

Separating static and dynamic content onto different servers speeds up page rendering and reduces load on the application server.

Before separation:

After separation:

Installation

Reference link: https://blog.csdn.net/yujing1314/article/details/97267369

Nginx Common Commands

Run these from the Nginx sbin directory:

<code>./nginx -v          # show the Nginx version
./nginx             # start Nginx
./nginx -s stop     # stop immediately
./nginx -s quit     # stop gracefully once in-flight requests finish
./nginx -s reload   # reload the configuration without restarting</code>

Configuration File Structure

The configuration file consists of three main blocks:

Global block – settings that affect the entire Nginx process.

Events block – network‑connection parameters such as worker connections.

HTTP block – where reverse proxy, load balancing, and other web‑related directives reside.
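A minimal skeleton shows how the three blocks nest. The directive values here are illustrative defaults, not taken from the article:

```nginx
# Global block: settings for the whole Nginx process
worker_processes  1;

# Events block: connection handling
events {
    worker_connections  1024;   # max simultaneous connections per worker
}

# HTTP block: servers, reverse proxy, load balancing
http {
    include       mime.types;
    default_type  application/octet-stream;

    server {
        listen       80;
        server_name  localhost;

        location / {
            root   html;
            index  index.html index.htm;
        }
    }
}
```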

Example of a location directive:

<code>location [ = | ~ | ~* | ^~ ] url { }</code>

= – exact match, stop searching.

~ – case‑sensitive regex.

~* – case‑insensitive regex.

^~ – prefix match; when it is the longest matching prefix, Nginx uses it and skips the regex checks.
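The four modifiers can be sketched side by side. The paths and backends here are made up for illustration:

```nginx
location = /ping         { return 200 "pong\n"; }              # exact match only
location ^~ /static/     { root /data; }                       # prefix match; skips regex checks
location ~  \.php$       { proxy_pass http://127.0.0.1:9000; } # case-sensitive regex
location ~* \.(jpg|png)$ { expires 30d; }                      # case-insensitive regex
```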

Reverse Proxy Practical Example

1. Configure a reverse proxy that maps www.123.com to a local Tomcat listening on port 8080.
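Assuming www.123.com resolves to this machine (for example via an /etc/hosts entry), a minimal server block for this mapping might look like:

```nginx
server {
    listen       80;
    server_name  www.123.com;

    location / {
        # forward every request to the local Tomcat
        proxy_pass http://127.0.0.1:8080;
    }
}
```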

2. After configuring Tomcat, the request flow is illustrated below:

Another example uses regex-based location rules to route http://192.168.25.132:9001/edu/ to 192.168.25.132:8080 and http://192.168.25.132:9001/vod/ to 192.168.25.132:8081.
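The /edu/ and /vod/ routing above can be sketched as a single server block, using the IPs and ports from the example:

```nginx
server {
    listen       9001;
    server_name  192.168.25.132;

    # requests whose path contains /edu/ go to the first Tomcat
    location ~ /edu/ {
        proxy_pass http://192.168.25.132:8080;
    }

    # requests whose path contains /vod/ go to the second Tomcat
    location ~ /vod/ {
        proxy_pass http://192.168.25.132:8081;
    }
}
```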

Load Balancing Practical Example

Modify nginx.conf to define an upstream server group and enable round-robin distribution, then reload Nginx:

<code>./nginx -s reload</code>

Requests are then distributed according to the chosen strategy: round-robin (the default), weight, ip_hash, or fair (a third-party module).
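A sketch of such an upstream group, with the backend addresses assumed from the earlier examples; uncomment one directive to switch strategies:

```nginx
http {
    upstream myserver {
        # default strategy: round-robin
        server 192.168.25.132:8080 weight=1;   # raise weight to receive more requests
        server 192.168.25.132:8081 weight=1;
        # ip_hash;   # pin each client IP to one backend (sticky sessions)
        # fair;      # third-party module: prefer the fastest-responding backend
    }

    server {
        listen 80;

        location / {
            proxy_pass http://myserver;
        }
    }
}
```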

Static‑Dynamic Separation Practical Example

Configure Nginx to serve static files directly while proxying dynamic requests to Tomcat. The architecture diagram is shown below:
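One way to express the split in configuration; the paths here are assumptions, not from the article:

```nginx
server {
    listen 80;

    # static content served directly from disk
    location /static/ {
        root     /data;   # files live under /data/static/
        expires  3d;      # let browsers cache static assets
        autoindex on;     # optional: list directory contents
    }

    # everything else is dynamic and goes to Tomcat
    location / {
        proxy_pass http://127.0.0.1:8080;
    }
}
```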

High Availability with Keepalived

Deploy two Nginx instances with Keepalived to provide a virtual IP. Install Keepalived:

<code>[root@192 usr]# yum install keepalived -y
[root@192 usr]# rpm -q -a keepalived
keepalived-1.3.5-16.el7.x86_64</code>

Edit /etc/keepalived/keepalived.conf to define a virtual IP (e.g., 192.168.25.50) and set the MASTER/BACKUP state on each node:

<code>global_defs {
   notification_email {
     [email protected]
     [email protected]
     [email protected]
   }
   notification_email_from [email protected]
   smtp_server 192.168.25.147
   smtp_connect_timeout 30
   router_id LVS_DEVEL
}

vrrp_script chk_nginx {
  script "/usr/local/src/nginx_check.sh"   # health-check script, run periodically
  interval 2                               # run every 2 seconds
  weight 2                                 # priority adjustment when the check succeeds
}

vrrp_instance VI_1 {
    state BACKUP              # use MASTER on the primary node
    interface ens33           # NIC to bind; check with ip addr
    virtual_router_id 51      # must be identical on MASTER and BACKUP
    priority 90               # MASTER should be higher, e.g. 100
    advert_int 1              # VRRP advertisement interval in seconds
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.25.50
    }
}</code>
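The vrrp_script block above points at /usr/local/src/nginx_check.sh, which the article does not show. A hedged sketch of a typical version follows: it tries to restart Nginx once, and if Nginx stays down it stops Keepalived so the BACKUP node can claim the virtual IP. The Nginx binary path is an assumption, and the error guards (`|| true`) keep the sketch safe to run on a machine where Nginx is not installed at that path.

```shell
#!/bin/bash
# Hypothetical contents of /usr/local/src/nginx_check.sh.
# /usr/local/nginx/sbin/nginx is an assumed install path; adjust to yours.

nginx_procs() {
    # count running nginx processes (0 when nginx is down)
    ps -C nginx -o pid= | wc -l
}

if [ "$(nginx_procs)" -eq 0 ]; then
    /usr/local/nginx/sbin/nginx 2>/dev/null || true   # try one restart
    sleep 2
    if [ "$(nginx_procs)" -eq 0 ]; then
        # nginx stayed down: stop keepalived so the BACKUP node
        # advertises the virtual IP instead
        killall keepalived 2>/dev/null || true
    fi
fi
```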

Start the service:

<code>[root@192 sbin]# systemctl start keepalived.service</code>

If the MASTER node fails, Keepalived promotes the BACKUP node, which takes over the virtual IP, so the service remains reachable after failover.

Worker / Master Architecture

Nginx runs a master process that manages one or more worker processes; each worker handles client connections independently, so a worker crash does not affect others. The number of workers should match the CPU core count.
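The worker count is set in the global block; `auto` (an illustrative choice, not from the article) asks Nginx to match the detected CPU core count:

```nginx
worker_processes  auto;   # one worker per CPU core

events {
    worker_connections  1024;   # per-worker limit; max clients ≈ workers × connections
}
```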

Summary

Understanding Nginx’s reverse‑proxy, load‑balancing, static‑dynamic separation, and high‑availability mechanisms enables you to build scalable, resilient web services. Proper configuration of global, events, and HTTP blocks, along with Keepalived for failover, ensures continuous availability even when individual nodes fail.

Written by

Efficient Ops

This public account is maintained by Xiaotianguo and friends and regularly publishes original technical articles. We focus on operations practice and aim to accompany you throughout your operations career.
