
Mastering Nginx: Reverse Proxy, Load Balancing, and High‑Availability Setup

This guide explains Nginx's core concepts: forward and reverse proxying, load-balancing strategies, and static/dynamic separation. It then covers common commands, the anatomy of the configuration file, practical reverse-proxy and load-balancing examples, and a Keepalived high-availability setup, with step-by-step instructions and the essential configuration snippets for a reliable backend deployment.

Liangxu Linux

Introduction

Nginx is a high-performance HTTP and reverse-proxy server known for low memory usage and strong concurrency; it is commonly cited as handling up to 50,000 simultaneous connections.

Proxy Types

Forward proxy: a proxy that acts on behalf of clients. Machines inside a LAN that cannot reach external resources directly send their requests through a proxy server, which fetches the resources for them; the client must be configured to use the proxy.

Reverse proxy: a proxy that acts on behalf of servers. The client needs no proxy configuration; it simply sends requests to the reverse proxy's address, and the proxy forwards each request to an appropriate backend server and returns the response, hiding the real server's IP.

Load Balancing

When request volume grows, a single server becomes insufficient. Adding more servers and distributing requests among them constitutes load balancing.

Typical flow: 15 requests arrive at the proxy, which distributes them evenly across three backend servers (5 requests each).
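The arithmetic above can be sketched in a few lines of shell (a toy illustration of round-robin; in practice Nginx itself performs the distribution):

```shell
# Assign 15 requests to 3 backends in turn, then count the result:
# each backend ends up with exactly 5 requests.
for i in $(seq 1 15); do
  echo "backend$(( (i - 1) % 3 + 1 ))"
done | sort | uniq -c
```

The output shows a count of 5 for each of backend1, backend2, and backend3.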

Round‑robin (default)

Weight‑based (a higher weight receives a proportionally larger share of requests)

Fair (shortest response time first; requires a third‑party module)

IP‑hash (consistent client‑to‑server mapping)
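These strategies map onto `upstream` directives roughly as follows (a sketch; the server IPs are placeholders, and `fair` needs the third‑party upstream‑fair module compiled in):

```nginx
# Weight: 101 receives twice as many requests as 102
upstream weighted {
    server 192.168.25.101 weight=2;
    server 192.168.25.102 weight=1;
}

# IP-hash: the same client IP always reaches the same backend
upstream sticky {
    ip_hash;
    server 192.168.25.101;
    server 192.168.25.102;
}

# Fair: shortest response time first (third-party module required)
upstream fastest {
    fair;
    server 192.168.25.101;
    server 192.168.25.102;
}
```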

Static/Dynamic Separation

Separating static and dynamic content improves parsing speed and reduces load on a single server. Static files can be served directly by Nginx, while dynamic requests are passed to an application server such as Tomcat.
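A minimal sketch of such a split (the static root path and the Tomcat address are assumptions):

```nginx
server {
    listen 80;

    # Static content: served directly by Nginx from disk
    location ~* \.(html|css|js|png|jpg|gif)$ {
        root    /data/static;
        expires 3d;                # let browsers cache static assets
    }

    # Everything else: dynamic requests forwarded to Tomcat
    location / {
        proxy_pass http://127.0.0.1:8080;
    }
}
```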

Installation & Common Commands

./nginx -v         # check the version
./nginx            # start Nginx
./nginx -s stop    # stop immediately
./nginx -s quit    # graceful shutdown: finish serving in-flight requests first
./nginx -s reload  # reload the configuration without restarting the process

Run these from the sbin directory of the Nginx installation (e.g., /usr/local/nginx/sbin).

Configuration File Structure

The Nginx configuration consists of three main blocks:

Global block : Settings affecting the whole server, placed before the events block.

Events block : Controls network connection handling (e.g., multi‑worker connection serialization).

HTTP block : Contains directives for reverse proxy, load balancing, and other HTTP‑related settings.
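In outline, nginx.conf therefore looks like this (the values shown are the common defaults, included only for orientation):

```nginx
worker_processes  1;            # global block: number of worker processes

events {
    worker_connections  1024;   # events block: max connections per worker
}

http {
    include       mime.types;   # http block: HTTP-level settings
    keepalive_timeout  65;

    server {                    # server blocks live inside http
        listen       80;
        server_name  localhost;
    }
}
```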

Location Directive

Syntax:

location [ = | ~ | ~* | ^~ ] /url/ { … }
= : exact match; stop searching once it matches.
~ : regex match, case‑sensitive.
~* : regex match, case‑insensitive.
^~ : if this prefix matches, regex locations are not checked.
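For example (illustrative paths):

```nginx
location = /health {            # exact match: only the URI /health
    return 200 "ok";
}
location ^~ /static/ {          # prefix match that skips regex locations
    root /data;
}
location ~* \.(jpg|png)$ {      # case-insensitive regex match
    expires 7d;
}
```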

Reverse Proxy Practical Example

Goal: Access www.123.com in a browser and have it forward to a Tomcat instance on port 8080.

Steps:

Configure Tomcat (port 8080) and ensure it is reachable.

Map www.123.com to the Nginx host's IP in the client's hosts file, since the domain is not publicly registered.

Add the following server block to nginx.conf (simplified):

server {
    listen 80;
    server_name www.123.com;
    location / {
        proxy_pass http://127.0.0.1:8080;
    }
}

After reloading Nginx, requests to www.123.com are proxied to Tomcat.

Extended example maps two URL prefixes to different Tomcat instances:

server {
    listen 9001;
    location /edu/ {
        proxy_pass http://127.0.0.1:8080;
    }
    location /vod/ {
        proxy_pass http://127.0.0.1:8081;
    }
}

Visiting http://192.168.25.132:9001/edu/ reaches Tomcat 8080, while /vod/ reaches Tomcat 8081.

Load Balancing Practical Example

Modify nginx.conf to define an upstream pool and use it in a server block:

upstream backend {
    server 192.168.25.101;
    server 192.168.25.102;
    server 192.168.25.103;
}

server {
    listen 80;
    location / {
        proxy_pass http://backend;
    }
}

Reload Nginx to apply the changes. Requests are now distributed among the three backend servers using the default round‑robin method.
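The pool can be tuned further with per‑server parameters (the values below are illustrative):

```nginx
upstream backend {
    server 192.168.25.101 weight=2;                      # receives twice the share
    server 192.168.25.102 max_fails=3 fail_timeout=30s;  # marked down after 3 failures
    server 192.168.25.103 backup;                        # used only when the others are down
}
```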

High Availability with Keepalived

Deploy two Nginx nodes and install Keepalived to provide a virtual IP (VIP) that floats between them.

# yum install keepalived -y
# systemctl start keepalived.service

Sample keepalived.conf (simplified):

global_defs {
    notification_email { [email protected] }
    smtp_server 192.168.25.147
    router_id LVS_DEVEL
}

vrrp_script chk_nginx {
    script "/usr/local/src/nginx_check.sh"
    interval 2
    weight 2
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 51
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress { 192.168.25.50 }
    track_script { chk_nginx }
}
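The vrrp_script block above points at /usr/local/src/nginx_check.sh. A minimal sketch of such a check script follows (the nginx binary path is an assumption; adjust it to your install prefix):

```shell
#!/bin/sh
# If nginx has died, try to restart it once; if it is still down,
# stop keepalived so the VIP fails over to the other node.
check_nginx() {
    if ! pgrep -x nginx >/dev/null 2>&1; then
        /usr/local/nginx/sbin/nginx 2>/dev/null     # attempt a restart (path is an assumption)
        sleep 2
        if ! pgrep -x nginx >/dev/null 2>&1; then
            killall keepalived 2>/dev/null || true  # still down: trigger VIP failover
        fi
    fi
}
check_nginx
```

Remember to make the script executable (chmod +x) on both nodes so Keepalived can run it.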

After starting Keepalived on both nodes, the VIP 192.168.25.50 serves as the public address. The sample above is the backup node; the master node uses state MASTER and a higher priority (e.g., 100). If the master fails, the backup automatically takes over the VIP, keeping the service reachable.

Conclusion

Nginx’s modular architecture—master‑worker processes, flexible location matching, and built‑in reverse‑proxy and load‑balancing capabilities—makes it suitable for both backend service delivery and operational reliability. Combining it with Keepalived provides a robust high‑availability solution for production environments.

Written by

Liangxu Linux

Liangxu, a self‑taught IT professional now working as a Linux development engineer at a Fortune 500 multinational, shares extensive Linux knowledge—fundamentals, applications, tools, plus Git, databases, Raspberry Pi, etc. (Reply “Linux” to receive essential resources.)
