
Nginx Overview: Architecture, Reverse Proxy, Load Balancing, Static/Dynamic Separation, and High Availability

This article provides a comprehensive guide to Nginx, covering its high‑performance architecture, reverse‑proxy and load‑balancing concepts, static‑dynamic separation, common commands, configuration file structure, practical deployment examples, and high‑availability setup using Keepalived.

Top Architect

Nginx Overview

Nginx is a high‑performance HTTP and reverse‑proxy server known for low memory usage and strong concurrency capabilities.

Nginx was designed with performance as the primary goal, and it is commonly reported to handle up to 50,000 concurrent connections.

Reverse Proxy

Forward Proxy: When clients inside a LAN cannot reach the Internet directly, they send requests through a forward-proxy server, which accesses the Internet on their behalf; the client must be explicitly configured to use the proxy.

Reverse Proxy: Clients are unaware of the proxy; they send requests to the reverse‑proxy server, which forwards them to the appropriate backend server and returns the response, hiding the real server IP.

Load Balancing

Clients send requests to a server; the server may query a database and then return the result to the client.

When traffic grows, a single server cannot meet demand, so multiple servers are added to form a cluster and distribute requests—this is load balancing.

Static/Dynamic Separation

To accelerate website parsing, static pages are served by a dedicated server while dynamic pages are handled by another, reducing load on a single server.

Installation

Reference link:

https://blog.csdn.net/yujing1314/article/details/97267369

Common Nginx Commands

Check version:

./nginx -v

Start:

./nginx

Stop:

./nginx -s stop   # fast shutdown
./nginx -s quit   # graceful shutdown (recommended)

Reload configuration:

./nginx -s reload

Nginx Configuration File

The configuration file consists of three blocks:

Global block: Settings that affect the whole Nginx process, including worker processes and connection limits.

Events block: Controls network connection handling, such as connection serialization and maximum connections.

HTTP block: Contains directives for reverse proxy, load balancing, etc.
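As an illustrative sketch (the worker counts and paths are placeholders, not taken from a specific deployment), a minimal nginx.conf showing the three blocks might look like this:

```nginx
# ---- Global block: affects the whole Nginx process ----
worker_processes  1;           # typically set to the number of CPU cores

# ---- Events block: network connection handling ----
events {
    worker_connections  1024;  # max simultaneous connections per worker
}

# ---- HTTP block: servers, reverse proxy, load balancing ----
http {
    include       mime.types;
    default_type  application/octet-stream;

    server {
        listen       80;
        server_name  localhost;

        location / {
            root   html;
            index  index.html index.htm;
        }
    }
}
```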

Example of a location directive:

location [ = | ~ | ~* | ^~ ] uri {

}

= : exact match without regex, stops further search.

~ : case‑sensitive regex match.

~* : case‑insensitive regex match.

^~ : prefix match with raised priority; if it is the best prefix match, regex locations are skipped.
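A few illustrative examples of the modifiers (the URIs and extensions here are hypothetical):

```nginx
location = /ping { return 200 "pong"; }   # exact match: only the URI /ping
location ~ \.php$ { }                     # case-sensitive regex: .php but not .PHP
location ~* \.(gif|jpg|png)$ { }          # case-insensitive regex: .GIF, .Jpg also match
location ^~ /static/ { }                  # prefix match; skips regex checks when it wins
```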

Reverse Proxy Practice

Goal: Access www.123.com in the browser and have it forward to a Tomcat page on a Linux server.

Implementation steps include configuring Tomcat, adding host entries, and setting up Nginx proxy rules (images omitted for brevity).
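A sketch of the Nginx side of this setup, assuming Tomcat listens on 127.0.0.1:8080 on the same Linux server and the client's hosts file maps www.123.com to that machine:

```nginx
server {
    listen       80;
    server_name  www.123.com;   # matches the host entry added on the client

    location / {
        # forward all requests to the local Tomcat instance
        proxy_pass http://127.0.0.1:8080;
    }
}
```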

Load Balancing Practice

Modify nginx.conf to define upstream servers and load‑balancing method, then reload Nginx.

./nginx -s reload

Load‑balancing methods supported: round‑robin (default), weight, fair (based on response time), ip_hash (session affinity).
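A sketch of the corresponding upstream configuration; the server addresses are illustrative, and the commented-out directives show where the alternative methods would go:

```nginx
http {
    upstream myserver {
        # default is round-robin; uncomment one line to switch methods
        # ip_hash;    # session affinity: same client IP -> same backend
        # fair;       # by response time (requires a third-party module)
        server 192.168.25.147:8080 weight=1;
        server 192.168.25.147:8081 weight=2;  # receives twice the requests
    }

    server {
        listen 80;
        location / {
            proxy_pass http://myserver;
        }
    }
}
```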

Static/Dynamic Separation Practice

Separate static files onto a dedicated server or domain, while dynamic requests are handled by Tomcat; Nginx serves static content and proxies dynamic requests.
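A sketch of such a server block, with assumed paths: static files live under /data/static/ and are served directly, while everything else is proxied to Tomcat:

```nginx
server {
    listen       80;
    server_name  localhost;

    # static content served directly by Nginx
    location /static/ {
        root      /data;    # files resolved under /data/static/
        autoindex on;       # optional: list directory contents
    }

    # dynamic requests proxied to Tomcat
    location / {
        proxy_pass http://127.0.0.1:8080;
    }
}
```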

High Availability with Keepalived

Deploy two Nginx servers, install Keepalived, configure a virtual IP, and set up health‑check scripts.

[root@192 usr]# yum install keepalived -y
[root@192 usr]# rpm -q -a keepalived
keepalived-1.3.5-16.el7.x86_64

Sample Keepalived configuration for the backup node (excerpt); the master node uses state MASTER and a higher priority, such as 100:

global_defs {
   notification_email {
     [email protected]
     [email protected]
     [email protected]
   }
   notification_email_from [email protected]
   smtp_server 192.168.25.147
   smtp_connect_timeout 30
   router_id LVS_DEVEL
}

vrrp_script chk_nginx {
  script "/usr/local/src/nginx_check.sh"
  interval 2
  weight 2
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 51
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.25.50
    }
}
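The vrrp_script block above references /usr/local/src/nginx_check.sh, but the script itself is not shown. A common sketch (the nginx binary path is an assumption) is:

```shell
#!/bin/bash
# If nginx is down, try to restart it; if it still will not start,
# stop keepalived so the virtual IP fails over to the other node.
A=$(ps -C nginx --no-header | wc -l)
if [ "$A" -eq 0 ]; then
    /usr/local/nginx/sbin/nginx
    sleep 2
    if [ "$(ps -C nginx --no-header | wc -l)" -eq 0 ]; then
        killall keepalived
    fi
fi
```

Remember to make the script executable (chmod +x) on both nodes.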

Start Keepalived:

[root@192 sbin]# systemctl start keepalived.service

After one node fails, the virtual IP remains reachable, demonstrating failover.

Conclusion

Nginx runs one master process with multiple worker processes; the worker count should match the number of CPU cores. This master/worker model enables hot deployment (reloading configuration without downtime), and the failure of one worker does not affect the others.

Tags: Operations, Backend Development, High Availability, Load Balancing, Nginx, Reverse Proxy
Written by

Top Architect

Top Architect focuses on sharing practical architecture knowledge, covering enterprise, system, website, large‑scale distributed, and high‑availability architectures, plus architecture adjustments using internet technologies. We welcome idea‑driven, sharing‑oriented architects to exchange and learn together.
