Comprehensive Nginx Tutorial: Reverse Proxy, Load Balancing, Static/Dynamic Separation, and High Availability with Keepalived
This article provides a detailed guide to using Nginx as a high‑performance HTTP server: reverse proxying, load balancing, static‑dynamic separation, installation and common commands, configuration‑file structure, practical examples with Tomcat back‑ends, and high availability with Keepalived, complete with code snippets and diagrams.
Nginx is a high‑performance HTTP and reverse‑proxy server known for low memory usage and strong concurrency, capable of handling up to 50,000 simultaneous connections.
The article outlines the overall Nginx knowledge‑map architecture, illustrating its modular design.
Proxy concepts: A forward proxy requires client‑side configuration and acts on behalf of clients to reach external sites, while a reverse proxy is transparent to clients: it forwards their requests to backend servers and exposes only the proxy's own address, hiding the real servers behind it.
Load balancing: As traffic grows, a single server becomes insufficient; the solution is to add multiple servers and distribute requests using Nginx’s load‑balancing features (round‑robin, weight, fair, ip_hash).
Static‑dynamic separation: To improve response speed, static files are served directly by Nginx, while dynamic content is processed by application servers such as Tomcat.
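A minimal sketch of static‑dynamic separation (the domain, filesystem path, and Tomcat port are illustrative assumptions, not taken from the original setup):

```nginx
server {
    listen      80;
    server_name example.com;            # hypothetical domain

    # Static assets served directly by Nginx, with browser caching
    location ~* \.(css|js|png|jpg|gif|ico)$ {
        root    /data/static;           # hypothetical static-file directory
        expires 3d;                     # let clients cache static files
    }

    # Everything else is handed to the Tomcat application server
    location / {
        proxy_pass http://127.0.0.1:8080;
    }
}
```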
Installation and common commands:
./nginx -v — print the Nginx version
./nginx — start the server
./nginx -s stop — stop immediately
./nginx -s quit — stop gracefully, letting in‑flight requests finish
./nginx -s reload — reload the configuration without downtime
Configuration file structure: The Nginx config consists of three main blocks:
Global block: Settings that affect the entire server, such as worker processes and connection limits.
Events block: Network connection handling parameters (e.g., worker_connections).
HTTP block: Contains directives for reverse proxy, load balancing, and location matching.
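The three blocks above can be sketched as a minimal nginx.conf (the values shown are common defaults, not settings from the original article):

```nginx
# ---- Global block: server-wide settings ----
worker_processes  1;              # number of worker processes

# ---- Events block: connection handling ----
events {
    worker_connections  1024;     # max simultaneous connections per worker
}

# ---- HTTP block: virtual hosts, proxying, load balancing ----
http {
    include       mime.types;
    default_type  application/octet-stream;

    server {
        listen       80;
        server_name  localhost;

        location / {
            root   html;
            index  index.html index.htm;
        }
    }
}
```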
Location syntax example:
location [ = | ~ | ~* | ^~ ] /url/ { ... }
= : exact match; the search stops as soon as it matches.
~ : case‑sensitive regex.
~* : case‑insensitive regex.
^~ : prefix match that, when it is the longest matching prefix, skips the regex checks.
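The four modifiers might be combined like this (the URIs and backend addresses are made up for illustration):

```nginx
location = /health   { return 200 "ok\n"; }         # exact match only for /health
location ^~ /static/ { root /data; }                # prefix match; regex checks are skipped
location ~  \.jsp$   { proxy_pass http://127.0.0.1:8080; }  # case-sensitive regex
location ~* \.(png|jpg)$ { expires 7d; }            # case-insensitive regex
```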
Reverse‑proxy practical example: Map www.123.com to a Tomcat instance by configuring Nginx to listen on port 80 and proxy to localhost:8080. Screenshots in the original article illustrate the before/after configurations and the resulting page.
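The server block for that example would look roughly like this (assuming www.123.com resolves to the Nginx host, e.g. via an /etc/hosts entry):

```nginx
server {
    listen       80;
    server_name  www.123.com;

    location / {
        proxy_pass http://127.0.0.1:8080;   # forward everything to the local Tomcat
    }
}
```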
Another example shows two back‑ends (8080 for /edu/ and 8081 for /vod/) using regex locations to route requests based on URL path.
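A sketch of that path‑based routing (the listen port is an assumption; this style of tutorial typically uses 9001):

```nginx
server {
    listen 9001;

    location ~ /edu/ {
        proxy_pass http://127.0.0.1:8080;   # Tomcat instance 1
    }
    location ~ /vod/ {
        proxy_pass http://127.0.0.1:8081;   # Tomcat instance 2
    }
}
```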
Load‑balancing implementation: Modify nginx.conf to define an upstream block with multiple servers, then reload Nginx. The article lists the four balancing methods: round‑robin (default), weight, fair, and ip_hash.
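An upstream block along those lines might look like this (the upstream name and backend addresses are hypothetical; note that fair is not built in and requires a third‑party module):

```nginx
http {
    upstream myserver {                      # hypothetical upstream name
        # ip_hash;                           # uncomment to pin each client IP to one server
        server 192.168.25.51:8080 weight=1;
        server 192.168.25.52:8080 weight=2;  # receives roughly twice as many requests
        # fair;                              # needs the third-party nginx-upstream-fair module
    }

    server {
        listen 80;
        location / {
            proxy_pass http://myserver;      # round-robin by default
        }
    }
}
```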
High availability with Keepalived: Install Keepalived, configure a virtual IP (e.g., 192.168.25.50) and a VRRP health‑check script that monitors Nginx, then start the service. Failover tests demonstrate that traffic continues when the primary node goes down.
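A sketch of the MASTER node's keepalived.conf (interface name, script path, priorities, and password are illustrative assumptions):

```conf
# /etc/keepalived/keepalived.conf on the MASTER node
vrrp_script chk_nginx {
    script   "/usr/local/src/nginx_check.sh"  # hypothetical script that restarts/fails on dead Nginx
    interval 2                                # run the check every 2 seconds
    weight   -20                              # lower this node's priority if the check fails
}

vrrp_instance VI_1 {
    state MASTER              # set to BACKUP on the standby node
    interface eth0            # NIC the VRRP instance binds to
    virtual_router_id 51      # must match on master and backup
    priority 100              # backup uses a lower value, e.g. 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    track_script {
        chk_nginx
    }
    virtual_ipaddress {
        192.168.25.50         # the virtual IP clients connect to
    }
}
```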
Worker/Master model: Nginx runs a master process that manages multiple worker processes (typically one per CPU core). Workers handle requests independently, allowing hot reloads and fault isolation.
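The worker model is tuned in the global and events blocks; a common sketch (values are conventional defaults, not from the original article):

```nginx
# Global block: the master process spawns this many workers.
# "auto" starts one worker per CPU core.
worker_processes  auto;

events {
    worker_connections  1024;   # per-worker connection cap
}

# On "./nginx -s reload" the master starts new workers with the
# updated config and gracefully retires the old ones, so existing
# connections are not dropped (hot reload).
```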
Overall, the guide equips readers with the knowledge to deploy, configure, and maintain a robust Nginx‑based web infrastructure, covering basic installation, advanced proxying, load balancing, static‑dynamic separation, and high‑availability strategies.
Top Architect
Top Architect focuses on sharing practical architecture knowledge, covering enterprise, system, website, large‑scale distributed, and high‑availability architectures, plus architecture adjustments using internet technologies. We welcome idea‑driven, sharing‑oriented architects to exchange and learn together.