Understanding Nginx: History, Architecture, Proxy Types, and Load‑Balancing
This article explains the origins of Nginx, compares it with Apache, describes why Nginx gained popularity, details forward and reverse proxy concepts, and outlines the various load‑balancing algorithms it supports, providing a comprehensive overview for backend developers and operations engineers.
Nginx is a high‑performance, open‑source HTTP server and reverse proxy, written in C by Russian engineer Igor Sysoev while he worked at Rambler Media.
Like Apache, Nginx is a web server that serves resources over HTTP, addressing them with URIs/URLs. Apache, however, was designed in an era of limited bandwidth and hardware; its process‑and‑thread‑per‑connection model leads to high memory consumption and poor handling of massive concurrent connections.
Because of these limitations, a lightweight, event‑driven server—Nginx—was born.
Why Nginx became popular:
Nginx uses an event‑driven architecture, allowing a small number of worker processes to handle tens of thousands of concurrent TCP connections with a modest, predictable memory footprint.
Its modular design and free software license encourage a rich ecosystem of third‑party modules.
It runs on many platforms, including Linux, Windows, FreeBSD, Solaris, AIX, and macOS.
These design choices give Nginx excellent stability.
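The event‑driven model described above is exposed through a few top‑level directives. The sketch below is a minimal, illustrative fragment of an `nginx.conf`; the specific values (`auto`, `1024`) are common defaults, not recommendations from the original article:

```nginx
# One worker per CPU core; each worker runs a single event loop.
worker_processes auto;

events {
    # Maximum simultaneous connections per worker process.
    worker_connections 1024;
    # Use the epoll event notification mechanism (Linux).
    use epoll;
}
```

Because each worker multiplexes all of its connections in one event loop rather than spawning a process or thread per connection, memory usage stays roughly constant as concurrency grows.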
Nginx can act as a pure HTTP server for publishing websites, or as a reverse proxy to implement load balancing.
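A minimal static‑site configuration illustrates the "pure HTTP server" role. The hostname and document root below are placeholders, not values from the article:

```nginx
server {
    listen 80;
    server_name example.com;        # hypothetical hostname
    root /var/www/html;             # hypothetical document root

    location / {
        # Serve the requested file or directory, else return 404.
        try_files $uri $uri/ =404;
    }
}
```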
Proxy concepts
A proxy is an intermediary that forwards requests from a client to a target server. In a forward proxy, the client knows the proxy’s address and the proxy hides the client’s identity from the target server. In a reverse proxy, the proxy sits in front of one or more backend servers, hiding the servers’ identities from the client.
Typical forward‑proxy use cases include accessing otherwise blocked sites (the proxy fetches content on the client's behalf) and caching frequently requested resources.
Typical reverse‑proxy use cases include protecting internal servers, distributing traffic across a server farm, and providing a single public entry point for a service.
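The "single public entry point" pattern is typically configured with `proxy_pass`. This is a sketch under assumed names; the backend address and headers shown are conventional choices, not taken from the article:

```nginx
server {
    listen 80;

    location / {
        # Forward every request to a backend application server.
        proxy_pass http://127.0.0.1:8080;   # hypothetical backend

        # Preserve the original request's host and client address
        # so the backend can log and route correctly.
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

From the client's point of view there is only one server; the backend's identity stays hidden, which is exactly the reverse‑proxy property described above.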
Load balancing
When Nginx receives many client requests, it distributes them among backend servers according to a chosen scheduling algorithm. Common algorithms include:
- round‑robin (the default): requests are distributed to backends in turn; each server can be assigned a weight to receive a proportionally larger share of traffic.
- ip_hash: the client's IP address is hashed, so the same client always reaches the same backend server, which helps with session persistence.
- fair: dynamically balances based on each server's response time (requires the third‑party upstream_fair module).
- url_hash: hashes the requested URL, directing the same URL to the same backend, which improves cache hit rates (historically a third‑party module; newer Nginx versions provide a built‑in hash directive).
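The algorithms above are selected inside an `upstream` block. The server addresses and upstream names below are illustrative placeholders:

```nginx
# Weighted round-robin (the default algorithm): the first
# server receives roughly three requests for every one sent
# to the second.
upstream app_weighted {
    server 10.0.0.1:8080 weight=3;
    server 10.0.0.2:8080;            # weight defaults to 1
}

# ip_hash: pin each client IP to one backend for session persistence.
upstream app_sticky {
    ip_hash;
    server 10.0.0.1:8080;
    server 10.0.0.2:8080;
}

# URL hashing via the built-in hash directive: the same URI
# always maps to the same backend, improving cache locality.
upstream app_cache {
    hash $request_uri consistent;
    server 10.0.0.1:8080;
    server 10.0.0.2:8080;
}

server {
    listen 80;
    location / {
        proxy_pass http://app_weighted;
    }
}
```

Switching algorithms is a one‑line change in the `upstream` block, which makes it easy to experiment with session persistence or cache locality without touching the rest of the configuration.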
Both hardware (e.g., F5) and software load balancers exist; software solutions are more cost‑effective for most companies.
Web server comparison (the original article's comparison chart is not reproduced here).
For more details about Nginx’s load‑balancing modules, see the official Tengine site:
http://tengine.taobao.org/
Overall, Nginx’s event‑driven architecture, modularity, cross‑platform support, and powerful proxy and load‑balancing capabilities make it a preferred choice for modern backend and operations environments.
Top Architect
Top Architect focuses on sharing practical architecture knowledge, covering enterprise, system, website, large‑scale distributed, and high‑availability architectures, plus architecture adjustments using internet technologies. We welcome idea‑driven, sharing‑oriented architects to exchange and learn together.