
The Origin of Nginx and Its Applications in Proxy and Load Balancing

This article traces Nginx’s creation by Igor Sysoev, explains its advantages over Apache, and details how Nginx functions as a forward and reverse proxy, supports various load‑balancing algorithms, and is widely used in modern web infrastructure.

Architecture Digest

The Origin of Nginx

If you have never heard of Nginx, you have almost certainly heard of its counterpart, Apache. Both are web servers that identify resources with URIs/URLs and serve them to clients over HTTP.

Early web servers were constrained by the hardware, network bandwidth, and product requirements of their time, leading to distinct design goals. Apache, the long‑standing world‑leading server, is stable, open‑source, and cross‑platform, but it was designed as a heavyweight server and does not handle high concurrency well: its process‑ and thread‑per‑connection model consumes large amounts of memory and CPU under load, slowing response times.

These limitations motivated the creation of a lightweight, high‑concurrency server—Nginx.

Russian engineer Igor Sysoev developed Nginx in C while working for Rambler Media, where it provided stable service for the company. He later released the source code under a permissive free‑software license.

Key reasons for Nginx’s popularity:

Nginx uses an event‑driven architecture, allowing a small number of worker processes to handle very large numbers of concurrent TCP connections.

Its modular design and open‑source license foster a rich ecosystem of third‑party modules.

It runs on many platforms, including Linux, Windows, FreeBSD, Solaris, AIX, and macOS.

These design choices yield excellent stability.

Nginx’s Use Cases

Nginx is a free, open‑source, high‑performance HTTP server and reverse‑proxy server; it also supports IMAP, POP3, and SMTP proxying. It can serve static websites, act as a reverse proxy for load balancing, and more.
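As a sketch of the simplest use case, serving a static website takes only a short server block. The domain, port, and paths below are illustrative, not part of any real deployment:

```nginx
# Minimal static-site server block (all values illustrative).
server {
    listen 80;
    server_name example.com;

    root /var/www/html;    # directory holding the static files
    index index.html;

    location / {
        # Serve the requested file or directory, else return 404.
        try_files $uri $uri/ =404;
    }
}
```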

About Proxy

A proxy acts as an intermediary or channel between a client and a target server.

Two roles exist: the proxied entity (the client, on whose behalf the proxy acts) and the target (the server being accessed). The proxy forwards requests from the client to the target server and relays the response back, much like an agent making a purchase on a customer's behalf.

Forward Proxy

Forward proxies are the most common proxy type. They are used when a client needs to access resources that are otherwise unreachable, such as foreign websites blocked by a firewall. The client sends a request to the proxy, which then fetches the resource and returns it to the client.

Characteristics of forward proxy:

The client explicitly specifies the destination server address.

The server only sees the proxy’s IP, not the original client’s IP, thereby masking client identity.

Typical usage includes bypassing geographic restrictions, caching to accelerate access, authentication, and logging user activity.

Clients must configure the forward‑proxy’s IP address and port before use.
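Nginx is not primarily a forward proxy, and a stock build can only proxy plain HTTP (tunneling HTTPS via CONNECT requires a third‑party module), but a minimal sketch of the idea looks like this; the listen port and resolver address are assumptions for illustration:

```nginx
# Minimal HTTP forward proxy (illustrative; a stock nginx build
# cannot tunnel HTTPS CONNECT requests without extra modules).
server {
    listen 3128;

    location / {
        resolver 8.8.8.8;                        # DNS server to resolve destinations
        # Forward the request to whatever host the client asked for.
        proxy_pass http://$http_host$request_uri;
    }
}
```

A client would then point its proxy settings at this server's IP and port 3128, matching the configuration step described above.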

Reverse Proxy

Reverse proxies sit in front of a pool of backend servers. For example, large e‑commerce sites like Taobao receive massive traffic that a single server cannot handle; Nginx is used as a reverse proxy to distribute requests across many backend servers. Taobao’s customized version is called Tengine:

http://tengine.taobao.org/

The following diagram shows how multiple client requests are received by Nginx and then forwarded to backend business servers, while the client remains unaware of the actual server handling the request.

Clients do not need any special configuration; the reverse proxy operates transparently.
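A minimal reverse‑proxy configuration can make this concrete. The client connects to nginx on port 80 and never learns the backend address; the backend address and headers below are illustrative assumptions:

```nginx
# Reverse proxy: clients talk only to nginx; the backend address
# (127.0.0.1:8080) is an illustrative assumption.
server {
    listen 80;
    server_name shop.example.com;

    location / {
        proxy_pass http://127.0.0.1:8080;          # backend application server
        proxy_set_header Host $host;               # preserve the requested host
        proxy_set_header X-Real-IP $remote_addr;   # pass the client IP to the backend
    }
}
```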

Project Scenarios

In practice, forward and reverse proxies often coexist: a forward proxy forwards client requests to a reverse‑proxy server, which then distributes them among multiple backend servers. The topology is illustrated below.

Differences Between the Two

The diagram below highlights the key differences: in a forward proxy, the proxy and client reside in the same LAN, hiding client information; in a reverse proxy, the proxy and server share a LAN, hiding server information. Functionally, both proxies forward requests and responses, but their positions are swapped.

Load Balancing

When Nginx acts as a reverse proxy, it distributes incoming requests according to configurable rules, known as load‑balancing algorithms. Load refers to the number of client requests received, and balancing distributes these requests across multiple backend servers.

Load balancing can be implemented in hardware (e.g., F5) or software. Hardware solutions are expensive but offer high reliability; many companies opt for software load balancers that run on existing servers.

Nginx supports the following scheduling algorithms:

Round‑robin (default): Requests are distributed to backend servers in turn; an optional weight per server skews the distribution toward more capable machines, and failed servers are automatically removed from the pool.

ip_hash: The client’s IP address is hashed, ensuring the same client is consistently routed to the same backend, which helps with session persistence.

fair: Dynamically adjusts distribution based on each server’s response time; faster servers receive more requests. Requires the upstream_fair module.

url_hash: Requests are hashed based on the URL, directing the same URL to a specific backend, improving cache efficiency. Requires additional Nginx modules.
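The built‑in algorithms are selected in the upstream block. The sketch below shows weighted round‑robin and ip_hash (server names and addresses are illustrative); fair and url_hash are omitted because they require the third‑party modules noted above:

```nginx
# Weighted round-robin (built in): weight skews the distribution.
upstream backend_weighted {
    server 10.0.0.1 weight=3;   # receives roughly 3x the traffic of the next server
    server 10.0.0.2 weight=1;
}

# ip_hash (built in): the same client IP always reaches the same backend.
upstream backend_sticky {
    ip_hash;
    server 10.0.0.1;
    server 10.0.0.2;
}

server {
    listen 80;
    location / {
        proxy_pass http://backend_weighted;   # or backend_sticky for session persistence
    }
}
```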

Recommended Reading

What Is a Graph Database and Its Application Scenarios?

Real‑World Elasticsearch Cases in Major Internet Companies

RabbitMQ vs. Kafka: Which to Choose?

Permission Design in Front‑End/Back‑End Separation

Written by

Architecture Digest

Focusing on Java backend development, covering application architecture from top-tier internet companies (high availability, high performance, high stability), big data, machine learning, Java architecture, and other popular fields.
