
Understanding Nginx: Features, Proxy Types, Load‑Balancing Algorithms, Process Model, and Request Handling

This article provides a comprehensive overview of Nginx, covering its core functions as an HTTP server and reverse proxy, the differences between forward and reverse proxy, supported load‑balancing algorithms, the master‑worker process architecture, and the detailed steps of how Nginx processes a request.


What is Nginx? Nginx is a high-performance web server that can also act as a reverse proxy, forward proxy, and mail proxy server. It supports FastCGI, SSL/TLS, virtual hosts, URL rewriting, Gzip compression, and a large ecosystem of third-party modules. Its key advantages are a low memory footprint and high concurrency.
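As a quick illustration of these features, a minimal virtual-host configuration with Gzip enabled might look like the following sketch (the server name and paths are hypothetical):

```nginx
http {
    gzip on;                      # compress responses on the fly
    gzip_types text/css application/javascript application/json;

    server {
        listen 80;
        server_name example.com;  # hypothetical virtual host

        root /var/www/example;    # hypothetical document root
        index index.html;
    }
}
```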

Common Functions

1. Proxy Types: A forward proxy sits in front of clients and forwards their requests to target servers, which requires configuration on the client side. A reverse proxy sits in front of servers: it receives client requests and forwards them to backend servers transparently, enabling load balancing and shielding the internal network.
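In configuration terms, a reverse proxy reduces to a proxy_pass directive; the host name and upstream address below are placeholders:

```nginx
server {
    listen 80;
    server_name example.com;      # hypothetical front-facing host

    location / {
        # forward every request to the backend; the client only ever
        # talks to the proxy and never sees the upstream address
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```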

2. Load Balancing : Nginx offers several scheduling algorithms:

Round-robin (default) – distributes requests sequentially across backends; an optional weight value sends proportionally more traffic to stronger servers.

ip_hash – binds a client IP to a specific backend, helping with session persistence.

fair – dynamically allocates requests based on response time (requires the upstream_fair module).

url_hash – hashes the request URL to a fixed backend, improving cache efficiency (requires the hash module).
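The first two algorithms can be sketched as upstream blocks (the server addresses and upstream names are placeholders):

```nginx
# weighted round-robin: srv1 receives roughly twice the traffic of srv2
upstream app_weighted {
    server 10.0.0.1:8080 weight=2;
    server 10.0.0.2:8080 weight=1;
}

# ip_hash: requests from the same client IP stick to one backend
upstream app_sticky {
    ip_hash;
    server 10.0.0.1:8080;
    server 10.0.0.2:8080;
}
```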

Internal Process Model

Nginx primarily uses a multi‑process architecture consisting of a single master process and multiple worker processes. The master process loads configuration, manages workers, and handles signals, while each worker handles network events and client requests. Worker count is typically set to match CPU cores.

Key points of the model:

The master forks workers, each maintaining its own event loop and connection pool.

When a reload signal is received (e.g., via nginx -s reload), the master re-reads the configuration, spawns new workers, and gracefully shuts down the old ones once they finish their in-flight requests.

Workers compete for incoming connections; an accept_mutex prevents the “thundering herd” problem by allowing only one worker to accept a connection at a time.

The isolated process model ensures that a crash in one worker does not affect others, enhancing reliability.
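The model above is controlled by a handful of directives; the values shown here are illustrative, not recommendations:

```nginx
worker_processes auto;          # one worker per CPU core

events {
    worker_connections 1024;    # per-worker connection pool size
    accept_mutex on;            # serialize accept() so only one worker
                                # wakes per connection (avoids the
                                # thundering-herd problem)
}
```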

How Nginx Handles a Request

1. At startup, Nginx parses nginx.conf, creates listening sockets, and forks worker processes.

2. A client establishes a TCP connection; one worker accepts it, creates an ngx_connection_t structure, and registers read/write event handlers.

3. The worker processes the HTTP request, interacts with upstream servers if needed (acting as a client), and uses a per‑worker connection pool to manage ngx_connection_t objects.

4. After the response is sent, the connection is closed and the ngx_connection_t is returned to the free list.

5. The maximum number of concurrent connections is bounded by worker_connections * worker_processes; in proxy scenarios each request consumes both a client-side and an upstream connection, so the effective number of concurrent clients is roughly half the theoretical maximum.
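Using hypothetical numbers, the ceiling in step 5 works out as follows:

```nginx
worker_processes 4;

events {
    worker_connections 1024;
}

# theoretical ceiling: 4 workers x 1024 connections = 4096 concurrent
# connections; when proxying, each client request also occupies an
# upstream connection, so plan for roughly half that many clients
```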

Overall, Nginx’s design—lightweight processes, flexible proxy capabilities, and sophisticated load‑balancing—makes it a powerful solution for high‑traffic web services.

Tags: Backend Development, Load Balancing, nginx, reverse-proxy, process model
Written by

360 Quality & Efficiency

360 Quality & Efficiency focuses on seamlessly integrating quality and efficiency in R&D, sharing 360’s internal best practices with industry peers to foster collaboration among Chinese enterprises and drive greater efficiency value.
