Nginx Architecture Overview: Modular Design, Event‑Driven Model, Multi‑Stage Asynchronous Processing, Master/Worker Processes, and Memory Pool
This article explains Nginx's high‑performance architecture, covering its modular design, event‑driven processing, multi‑stage asynchronous request handling, master‑worker process model, and memory‑pool implementation, illustrating how these components together achieve scalability and low latency.
These notes are based on some books about Nginx I have been reading recently.
Nginx is a widely used high‑performance web server whose efficiency stems from a well‑designed architecture consisting of modular design, an event‑driven model, multi‑stage asynchronous request processing, a master‑worker process layout, and a custom memory‑pool system.
Modular Design
Nginx’s core is minimal; almost everything else is implemented as a module. The official distribution defines five module types: core, configuration, event, HTTP, and mail modules. Core and configuration modules are tightly coupled with the framework, while the event module underpins the HTTP and mail modules, which operate at the application layer.
Event‑Driven Architecture
The event‑driven model works by having event sources (e.g., network cards, disks) generate events that are collected and dispatched by an event module. Modules register the event types they are interested in; when an event occurs, the dispatcher forwards it to the appropriate module.
Traditional web servers such as Apache treat only connection establishment and teardown as events; the request itself is processed synchronously by a dedicated process or thread, which sits tying up resources whenever it waits on I/O. Nginx, by contrast, treats modules as event consumers: the dispatcher invokes a module only when an event for it arrives, which reduces resource waste and improves throughput.
Multi‑Stage Asynchronous Request Processing
Based on the event‑driven core, Nginx splits request handling into several asynchronous stages. For a static file request, up to seven stages are defined (e.g., reading the request line, locating the file, sending the header, transmitting the body). Each stage is triggered by specific events, allowing the server to pause and resume processing as events arrive.
Master and Worker Process Design
When Nginx starts, it creates one master process and multiple worker processes. The master process manages workers (signals, monitoring, restarts) while workers handle client events. Workers are equal peers; each request is processed by a single worker, and the number of workers is typically set to the number of CPU cores.
The design brings several benefits:
Utilizes multi‑core CPUs for concurrent processing.
Implements load balancing among workers via inter‑process communication.
Allows the master to monitor and restart workers, supporting hot upgrades and configuration reloads without downtime.
Memory‑Pool Design
To avoid memory fragmentation and reduce system calls, Nginx uses a simple memory‑pool mechanism. Each request (or TCP connection) gets its own pool, which is allocated once and released in bulk when the request finishes, lowering CPU overhead and improving memory utilization.
This approach also simplifies module development, as modules do not need to manage individual allocations.
Architecture Digest