Understanding Nginx: Process Model, Event Handling, and High‑Performance Architecture
This article explains Nginx’s multi‑process architecture, event‑driven model, module system, HTTP connection workflow, and I/O multiplexing mechanisms such as select, poll, and epoll, illustrating why the server achieves high performance and scalability.
Nginx is renowned for its high performance, stability, rich feature set, simple configuration, and low resource consumption. This article analyzes the underlying principles that make Nginx so fast.
Nginx Process Model
Multi‑process: one master process and multiple worker processes.
Master process: manages worker processes, receives external signals, forwards commands to workers, and monitors worker status, automatically restarting any worker that crashes.
Worker processes: all workers are equal and handle network requests. The number of workers is configured in worker_processes (usually set to the number of CPU cores) to fully utilize CPU resources while avoiding excessive context‑switch overhead.
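A minimal configuration sketch for this (values are illustrative; `auto` asks Nginx to detect the core count):

```nginx
# Match workers to CPU cores; "auto" detects the count at startup.
worker_processes auto;
```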
HTTP Connection Establishment and Request Handling
When Nginx starts, the master process loads the configuration file.
The master process initializes listening sockets.
The master process forks multiple worker processes.
Worker processes compete for new connections (historically serialized via the accept_mutex to avoid a thundering herd); the winning worker completes the TCP three‑way handshake, establishes the connection, and handles the request.
Why Nginx Achieves High Performance and Concurrency
Uses a multi‑process, asynchronous, non‑blocking I/O model (I/O multiplexing with epoll).
The request lifecycle: connection establishment → request read → request parsing → request processing → response generation.
All of these steps map to low‑level socket read/write events.
Nginx Event Processing Model
Within the event loop, a request is handled by reading the request line and headers, optionally reading the body, processing the request, and then writing the HTTP response (status line, headers, body); each of these steps is driven by read/write readiness events rather than blocking calls.
Modular Architecture
Event module: provides the core event framework (e.g., ngx_events_module, ngx_event_core_module, ngx_epoll_module).
Phase handler: handles client requests and produces response content (e.g., ngx_http_static_module for static files).
Output filter: modifies output content, such as adding footers to HTML pages or rewriting image URLs.
Upstream module: implements reverse‑proxy functionality, forwarding requests to backend servers and returning their responses.
Load‑balancer module: selects a backend server according to the configured algorithm.
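As an illustration of the upstream and load‑balancer modules working together, a typical reverse‑proxy configuration might look like the following (the pool name and backend addresses are hypothetical):

```nginx
# Hypothetical backend pool; least_conn picks the server
# with the fewest active connections.
upstream backend {
    least_conn;
    server 10.0.0.1:8080;
    server 10.0.0.2:8080;
}

server {
    listen 80;
    location / {
        proxy_pass http://backend;   # upstream module forwards the request
    }
}
```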
Common Questions
Nginx vs. Apache
Nginx: I/O multiplexing (epoll/kqueue), high performance and concurrency, low resource usage.
Apache: blocking I/O with a multi‑process/multi‑threaded model; mature and stable, with a richer module ecosystem.
Maximum Connections
Each worker process can handle up to worker_connections connections, limited by the OS file‑descriptor limit (ulimit -n).
Total maximum connections = worker_processes × worker_connections.
When Nginx acts as a reverse proxy, the effective maximum is halved, because each client connection also consumes a connection to the backend.
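With hypothetical values, the arithmetic works out as follows:

```nginx
worker_processes  4;
events {
    worker_connections 1024;
}
# Maximum clients as a web server:    4 × 1024     = 4096
# Maximum clients as a reverse proxy: 4 × 1024 / 2 = 2048
```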
HTTP Request and Response Structure
Request: request line (method, URI, HTTP version), headers, optional body.
Response: status line (HTTP version, status code), headers, optional body.
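A minimal exchange makes the structure concrete (host and body are illustrative):

```http
GET /index.html HTTP/1.1
Host: example.com
Accept: text/html

HTTP/1.1 200 OK
Content-Type: text/plain
Content-Length: 13

Hello, world!
```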
I/O Models
I/O multiplexing: a single thread monitors many sockets and reads/writes only those that are ready.
Blocking I/O with multithreading: a separate thread per request; simpler, but incurs thread‑creation and scheduling overhead.
Select / Poll vs. Epoll Comparison
int select(int maxfdp, fd_set *readfds, fd_set *writefds, fd_set *errorfds, struct timeval *timeout);
int poll(struct pollfd fds[], nfds_t nfds, int timeout);
Select: limited to FD_SETSIZE (typically 1024) descriptors, scans the fd_set linearly, and copies the whole set between user and kernel space on every call.
Poll: replaces fd_set with an array of pollfd, removing the descriptor limit, but still copies the array and scans it linearly.
Epoll: event‑driven; interest in each descriptor is registered once, the kernel places ready descriptors on a ready list, there is no practical descriptor limit, and no linear scan is needed.
All three mechanisms provide I/O multiplexing, but epoll offers superior scalability for high‑concurrency workloads.
Concurrency Handling in Nginx
In practice, Nginx can handle 10,000–30,000 concurrent connections on a typical server after tuning CPU and memory resources.