
How Nginx’s Multi‑Process Architecture Powers High‑Performance Web Serving

This article explains Nginx’s multi‑process model, detailing the roles of master and worker processes, the HTTP connection lifecycle, event‑driven architecture, module types, and performance comparisons with Apache, while also covering I/O models like select, poll, and epoll.

Nginx Process Model

Multi‑process: one master process and multiple worker processes.

Master process: manages worker processes, receives external signals, and forwards them internally.

External interface: receives signals from outside.

Internal forwarding: uses signals to control workers based on external operations.

Monitoring: watches worker status and automatically restarts any worker that terminates unexpectedly.

Worker processes: all workers are equal peers.

Actual handling: workers process network requests.

Worker count: configured in nginx.conf, usually set to the number of CPU cores to maximize resource utilization while avoiding excessive context switching.
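The points above map to a short nginx.conf fragment. This is a hypothetical example, not from the article; the directive names are standard nginx core/events directives:

```nginx
# Pin the worker count to the CPU core count.
worker_processes auto;           # or an explicit number, e.g. 4

events {
    worker_connections 1024;     # per-worker connection cap
    use epoll;                   # event model on Linux
}
```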

HTTP Connection Establishment and Request Handling

When Nginx starts, the master process loads the configuration file.

The master process initializes listening sockets.

The master process forks multiple worker processes.

Worker processes compete for new connections; the winner completes the three‑way handshake, establishes a socket, and processes the request.

Nginx High Performance and Concurrency

Nginx uses a multi‑process + asynchronous non‑blocking model (I/O multiplexing with epoll).

Full request flow: establish connection → read and parse request → process request → send response.

At the low level, this corresponds to read/write socket events.

Nginx Event Handling Model

In Nginx, an HTTP request follows three basic steps:

Receive request: read the request line and headers line by line, then read the body if present.

Process request.

Return response: generate the response line, headers, and body from the processing results.

Modular Architecture

Nginx modules are grouped by functionality:

event module: provides an OS‑independent event handling framework (e.g., ngx_events_module, ngx_event_core_module, ngx_epoll_module).

phase handler: processes client requests and generates response content (e.g., ngx_http_static_module serves static files).

output filter: modifies output content, such as injecting footers or rewriting image URLs.

upstream: implements reverse‑proxy functionality, forwarding requests to backend servers and relaying their responses.

load‑balancer: selects a backend server according to a load‑balancing algorithm.
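The upstream and load‑balancer roles usually appear together in configuration. A hypothetical fragment (server addresses invented for illustration; `least_conn` is one of nginx's built‑in balancing algorithms):

```nginx
upstream backend {
    least_conn;                      # load-balancer: pick the least-loaded server
    server 10.0.0.1:8080;
    server 10.0.0.2:8080;
}

server {
    listen 80;
    location / {
        proxy_pass http://backend;   # upstream: forward to the chosen backend
    }
}
```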

Common Issues Analysis

Nginx vs. Apache

Network I/O model: Nginx uses I/O multiplexing (epoll on Linux, kqueue on FreeBSD); Apache uses blocking I/O with multiple processes or threads.

Resource profile: Nginx achieves high performance and high concurrency with lower resource consumption.

Ecosystem: Apache offers greater stability, fewer bugs, and a richer module ecosystem.

Nginx Maximum Connections

Key points:

Each worker process can hold a limited number of file descriptors (fds), capped by ulimit -n.

Maximum connections = number of workers × maximum connections per worker.

When acting as a reverse proxy, the effective maximum is half of that value because each client connection also opens a connection to the backend.

I/O Models

Scenarios for handling many requests:

I/O multiplexing: a single thread monitors multiple sockets and processes whichever become ready.

Blocking I/O + multithreading: each request is served by its own thread.

select/poll vs. epoll

System call signatures:

int select(int maxfdp, fd_set *readfds, fd_set *writefds, fd_set *errorfds, struct timeval *timeout);
int poll(struct pollfd fds[], nfds_t nfds, int timeout);

select: limited to 1024 fds; requires a linear scan of all fds and copies fd state between kernel and user space on every call.

poll: replaces the fixed‑size fd_set with a dynamic array, removing the hard limit (but still scans linearly).

epoll: event‑driven; interest events are registered once per fd, ready fds are placed on a ready list, and the fd count is virtually unlimited (bounded only by OS resources).

Nginx Concurrency Capability

After proper tuning, Nginx can sustain peak concurrent connections of roughly 10,000–30,000, depending on memory and CPU core count.

Written by

Efficient Ops

This public account is maintained by Xiaotianguo and friends and regularly publishes original technical articles. We focus on operations transformation and aim to accompany you throughout your operations career.
