How Nginx Handles Millions of Concurrent Connections: Inside Its Master‑Worker and Event‑Driven Architecture
This article explains Nginx's core Master‑Worker process model, high‑performance event‑driven design, I/O multiplexing with epoll, and asynchronous non‑blocking I/O, showing how these techniques enable the server to sustain millions of simultaneous connections.
Nginx is an essential middleware for large‑scale architectures. This article explains the technologies that enable Nginx to handle millions of concurrent connections.
Nginx Core Architecture
Nginx uses a classic Master‑Worker process model, which is the foundation of its high performance and high availability.
The overall architecture is shown below:
<code>          +----------+
          |  Master  |
          +----+-----+
               |
      +--------+--------+
      |                 |
+-----------+     +-----------+
| Worker 1  |     | Worker 2  |  ...
+-----------+     +-----------+</code>The Master process reads and parses configuration files and manages the lifecycle of Worker processes (start, stop, restart, and reload).
Worker processes handle client traffic: HTTP requests and TCP/UDP stream connections. The number of Workers is typically set to the number of CPU cores (in nginx.conf, worker_processes auto;), or sometimes twice that, so that multiple Workers can process requests in parallel and fully utilize multi-core CPUs.
High‑Performance Event‑Driven Model
The event‑driven model is the core that allows Nginx to process millions of concurrent connections.
Unlike the traditional “one thread per connection” model, Nginx uses an event‑driven approach.
As shown below:
<code>1. epoll_wait waits for ready events (e.g., connection/read/write)
2. When an event is ready, the corresponding handler is executed (e.g., read data)
3. Continue listening for the next batch of events</code>The program does not passively wait for tasks; it actively listens and responds to events. When an event occurs (data arrival, timer, user action), the registered callback handles it. Nginx only reacts when events are ready, greatly reducing idle time and blocking.
I/O Multiplexing
The underlying support for the event‑driven model is the operating system’s I/O multiplexing mechanism.
On Linux, Nginx uses epoll, the high-performance multiplexing mechanism on that platform. epoll uses an “event-ready notification” mechanism: when a file descriptor becomes ready, the kernel adds it to epoll’s ready list, so a call to epoll_wait hands back only the descriptors that actually have work to do. Compared with select/poll, which must scan every registered descriptor on each call, epoll scales to millions of connections and avoids unnecessary traversal.
Asynchronous Non‑Blocking I/O
All I/O operations in Nginx are asynchronous and non-blocking: a read or write call returns immediately, and if no data is available it fails fast with EAGAIN rather than suspending the process.
This ensures that Worker processes never block on I/O and remain busy handling ready events.
<code>Client socket → register read event (non‑blocking) → wait for data → event triggers → callback reads → register write event → data written → keepalive or close</code>In summary, asynchronous non‑blocking I/O combined with the event‑driven model and I/O multiplexing is the core of Nginx’s high‑performance capability.
Mike Chen's Internet Architecture
Over ten years of BAT architecture experience, shared generously!