How Nginx Uses Epoll in a Multi‑Process Architecture
This article explains Nginx's multi‑process design, detailing how the master process handles socket binding and listening while each worker creates its own epoll instance, registers events, and processes connections through a well‑structured event loop with code examples from the source tree.
1. Single‑process network model
In a single‑process model all network operations—socket creation, bind, listen, epoll creation, event registration, and epoll_wait—are performed in the same process. The minimal example below shows an epoll‑based server that accepts connections and processes read/write events.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/epoll.h>
#include <sys/socket.h>

int main(void){
    int lfd, efd, fd;
    struct epoll_event ev, ep[1024];
    struct sockaddr_in addr = {0};

    // create listening socket (port 8080 here is just an example)
    lfd = socket(AF_INET, SOCK_STREAM, 0);
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(8080);
    bind(lfd, (struct sockaddr *)&addr, sizeof(addr));
    listen(lfd, SOMAXCONN);

    // create epoll instance and register the listening socket
    efd = epoll_create(1024);
    ev.events = EPOLLIN;
    ev.data.fd = lfd;
    epoll_ctl(efd, EPOLL_CTL_ADD, lfd, &ev);

    // event loop
    for(;;){
        int nready = epoll_wait(efd, ep, 1024, -1);
        for(int i = 0; i < nready; ++i){
            if(ep[i].data.fd == lfd){        // new connection
                fd = accept(lfd, NULL, NULL);
                ev.events = EPOLLIN | EPOLLET;
                ev.data.fd = fd;
                epoll_ctl(efd, EPOLL_CTL_ADD, fd, &ev);
            } else {
                // read/write handling for fd = ep[i].data.fd
            }
        }
    }
}

Redis 5.0 and earlier use a very similar loop, achieving tens of thousands of QPS because the workload is almost entirely in‑memory I/O.
2. Why multi‑process?
A single process cannot fully utilize multiple CPU cores. Production servers therefore adopt a multi‑process architecture, which raises questions such as:
Which process performs listen and accept?
Which process discovers read/write events on client sockets?
How are incoming requests distributed among workers?
Is a dedicated computation process required?
Different frameworks answer these questions using variations of the Reactor or Proactor patterns.
3. Nginx case study
3.1 Master process initialization
Nginx separates responsibilities into a Master process and a pool of Worker processes. The Master only creates the listening sockets (bind + listen) and then forks the configured number of Workers.
// src/core/nginx.c
int ngx_cdecl main(int argc, char * const *argv){
    ngx_cycle_t *cycle, init_cycle;
    // 1.1 open listening sockets
    cycle = ngx_init_cycle(&init_cycle);
    // 1.2 start master loop
    ngx_master_process_cycle(cycle);
    return 0;
}

The ngx_cycle_t structure holds the array of listening sockets. ngx_master_process_cycle forks Workers and then enters a signal‑handling loop.
3.2 Master main loop
The loop performs two essential actions:
Spawn the configured number of Workers via fork (implemented in ngx_spawn_process).
Enter an event loop that handles signals such as ngx_quit, ngx_restart, etc.
// src/os/unix/ngx_process_cycle.c
void ngx_master_process_cycle(ngx_cycle_t *cycle){
    ngx_start_worker_processes(cycle, ccf->worker_processes, NGX_PROCESS_RESPAWN);
    for(;;){
        // signal handling (quit, reload, restart …)
    }
}

3.3 Worker process initialization
Each Worker runs ngx_worker_process_cycle. It first calls ngx_worker_process_init to set up modules, resource limits, CPU affinity, etc., then repeatedly invokes ngx_process_events_and_timers.
// src/os/unix/ngx_process_cycle.c
static void ngx_worker_process_cycle(ngx_cycle_t *cycle, void *data){
    ngx_worker_process_init(cycle, worker);
    for(;;){
        ngx_process_events_and_timers(cycle);
        // other periodic work
    }
}

During ngx_worker_process_init the Worker creates its own epoll instance and registers the listening sockets.
// src/event/modules/ngx_epoll_module.c
static ngx_int_t ngx_epoll_init(ngx_cycle_t *cycle, ngx_msec_t timer){
    ep = epoll_create(cycle->connection_n / 2);
    ngx_event_actions = ngx_epoll_module_ctx.actions;
    return NGX_OK;
}

3.4 Event registration and processing
The Worker adds each listening socket to epoll with epoll_ctl. When an event occurs, ngx_process_events (a wrapper for ngx_epoll_process_events) invokes the registered handler, typically ngx_event_accept for new connections.
// src/event/modules/ngx_epoll_module.c
static ngx_int_t ngx_epoll_process_events(ngx_cycle_t *cycle, ...){
    int events = epoll_wait(ep, event_list, nevents, timer);
    for(i = 0; i < events; i++){
        rev->handler(rev);
    }
    return NGX_OK;
}

3.5 Accepting a client connection
ngx_event_accept performs three steps:
Accept the new socket.
Obtain a free ngx_connection_t via ngx_get_connection.
Add the connection to epoll with ngx_add_conn (which maps to ngx_epoll_add_connection).
// src/event/ngx_event_accept.c
void ngx_event_accept(ngx_event_t *ev){
    do{
        int s = accept(lc->fd, &sa.sockaddr, &socklen);
        if(s != -1){    // accept returns -1 on failure, not 0
            ngx_connection_t *c = ngx_get_connection(s, ev->log);
            if(ngx_add_conn(c) == NGX_ERROR){
                ngx_close_accepted_connection(c);
                return;
            }
        }
    } while(ev->available);
}

The newly created connection’s read handler is set to ngx_http_wait_request_handler, which drives the HTTP request processing pipeline.
4. Overall workflow
The Master only creates, binds, and listens on sockets, then forks Workers.
Each Worker creates its own epoll instance, registers the same listening sockets, and runs an event loop.
Nginx’s accept mutex (or SO_REUSEPORT when enabled) ensures that only one Worker accepts a given connection, preventing the thundering‑herd problem.
The selected Worker accepts the connection, creates a ngx_connection_t, registers it with epoll, and hands it to the HTTP module.
This design isolates network I/O to Workers, allowing Nginx to scale across multiple CPU cores while keeping the Master lightweight.
Refining Core Development Skills
Fei has over 10 years of development experience at Tencent and Sogou. Through this account, he shares his deep insights on performance.