
Understanding Nginx Architecture: Daemon Processes, Workers, Connections, and Core Data Structures

This article explains Nginx's high‑performance architecture, covering its daemon mode with master and worker processes, the thundering‑herd problem, advantages of process‑based concurrency, asynchronous non‑blocking I/O, connection handling, keep‑alive and pipeline techniques, as well as key internal data structures such as arrays, queues, lists, strings, memory pools, hash tables, and red‑black trees.


Nginx Overview

Nginx ("engine x") is a high‑performance HTTP and reverse‑proxy server that also supports IMAP/POP3/SMTP.

Daemon and Worker Processes

After startup Nginx runs as a daemon with a master process and multiple worker processes. The master manages workers, handling signals, monitoring status, and restarting workers on failure. Workers handle network events independently; each request is processed by a single worker, and the number of workers is typically set to match CPU cores. CPU binding can be used to reduce context‑switch overhead.
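As a sketch, the worker-per-core setup described above is typically expressed in nginx.conf like this (the values are illustrative, not recommendations):

```nginx
# One worker per CPU core; "auto" asks Nginx to detect the core count.
worker_processes     auto;

# Pin each worker to a core to reduce context-switch and cache-migration cost.
worker_cpu_affinity  auto;
```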

Thundering Herd Problem

All workers inherit the listening socket from the master, so when a new connection arrives every worker is woken up, yet only one can successfully accept it; the rest fail with an accept error (typically EAGAIN) and have wasted a wakeup. Nginx mitigates this contention with the accept mutex described below.

Advantages of Process‑Based Concurrency

Processes do not share resources, eliminating the need for locks.

Failure of one process does not affect others; the master can quickly spawn a replacement.

Programming is simpler compared to multithreading.

Multithreading Issues

Threads consume more memory and incur high context‑switch costs under heavy concurrency, as seen in Apache's thread‑per‑request model.

Asynchronous Non‑Blocking Model

Nginx uses an event‑driven, non‑blocking architecture: no threads are created per request, and events are processed with minimal overhead.

No thread creation; each request uses little memory.

No context switches; event handling is lightweight.

Testing by Taobao's Tengine team showed roughly 2 million concurrent connections on a machine with 24 GB of memory.

Connection Handling

Connections are TCP based (SOCK_STREAM). Nginx parses configuration, creates a master process, opens listening sockets, forks workers, and each worker accepts connections, creating an ngx_connection_t structure to store client information.

struct ngx_connection_s {
    void               *data;
    ngx_event_t        *read;
    ngx_event_t        *write;
    ngx_socket_t        fd;
    ngx_recv_pt         recv;
    ngx_send_pt         send;
    ngx_recv_chain_pt   recv_chain;
    ngx_send_chain_pt   send_chain;
    ngx_listening_t    *listening;
    off_t               sent;
    ngx_log_t          *log;
    ngx_pool_t         *pool;
    struct sockaddr    *sockaddr;
    socklen_t           socklen;
    ngx_str_t           addr_text;
    /* ... other fields ... */
};

Connection Pool

Each worker has a connection pool sized by worker_connections. Free connections are kept in a linked list, free_connections, to avoid frequent allocation and deallocation.

Accept Mutex

To prevent contention when multiple workers try to accept simultaneously, Nginx uses accept_mutex. Only the worker currently holding the mutex registers accept events.

ngx_accept_disabled = ngx_cycle->connection_n / 8
                      - ngx_cycle->free_connection_n;

if (ngx_use_accept_mutex) {
    if (ngx_accept_disabled > 0) {
        /* worker is near capacity: skip competing for the mutex this cycle */
        ngx_accept_disabled--;

    } else {
        if (ngx_trylock_accept_mutex(cycle) == NGX_ERROR) {
            return;
        }

        if (ngx_accept_mutex_held) {
            /* holder posts events so accepts are handled promptly */
            flags |= NGX_POST_EVENTS;

        } else if (timer == NGX_TIMER_INFINITE
                   || timer > ngx_accept_mutex_delay) {
            /* loser retries after a short delay instead of spinning */
            timer = ngx_accept_mutex_delay;
        }
    }
}

HTTP Request Processing

Requests are parsed into ngx_http_request_t, which stores the method, URI, headers, body, and processing state.

struct ngx_http_request_s {
    uint32_t                          signature;
    ngx_connection_t                 *connection;
    void                            **ctx;
    void                            **main_conf;
    void                            **srv_conf;
    void                            **loc_conf;
    ngx_http_event_handler_pt         read_event_handler;
    ngx_http_event_handler_pt         write_event_handler;
    /* ... many fields ... */
    ngx_uint_t                        method;
    ngx_uint_t                        http_version;
    ngx_str_t                         request_line;
    ngx_str_t                         uri;
    ngx_str_t                         args;
    /* ... */
};

The processing flow includes initializing the request, handling headers, processing the body, invoking location handlers, and executing phase handlers (location config, response generation, header sending, body sending).

Keep‑Alive, Pipeline, and Lingering Close

Keep‑alive reuses a TCP connection for multiple requests, reducing handshake overhead. Pipelining allows a client to send multiple requests without waiting for each response, improving throughput. Lingering close delays connection teardown to drain any remaining client data, so an abrupt close does not trigger a TCP RST that could discard response data the client has not yet read.
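These behaviors map onto configuration directives; the following sketch uses illustrative values only:

```nginx
keepalive_timeout   65s;     # how long an idle keep-alive connection stays open
keepalive_requests  1000;    # max requests served over one connection
lingering_close     on;      # drain remaining client data before closing
```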

Core Data Structures

Arrays

typedef struct {
    void        *elts;   /* pointer to data */
    ngx_uint_t   nelts;  /* number of elements */
    size_t       size;   /* size of each element */
    ngx_uint_t   nalloc; /* allocated capacity */
    ngx_pool_t  *pool;   /* memory pool */
} ngx_array_t;

Queues

struct ngx_queue_s {
    ngx_queue_t  *prev;
    ngx_queue_t  *next;
};

Lists

typedef struct {
    ngx_list_part_t  *last;
    ngx_list_part_t   part;
    size_t            size;
    ngx_uint_t        nalloc;
    ngx_pool_t       *pool;
} ngx_list_t;

Strings

typedef struct {
    size_t      len;
    u_char     *data;
} ngx_str_t;

Memory Pools

struct ngx_pool_s {
    ngx_pool_data_t       d;
    size_t                max;
    ngx_pool_t           *current;
    ngx_chain_t          *chain;
    ngx_pool_large_t     *large;
    ngx_pool_cleanup_t   *cleanup;
    ngx_log_t            *log;
};

Hash Tables

typedef struct {
    ngx_hash_elt_t  **buckets;
    ngx_uint_t        size;
} ngx_hash_t;

Red‑Black Trees

struct ngx_rbtree_node_s {
    ngx_rbtree_key_t       key;
    ngx_rbtree_node_t     *left;
    ngx_rbtree_node_t     *right;
    ngx_rbtree_node_t     *parent;
    u_char                 color;
    u_char                 data;
};

struct ngx_rbtree_s {
    ngx_rbtree_node_t     *root;
    ngx_rbtree_node_t     *sentinel;
    ngx_rbtree_insert_pt   insert;
};

Conclusion

From the master/worker process model and event-driven connection handling to the compact data structures underneath, these design choices work together to give Nginx its high performance and scalability.

Tags: Backend Development, Process Management, Nginx, Data Structures, Asynchronous I/O, Connection Pooling
Written by Architect

Professional architect sharing high‑quality architecture insights. Topics include high‑availability, high‑performance, high‑stability architectures, big data, machine learning, Java, system and distributed architecture, AI, and practical large‑scale architecture case studies. Open to ideas‑driven architects who enjoy sharing and learning.
