
Nginx Shared Memory Allocation and Slab Management Mechanism

This article explains how Nginx obtains and releases shared memory on Unix and Windows, describes the unified ngx_shm_alloc/ngx_shm_free interfaces, and details the slab allocator’s page‑level and small‑block management, including allocation strategies, bitmap handling, and fragmentation issues.

Xueersi Online School Tech Team

The source code for Nginx shared memory is located in src/core/ngx_slab.c and src/os/unix/ngx_shmem.c.

Acquisition and Release

On Unix systems there are three ways to obtain shared memory: (1) anonymous mmap, (2) the special file /dev/zero, and (3) shmget. The Windows version uses CreateFileMapping for compatibility.

These three methods are described in detail in the "15.9. Shared Memory" chapter of Advanced Programming in the UNIX Environment. Nginx selects the appropriate implementation at compile time using the macros NGX_HAVE_MAP_ANON, NGX_HAVE_MAP_DEVZERO, and NGX_HAVE_SYSVSHM, providing a uniform interface for all platforms.

The process of creating shared memory is essentially a request to the operating system, and releasing it returns the memory to the system. Nginx first obtains a raw memory region ("bare memory") and then uses its slab mechanism to divide the region into pages for process allocation.

The raw allocation and release functions are ngx_int_t ngx_shm_alloc(ngx_shm_t *shm) and void ngx_shm_free(ngx_shm_t *shm), each with a platform-specific implementation.

Implementation 1: anonymous mmap

Implementation 2: /dev/zero

Implementation 3: shmget

Implementation 4: Windows
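For Windows, the stripped listing used a pagefile-backed file mapping. A rough sketch (illustrative only; win_shm_alloc and its error handling are simplified stand-ins, not the nginx code):

```c
#include <windows.h>

/* sketch: a pagefile-backed mapping, optionally named so that other
   processes can open the same region */
static void *win_shm_alloc(const char *name, size_t size)
{
    HANDLE  h;
    void   *addr;

    h = CreateFileMappingA(INVALID_HANDLE_VALUE, NULL, PAGE_READWRITE,
                           (DWORD) ((unsigned long long) size >> 32),
                           (DWORD) (size & 0xffffffff), name);
    if (h == NULL) {
        return NULL;
    }

    addr = MapViewOfFile(h, FILE_MAP_WRITE, 0, 0, 0);
    /* nginx keeps the handle in ngx_shm_t; release is UnmapViewOfFile()
       followed by CloseHandle() */
    return addr;
}
```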

Nginx provides a platform‑independent interface for allocating and freeing anonymous shared memory; the allocated region is zero‑initialized.

Management

Not all shared memory needs paging, but to use it efficiently Nginx employs the slab mechanism, allowing all worker processes to request and release memory via slab interfaces. Some shared memory (e.g., the statistics set up in ngx_event_module_init) is used without slab management.

The slab pool structure, shown below, describes the state of the shared memory.

The initialization code produces the layout illustrated in Figure 4-1. Because the start pointer is aligned up to a page boundary and part of the region is consumed by the pool header and page descriptors, of the k pages the region could hold only k - m remain available as data pages.

Nginx divides memory into ten size classes (Figure 4‑2). The size field is the user’s request; alloc is the actual system allocation.

For requests of 2048 bytes or larger, Nginx allocates whole pages: the number of pages is (size - 1) / 4096 + 1, or equivalently (size >> ngx_pagesize_shift) + ((size % ngx_pagesize) ? 1 : 0). The allocation function is ngx_slab_alloc_pages (illustrated below).

When a page is allocated, the address of the usable memory is computed as pool->start + (page - pool->pages) * 4096, or equivalently pool->start + ((page - pool->pages) << ngx_pagesize_shift). The pointer difference ((char *) page - (char *) pool->pages) / sizeof(ngx_slab_page_t) yields the page index.

Because Nginx never coalesces adjacent free pages, fragmentation can occur: even if the total number of free pages is sufficient, a request may fail if no single free block is large enough, revealing a limitation of the shared‑memory slab allocator.

For small allocations (8 B to 2047 B), Nginx uses a slot array where each slot corresponds to a fixed block size in powers of two (slot[0] = 8 B, slot[1] = 16 B, and so on). Each slot is a doubly-linked circular list of pages that contain free blocks. A bitmap at the beginning of a page records which blocks are occupied; the slab member of the page structure stores the shift value (block size = 1 << shift) and, for larger slots, the high bits of slab record allocation status.

The allocation algorithm distinguishes four cases:

Requests ≥ 2048 B: allocate whole pages.

Requests < ngx_slab_exact_size (ngx_pagesize / (8 * sizeof(uintptr_t)), i.e. 128 B with 4 KB pages on a 32-bit platform): search the slot's page list for a free block and mark it in the page's bitmap, or allocate a new page and initialise its bitmap.

Requests = ngx_slab_exact_size: the slab field of the page itself serves as the bitmap; on a 32-bit platform a value of 0xFFFFFFFF means the page is fully allocated.

Requests > ngx_slab_exact_size but < page size: use the high bits of slab to record block usage and the low 4 bits to store the shift value.

Each page’s prev pointer also encodes the page type in its two lowest bits, so page structures must be 4‑byte aligned.

Freeing memory mirrors allocation: when a block is freed from a fully‑used page, the page is re‑inserted into the appropriate slot list, and the bitmap or slab field is updated accordingly.

In summary, Nginx uses shared memory in two ways: raw memory that the application manages directly, and memory managed by the slab allocator. The slab allocator divides memory into pages and further into fixed‑size blocks, tracks allocation with bitmaps or the slab field, and requires 4‑byte alignment for page structures. Fragmentation can occur because free pages are not merged, and small allocations are served from slot‑based page lists with precise bitmap bookkeeping.

Tags: memory management, backend development, nginx, shared memory, slab allocator
Written by

Xueersi Online School Tech Team

The Xueersi Online School Tech Team, dedicated to innovating and promoting internet education technology.
