
Understanding Linux Memory Management: Physical & Virtual Address Mapping

This article explains how Linux organizes physical memory, introduces virtual addressing, describes the structure of pages, zones, and nodes, and details the mechanisms of large and small allocations, the virtual address layout, page-table translation, TLB caching, and swapping.

1. Physical Memory

Linux organizes physical memory into three levels: Page (the basic 4 KB unit), Zone (free lists that manage pages, such as ZONE_DMA, ZONE_NORMAL, and ZONE_HIGHMEM), and Node (on NUMA systems, the memory local to one group of CPUs; each node contains these zones). When a CPU exhausts its local node's memory, it can allocate from another node.

Physical Memory Allocation

Large allocations use the buddy system, which groups free pages into blocks of 1, 2, 4, 8 … up to 1024 pages. The allocator searches for the smallest suitable block, splits a larger block if necessary, and returns each unused half (the "buddy") to the appropriate free list.
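The search-and-split behavior can be sketched as follows. This is a hypothetical, simplified model (free lists of block start addresses keyed by order, where order k means 2**k pages), not kernel code:

```python
# Minimal sketch of buddy-system allocation (simplified model, not kernel code).
# free_lists[k] holds start addresses of free blocks spanning 2**k pages.

MAX_ORDER = 10  # blocks of 1, 2, 4, ... 1024 pages

def alloc_pages(free_lists, order):
    """Find the smallest free block of at least 2**order pages, splitting as needed."""
    for k in range(order, MAX_ORDER + 1):
        if free_lists[k]:
            block = free_lists[k].pop()
            # Split larger blocks, returning each unused buddy to its free list.
            while k > order:
                k -= 1
                buddy = block + (1 << k)   # second half becomes the free buddy
                free_lists[k].append(buddy)
            return block
    return None  # no block large enough: out of memory

# Usage: one free 1024-page block at address 0; request a single page.
free_lists = {k: [] for k in range(MAX_ORDER + 1)}
free_lists[MAX_ORDER].append(0)
page = alloc_pages(free_lists, 0)
```

Requesting one page out of a single 1024-page block splits it ten times, leaving one free buddy on each of the lower free lists.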

Small allocations use the SLUB allocator, which carves pages into caches of fixed-size objects; allocation and deallocation simply link and unlink objects on a cache's free list, so objects are reused without being cleared or reinitialized.
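The link/unlink idea can be illustrated with a toy object cache. This is a hypothetical simplification (one page, a plain list as the free list), not the real SLUB data structures:

```python
# Toy SLUB-style object cache: a page is carved into fixed-size objects once,
# and alloc/free just unlink/link entries on the free list without clearing memory.

PAGE_SIZE = 4096

class ObjectCache:
    def __init__(self, obj_size):
        # Carve one page into object offsets; each is a ready-to-use slot.
        self.free = list(range(0, PAGE_SIZE, obj_size))

    def alloc(self):
        return self.free.pop() if self.free else None  # unlink one object

    def free_obj(self, obj):
        self.free.append(obj)  # link it back; the memory is not cleared

cache = ObjectCache(64)   # 4096 / 64 = 64 objects per page
obj = cache.alloc()
cache.free_obj(obj)
```

Because a freed object goes straight back onto the list, the next allocation of the same size is just a pointer operation.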

2. Organizing Virtual Addresses

Virtual addresses form a virtual address space that is mapped to physical memory. The space is divided into user space and kernel space. On 32-bit systems, the 4 GB space is typically split 1 GB for the kernel and 3 GB for user space; on 64-bit systems (with 48-bit addressing), kernel space and user space each receive 128 TB.

The user-space layout includes ranges for Text (code), Data (initialized globals), BSS (uninitialized globals), the Heap, the Stack, and mmap regions. Each process has its own descriptor (mm_struct) recording these ranges.
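On Linux these per-process ranges can be observed directly: /proc/self/maps lists every mapped region of the current process, with named regions such as [heap] and [stack]. A small sketch that collects those names (Linux-only, since it depends on procfs):

```python
# Read the current process's memory map from procfs (Linux-only).
# Each line is "start-end perms offset dev inode path"; special regions
# like the heap and stack appear with bracketed names.
with open("/proc/self/maps") as f:
    named = [line.split()[-1] for line in f if line.rstrip().endswith("]")]
```

Running this inside any process shows at least a [stack] region, and usually [heap] plus kernel-provided mappings such as [vdso].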

Kernel space consists of a direct-mapping area (a fixed mapping of the first 896 MB of physical memory, covering ZONE_DMA and ZONE_NORMAL) and a dynamic-mapping area that can map any physical address, including ZONE_HIGHMEM. Dynamic mappings come in three kinds: dynamic (remapped after each use), permanent (mapped one-to-one until explicitly unmapped), and fixed (reserved for specific kernel functions).

3. Mapping Virtual Addresses to Physical Memory

Virtual addresses are translated to physical addresses through page tables. Each process has its own page tables, while the kernel's mappings are shared by all processes. Both virtual and physical memory are divided into 4 KB pages, and the page tables record which physical page frame backs each virtual page.

On a 32-bit system, a virtual address is split into three fields: 10 bits indexing the page directory, 10 bits indexing a page table, and 12 bits giving the offset within the page. This two-level scheme lets page tables be allocated on demand rather than as one huge contiguous array.
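The three-field split is pure bit arithmetic and can be shown directly (the example address 0x08048123 is an illustrative choice, typical of 32-bit user-space text segments):

```python
# Split a 32-bit virtual address into the three translation fields:
# 10-bit page-directory index, 10-bit page-table index, 12-bit page offset.

def split_vaddr(vaddr):
    pgd_index = (vaddr >> 22) & 0x3FF   # top 10 bits: page-directory slot
    pt_index  = (vaddr >> 12) & 0x3FF   # next 10 bits: page-table slot
    offset    = vaddr & 0xFFF           # low 12 bits: byte within the 4 KB page
    return pgd_index, pt_index, offset

pgd, pt, off = split_vaddr(0x08048123)
```

Here 0x08048123 resolves to page-directory slot 0x20, page-table slot 0x48, and byte 0x123 within the page; 2**10 directory entries times 2**10 table entries times 4 KB pages covers the full 4 GB space.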

TLB

The Translation Lookaside Buffer (TLB) is a small CPU cache of recent virtual-to-physical translations. A lookup first checks the TLB; on a miss, the page tables are walked and the resulting translation is cached for subsequent accesses.
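The hit/miss flow can be modeled with a toy translator. This is a hypothetical sketch (a dict standing in for the hardware TLB and for a flat page table), not how the CPU implements it:

```python
# Toy model of TLB lookup: a small cache of virtual-page -> physical-frame
# translations is checked first; on a miss the page table is walked and the
# result is installed in the TLB.

def translate(vaddr, tlb, page_table, page_size=4096):
    vpn, offset = divmod(vaddr, page_size)
    if vpn in tlb:                      # TLB hit: no page-table walk needed
        frame = tlb[vpn]
    else:                               # TLB miss: consult the page table...
        frame = page_table[vpn]
        tlb[vpn] = frame                # ...and cache the translation
    return frame * page_size + offset

tlb = {}
page_table = {0x10: 0x80}               # virtual page 0x10 backed by frame 0x80
paddr = translate(0x10123, tlb, page_table)
```

After the first access faults the translation into the TLB, a second access to the same page is served without touching the page table at all.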

Virtual Memory

Virtual memory uses a swap area on disk (a partition or file) to hold pages that are not currently needed in RAM. When an access touches a swapped-out page, a page fault occurs and the kernel swaps the page back into physical memory. This lets programs use more memory than is physically available, at the cost of slower performance due to disk I/O.
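The fault-and-swap cycle can be illustrated with a toy demand-paging model. This is a hypothetical sketch (two-page "RAM", a dict as the swap area, least-recently-used eviction as one illustrative policy), not the kernel's actual page-reclaim logic:

```python
# Toy demand paging: RAM holds a limited number of pages; touching a page that
# is not resident raises a "fault", evicts the least recently used page to
# swap, and brings the needed page back in.
from collections import OrderedDict

def touch(page, ram, disk, capacity=2):
    if page in ram:
        ram.move_to_end(page)            # resident: just mark recently used
        return "hit"
    # Page fault: evict the least recently used page to the swap area...
    if len(ram) >= capacity:
        victim, data = ram.popitem(last=False)
        disk[victim] = data
    # ...then swap the faulting page in (fresh pages start zero-filled here).
    ram[page] = disk.pop(page, f"page-{page}")
    return "fault"

ram, disk = OrderedDict(), {}
results = [touch(p, ram, disk) for p in (1, 2, 1, 3, 2)]
```

With only two resident pages, the access pattern 1, 2, 1, 3, 2 produces one hit and four faults, and each fault beyond capacity pushes a victim page out to the swap area.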

Summary

Both kernel and user spaces have distinct virtual address ranges that map to physical memory. User processes can only map and access user‑space memory, while kernel space is reserved for kernel operations. System calls transition execution from user to kernel space, and data is copied between the two when necessary.

Written by

Linux Cloud Computing Practice

Welcome to Linux Cloud Computing Practice. We offer high-quality articles on Linux, cloud computing, DevOps, networking and related topics. Dive in and start your Linux cloud computing journey!
