Unlock Faster Linux Performance with Huge Pages: Why and How
This article explains Linux huge pages, their performance benefits, implementation details, configuration steps, and the impact on memory reporting, including guidance on using and disabling Transparent Huge Pages for optimal system tuning.
Huge Pages
Page‑based memory management splits virtual and physical memory into equal‑sized pages and translates them via the MMU and page tables.
In a 32‑bit system with a 4 GB virtual address space and 4 KB pages, there can be up to one million pages, creating sizable page‑table structures and potentially low TLB hit rates; larger pages reduce table entries but may waste space.
Thus, page‑size selection is a trade‑off between time and space, and Linux defaults to 4 KB pages.
$ getconf PAGE_SIZE
4096
High-performance scenarios often use larger pages, typically 2 MB or 1 GB, known as "Huge Pages".
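The scale of the problem is easy to check with shell arithmetic; this sketch uses the common x86-64 page sizes (4 KB, 2 MB, 1 GB):

```shell
# Pages needed to map a full 4 GB address space at each page size.
ADDR_SPACE=$((4 * 1024 * 1024 * 1024))            # 4 GB in bytes
for psize in 4096 $((2*1024*1024)) $((1024*1024*1024)); do
  echo "page size ${psize} B -> $((ADDR_SPACE / psize)) pages"
done
```

With 4 KB pages the page tables must describe about a million pages; with 1 GB pages, only four.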
Huge pages improve performance mainly by:
Reducing the number of page‑table entries, speeding up lookups.
Increasing the TLB hit rate.
The TLB is a small hardware cache, typically holding 16–128 entries; with 1 GB huge pages, even that small TLB can map 16–128 GB of memory.
Even in high-performance contexts, larger pages are not always better: they can waste memory and increase allocation latency, so optimal settings require thorough testing.
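As a back-of-the-envelope illustration, TLB reach is the number of entries times the page size; the sketch below assumes a hypothetical 64-entry TLB:

```shell
# TLB reach = entries * page size (printed in KB).
ENTRIES=64                                        # hypothetical TLB size
for psize in 4096 $((2*1024*1024)) $((1024*1024*1024)); do
  echo "page size ${psize} B -> reach $((ENTRIES * psize / 1024)) KB"
done
```

The same 64 entries cover 256 KB with 4 KB pages but 64 GB with 1 GB pages, which is why huge pages raise the TLB hit rate so dramatically.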
Linux Huge Pages
Linux has supported huge pages since kernel 2.6 for high-performance workloads. Common sizes are 2 MB and 1 GB; 2 MB suits GB-scale memory, while 1 GB suits TB-scale memory.
Implementation Principle
Linux implements Huge pages with two concepts: hugetlb and hugetlbfs.
hugetlb records entries in the TLB that point to Huge pages.
hugetlbfs is a special (memory) filesystem.
hugetlbfs lets applications flexibly set page sizes without changing global kernel configuration.
The kernel allocates huge pages via hugetlb entries and exposes them through the hugetlbfs filesystem.
Regular Page allocation flow : When an application requests memory, it accesses the page table to obtain a physical address.
Huge Page allocation flow : After configuring Huge pages, applications still use the normal page table, but an additional Hugepage attribute is added. Declaring this attribute lets the system allocate a huge‑page entry.
Regular and huge pages share a page table; the kernel supports huge pages with minimal code.
Benefits: for a 2 MB allocation, 4 KB pages require 512 pages, 512 TLB entries, 512 page-table entries, and up to 512 TLB misses and 512 page faults. A single 2 MB huge page needs only one entry, one TLB miss, and one page fault.
Huge pages also reduce system‑management and CPU overhead for page lookups, improving overall performance.
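The arithmetic behind this benefit can be reproduced directly:

```shell
# One 2 MB allocation backed by 4 KB pages vs. one 2 MB huge page.
SMALL=4096                 # 4 KB base page
HUGE=$((2 * 1024 * 1024))  # 2 MB huge page
echo "4 KB pages needed: $((HUGE / SMALL))"   # 512 entries, misses, faults
echo "2 MB pages needed: $((HUGE / HUGE))"    # a single entry suffices
```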
Configuration and Usage
Huge pages need contiguous memory, so allocating them at boot is the most reliable method. Three key kernel parameters control them:
hugepages : Number of permanent huge pages allocated at boot (default 0).
hugepagesz : Size of each huge page (2 MB or 1 GB; default 2 MB).
default_hugepagesz : Default huge‑page size at boot.
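Before setting these parameters, it can help to see which huge-page sizes the running kernel actually supports; a sketch via sysfs (the directory is absent on kernels built without hugetlbfs support):

```shell
# Each supported size appears as a directory, e.g.
# hugepages-2048kB and hugepages-1048576kB on x86-64.
if [ -d /sys/kernel/mm/hugepages ]; then
  ls /sys/kernel/mm/hugepages/
else
  echo "kernel built without hugetlbfs support"
fi
```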
Huge pages are locked in memory and never swapped out, so swap is irrelevant to them; hosts tuned for huge pages often disable swap entirely.
Step 1 : Verify huge pages are enabled.
$ grep -i HugePages_Total /proc/meminfo
HugePages_Total: 0
Step 2: Check whether hugetlbfs is mounted.
$ mount | grep hugetlbfs
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime)
Step 3: Mount it manually if not present.
$ mkdir /mnt/huge_1GB
$ mount -t hugetlbfs -o pagesize=1G nodev /mnt/huge_1GB
$ vim /etc/fstab
nodev /mnt/huge_1GB hugetlbfs pagesize=1GB 0 0
Step 4: Edit the GRUB configuration to allocate huge pages at boot, e.g., ten 1 GB pages.
$ vim /etc/default/grub
# append to GRUB_CMDLINE_LINUX:
default_hugepagesz=1G hugepagesz=1G hugepages=10
$ grub2-mkconfig -o /boot/grub2/grub.cfg
Step 5: Reboot and view detailed huge-page info.
$ cat /proc/meminfo | grep -i Huge
AnonHugePages: 1433600 kB # anonymous huge pages
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 1048576 kB
Step 6: If HugePages_Total remains 0, set the desired number at runtime.
$ sysctl -w vm.nr_hugepages=10
# or
$ echo 'vm.nr_hugepages = 10' >> /etc/sysctl.conf
$ sysctl -p
Note: Huge pages are typically reserved for specific applications (e.g., the Oracle SGA). Other processes cannot use this memory, so size the pool according to physical memory and workload to avoid waste or OOM kills.
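One way to sanity-check the size of the reserved pool is to multiply HugePages_Total by Hugepagesize from /proc/meminfo; this sketch runs the awk program against sample values rather than a live system:

```shell
# Total memory reserved for huge pages = HugePages_Total * Hugepagesize.
# The here-document mimics /proc/meminfo after reserving ten 1 GB pages;
# on a real host, replace the here-document with /proc/meminfo.
awk '/^HugePages_Total/ {t=$2} /^Hugepagesize/ {s=$2} END {print t*s " kB reserved"}' <<'EOF'
HugePages_Total:      10
Hugepagesize:    1048576 kB
EOF
```

Ten 1 GB pages come out to 10485760 kB, i.e., 10 GB permanently carved out of physical memory.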
To see which processes use huge pages:
grep -e AnonHugePages /proc/*/smaps | awk '{if($2>4) print $0}' | awk -F "/" '{print $0; system("ps -fp " $3)}'
Impact on Memory Reporting
Memory allocated for huge pages is counted as used even if not actively accessed, causing commands like free to show high usage while top or ps report low %MEM.
Example on a 32 GB system with 12 GB huge pages:
$ free -g
total used free shared buff/cache available
Mem: 31 16 14 0 0 14
Swap: 3 0 3
Top output sorted by memory usage and per-process figures from ps -eo uid,pid,rss,trs,pmem,stat,cmd both show low %MEM, because the reserved huge-page memory is not charged to any individual process.
Blindly increasing huge‑page allocation can lead to memory waste and starvation for normal processes.
Transparent Huge Pages (THP)
Transparent Huge Pages (THP) were introduced in RHEL 6 to simplify huge‑page usage. THP automatically creates, manages, and uses huge pages, allowing any process to request or release them.
Unlike pre-allocated huge pages, THP allocates dynamically, which is more convenient for developers; however, its background compaction and defragmentation can introduce latency spikes, so many database vendors recommend disabling THP for such workloads.
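To gauge how much THP a given process actually uses, the AnonHugePages fields of its smaps file can be summed; this sketch feeds the awk program sample data (on a live system you would read /proc/&lt;pid&gt;/smaps instead):

```shell
# Sum anonymous huge-page (THP) usage across a process's mappings.
# Sample values are illustrative; each line corresponds to one mapping.
awk '/^AnonHugePages/ {sum += $2} END {print sum " kB of THP"}' <<'EOF'
AnonHugePages:      2048 kB
AnonHugePages:         0 kB
AnonHugePages:      4096 kB
EOF
```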
Manually disable THP :
$ echo never > /sys/kernel/mm/transparent_hugepage/enabled
$ echo never > /sys/kernel/mm/transparent_hugepage/defrag
$ cat /sys/kernel/mm/transparent_hugepage/enabled
always madvise [never]
# [always] = enabled, [never] = disabled, [madvise] = only for regions requested via madvise()
Permanently disable THP:
$ vim /etc/default/grub
# append to GRUB_CMDLINE_LINUX:
transparent_hugepage=never