Why Cgroups v2 Simplifies Container Resource Management
This article explains the background of Linux cgroups, outlines the key differences between v1 and v2—including the unified hierarchy, controller handling, and new control files—and provides practical commands and examples for using cgroups v2 in container environments.
Background
Earlier articles introduced Cgroups v1, which provides the foundation for container virtualization. Its multiple hierarchies and per-hierarchy controller management, however, become complex in practice, especially when limiting I/O for multi-tenant workloads such as those using CephFS in Kubernetes.
Because Cgroups v1 allowed a process to belong to several hierarchies at once, configurations grew confusing and controllers could not cooperate effectively. This led the kernel toward a single unified hierarchy, introduced experimentally in kernel 3.16 and made official as Cgroups v2 in Linux 4.5.
Cgroups v2 Changes
Cgroups v2 replaces the multi-hierarchy approach with a unified hierarchy in which all controllers are mounted under a single tree. The two versions can coexist on the same system, but a given controller cannot be mounted in both versions simultaneously. Key changes:
All controllers are mounted under one unified hierarchy.
Processes can only be attached to the root ("/") or leaf nodes of the cgroup tree.
Available controllers are specified via cgroup.controllers and enabled ones via cgroup.subtree_control.
Legacy files such as the v1 tasks file and cgroup.clone_children in the cpuset controller are removed.
Empty‑cgroup notifications are improved using the cgroup.events file.
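Before relying on these features, it helps to confirm which cgroup version a host is actually running. A minimal check, assuming the conventional mount point /sys/fs/cgroup (adjust for your distribution):

```shell
# Print the filesystem type of the default cgroup mount point.
# "cgroup2fs" means the unified v2 hierarchy is mounted there;
# "tmpfs" usually indicates a v1 or hybrid layout with per-controller mounts.
stat -fc %T /sys/fs/cgroup
```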
Unified Hierarchy
The unified hierarchy eliminates the need for multiple hierarchies, simplifying controller management. You can mount Cgroups v2 with the following command, which automatically mounts all available controllers.
mount -t cgroup2 none $MOUNT_POINT
Controllers
Cgroups v2 supports the following controllers (available since the indicated kernel versions):
io (since Linux 4.5)
memory (since Linux 4.5)
pids (since Linux 4.5)
perf_event (since Linux 4.11)
rdma (since Linux 4.11)
cpu (since Linux 4.15)
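Which of these controllers a given kernel actually offers can be read from cgroup.controllers at the root of the mounted hierarchy. A sketch, assuming the mount point /sys/fs/cgroup:

```shell
# List the controllers the kernel makes available in the v2 hierarchy.
# Typical output on a recent kernel: "cpu io memory pids ..." (the set varies).
cat /sys/fs/cgroup/cgroup.controllers
```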
Subtree Control
Each cgroup contains two important files:
cgroup.controllers – a read‑only file listing all controllers available to that cgroup.
cgroup.subtree_control – a read‑write file listing which of those controllers are enabled for the subtree; it must be a subset of cgroup.controllers.
To enable or disable controllers, write space‑separated entries prefixed with "+" (enable) or "-" (disable). Example:
echo '+pids -memory' > x/y/cgroup.subtree_control
"No internal processes" Rule
Unlike v1, Cgroups v2 forbids a cgroup from both containing processes and distributing controllers to its children: once a controller is enabled in a cgroup's cgroup.subtree_control, processes can no longer be attached to that cgroup itself, only to its leaves (the root cgroup is exempt from this rule).
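The rule can be observed directly. A sketch that requires root and a v2 hierarchy mounted at /sys/fs/cgroup; the group names parent and child are hypothetical:

```shell
cd /sys/fs/cgroup
mkdir -p parent/child

# Delegating a controller to parent's children turns parent into an
# internal node of the tree.
echo '+pids' > parent/cgroup.subtree_control

# Attaching a process to the leaf works...
sleep 300 &
echo $! > parent/child/cgroup.procs

# ...but attaching one to parent itself is now rejected by the kernel
# (the write fails with EBUSY):
echo $$ > parent/cgroup.procs
```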
cgroup.events File
The cgroup.events file replaces the v1 release_agent and notify_on_release mechanisms. It is a read‑only file where each line contains a key‑value pair separated by a space. Currently it contains only the key populated, where 0 means the cgroup is empty and 1 means the cgroup or one of its descendants contains processes.
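Reading the file is enough for polling-style checks; the kernel also generates a file-modified event on it, so it can be watched with poll(2) or inotify instead of a release agent. A sketch for extracting the populated flag; the cgroup path is a hypothetical example:

```shell
# cgroup.events holds "key value" pairs, one per line.
events=/sys/fs/cgroup/parent/child/cgroup.events

# Prints 1 while the cgroup (or any descendant) has member processes,
# 0 once it is empty.
awk '$1 == "populated" { print $2 }' "$events"
```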
cgroup.stat File
Every cgroup in the v2 hierarchy includes a read‑only cgroup.stat file with key‑value pairs. It currently provides:
nr_descendants – the number of live sub‑cgroups.
nr_dying_descendants – the number of sub‑cgroups that have been deleted but not yet fully destroyed; their resources are still being released by the kernel.
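A quick way to see both counters; the path is a hypothetical example, and the values in the comment are illustrative:

```shell
cat /sys/fs/cgroup/parent/cgroup.stat
# Example output:
#   nr_descendants 1
#   nr_dying_descendants 0
```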
Descendant Limits
Cgroups v2 also offers two files to control the number of descendant cgroups:
cgroup.max.depth (since Linux 4.14) – defines the maximum nesting depth of sub‑cgroups; 0 forbids creating any sub‑cgroup, and max removes the limit.
cgroup.max.descendants (since Linux 4.14) – sets the maximum number of active sub‑cgroups; max means unlimited.
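Both limits are enforced when a new sub-cgroup is created. A sketch requiring root; the group names are hypothetical:

```shell
cd /sys/fs/cgroup
mkdir limited

echo 2 > limited/cgroup.max.descendants  # allow at most 2 live sub-cgroups
echo 1 > limited/cgroup.max.depth        # allow only one level of nesting

mkdir limited/a limited/b   # succeeds: 2 descendants, depth 1
mkdir limited/c             # fails with EAGAIN: descendant limit reached
mkdir limited/a/deep        # fails with EAGAIN: depth limit reached
```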
(Figure: illustration of the Cgroups v2 structure)
(Figure: diagram of the process attachment rule)
360 Zhihui Cloud Developer
