
Understanding Linux Cgroups for Container Resource Management

This article explains the fundamentals of Linux control groups (cgroups), their components and relationships, and provides step‑by‑step guidance on creating hierarchies, mounting, configuring subsystems, and applying cgroup limits to Docker and Kubernetes containers.

Qunar Tech Salon

Monitoring is essential throughout the lifecycle of any software system; as container technology has matured, understanding how Linux control groups (cgroups) manage resources has become a prerequisite for effective container monitoring.

Cgroups, short for control groups, are a Linux kernel mechanism that limits, records, and isolates the physical resources used by a group of processes, forming the foundation for Docker, LXC and other container runtimes.

The main functions of cgroups include resource limiting (e.g., memory caps that trigger OOM), prioritization through CPU time‑slice and I/O bandwidth allocation, accounting for usage statistics, and control operations such as suspending or resuming tasks.

Cgroups consist of four core concepts: task (a process or, at the kernel level, a thread), cgroup (a group of tasks bound to one or more subsystems), subsystem (a resource controller like cpu, memory, blkio), and hierarchy (a tree of cgroups that share the same set of subsystems).

Key relationship rules are: a hierarchy can attach multiple subsystems, each subsystem can belong to only one hierarchy, and every new hierarchy starts with a root cgroup containing all existing tasks; a task can belong to only one cgroup per hierarchy but may appear in different hierarchies simultaneously.

To use cgroups, you first create a hierarchy and mount it, attaching the desired subsystems with a command such as mount -t cgroup -o cpu,memory none /sys/fs/cgroup/myhier. You can later detach or re‑attach subsystems using mount -o remount, and unmount the hierarchy with umount /sys/fs/cgroup/myhier. Control groups are created with mkdir /sys/fs/cgroup/myhier/mygroup, parameters are set by writing to control files (e.g., echo 0-1 > cpuset.cpus, provided the cpuset subsystem is attached to that hierarchy), and processes are moved by echoing their PID into the group's tasks file.
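Under cgroup v1, the steps above can be sketched end to end as follows. This is a minimal sketch, run as root; the hierarchy and group names follow the article's examples, while the specific limit values (CPU quota, memory cap) are illustrative assumptions:

```shell
# Create a mount point and mount a new hierarchy with cpu and memory attached
mkdir -p /sys/fs/cgroup/myhier
mount -t cgroup -o cpu,memory none /sys/fs/cgroup/myhier

# Create a child cgroup; the kernel auto-populates its control files
mkdir /sys/fs/cgroup/myhier/mygroup

# Limit the group to ~50% of one CPU: 50ms of quota per 100ms period
echo 100000 > /sys/fs/cgroup/myhier/mygroup/cpu.cfs_period_us
echo 50000  > /sys/fs/cgroup/myhier/mygroup/cpu.cfs_quota_us

# Cap memory at 256 MiB
echo 268435456 > /sys/fs/cgroup/myhier/mygroup/memory.limit_in_bytes

# Move the current shell into the cgroup; child processes inherit membership
echo $$ > /sys/fs/cgroup/myhier/mygroup/tasks

# When no tasks remain, the hierarchy can be unmounted
# umount /sys/fs/cgroup/myhier
```

Note that on hosts running cgroup v2 (the unified hierarchy), the file names and mount options differ, so these commands apply only to v1 systems.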

The available subsystems include blkio (block I/O control), cpu, cpuacct (CPU accounting), cpuset (CPU and memory node assignment), memory (usage reporting and limits), devices (device access control), net_cls (network packet tagging), freezer (suspend/resume), and ns (namespace isolation).

Practical Docker examples show how setting --cpu-shares creates separate cgroups under /sys/fs/cgroup/cpu/docker/, and how adjusting the value changes the CPU share distribution; memory limits are enforced by writing to memory.limit_in_bytes, causing the container to be killed when the limit is exceeded.
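As a sketch of the Docker behavior described above, the following commands start two CPU-bound containers with different --cpu-shares weights and inspect the resulting cgroup files. The container names and share values are illustrative, not from the article, and the paths assume a cgroup v1 host:

```shell
# Two busy-loop containers; when both contend for CPU,
# 'high' receives roughly twice the CPU time of 'low'
docker run -d --name low  --cpu-shares 512  busybox sh -c 'while :; do :; done'
docker run -d --name high --cpu-shares 1024 busybox sh -c 'while :; do :; done'

# Each container gets its own cgroup under the docker parent
cat "/sys/fs/cgroup/cpu/docker/$(docker inspect -f '{{.Id}}' high)/cpu.shares"
# prints 1024 on a cgroup v1 host

# Memory limits work the same way: -m is written to memory.limit_in_bytes,
# and exceeding it triggers the OOM killer inside the container's cgroup
docker run -d --name capped -m 128m busybox sleep 3600
cat "/sys/fs/cgroup/memory/docker/$(docker inspect -f '{{.Id}}' capped)/memory.limit_in_bytes"
```

Because cpu.shares is a relative weight rather than a hard cap, a single container with few shares can still use an idle CPU fully; the ratio only matters under contention.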

When using Kubernetes, containers are placed under /sys/fs/cgroup/cpu/kubepods/ with cgroup configurations derived from the pod’s YAML file; inspecting the host reveals the assigned cpu.shares and memory.limit_in_bytes values.
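To make the Kubernetes mapping concrete, here is a hypothetical pod spec and the node-side inspection it implies. The pod name, image, and resource values are assumptions for illustration; the cgroup paths follow the v1 kubepods layout mentioned above and vary with the cgroup driver and the pod's QoS class:

```shell
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: cgroup-demo
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    resources:
      requests:
        cpu: "250m"        # translated to cpu.shares = 256 (250 * 1024 / 1000)
      limits:
        memory: "128Mi"    # translated to memory.limit_in_bytes = 134217728
EOF

# On the node running the pod, the derived values appear under kubepods
# (a pod with requests != limits lands in the 'burstable' QoS subtree)
grep -r . /sys/fs/cgroup/cpu/kubepods/burstable/pod*/*/cpu.shares 2>/dev/null
```

The kubelet computes cpu.shares from the CPU request in millicores (milliCPU × 1024 / 1000), which is why a 250m request surfaces as 256 on the host.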

For further reading, consult the Red Hat Enterprise Linux resource‑management guide, the official kernel cgroup documentation, and articles on Docker’s kernel knowledge of cgroups.

Republication Notice

This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact admin@besthub.dev and we will review it promptly.

Tags: operations, container, cgroups, resource-management
Written by

Qunar Tech Salon

Qunar Tech Salon is a learning and exchange platform for Qunar engineers and industry peers. We share cutting-edge technology trends and topics, providing a free platform for mid-to-senior technical professionals to exchange and learn.
