
Cgroups: Definition, How They Work, Benefits, Model, Uses in Containers & Features

Author:
Dhanush VM
Reviewed By:
Sanket Modi
Updated on:
November 25, 2025


    Cgroups in Linux are the foundation for kernel-level resource management, and this guide walks through how they work in practice. You’ll see what cgroups are, the cgroup model and hierarchy, and how cgroup v2 improves on cgroup v1 with new features and a unified design. The article then explains how cgroups control CPU, memory, and I/O, how Docker, Kubernetes, and systemd rely on them, and how to monitor, tune, troubleshoot high CPU usage, and use key documentation, tools, and libraries effectively.

    What are Cgroups?

    Cgroups, or Control Groups, are a Linux kernel feature that allows the operating system to limit, isolate, and monitor the system resources used by processes. By grouping processes together, Cgroups enforce controlled access to CPU time, memory, disk I/O, and network bandwidth, ensuring that no single application can consume more than its allowed share and impact overall system performance. This resource governance is foundational to modern container platforms like Docker and Kubernetes, which rely on Cgroups to provide predictable, isolated environments for running applications.

    What are Cgroups in Linux?

    Cgroups (Control Groups) in Linux are a kernel-level mechanism that lets you allocate, limit, and monitor system resources for specific processes or groups of processes. They give the operating system fine-grained control over how much CPU, memory, I/O, and other resources each process can use, ensuring that applications run in isolated, predictable environments.  

    Widely used by container technologies like Docker, Kubernetes, and systemd, Cgroups prevent any single workload from overwhelming the system and help maintain stable, efficient performance. For a simple cgroup Linux example, you typically mount the cgroup filesystem, create a new group, and then add PIDs into it, which is the basic pattern most cgroup Linux tutorials follow.
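    As a minimal sketch of that pattern, assuming a cgroup v2 system with the unified hierarchy already mounted at /sys/fs/cgroup (the default on most modern distributions) and root privileges; the group name demo and the PID are placeholders:

        # Create a new child cgroup under the unified v2 hierarchy
        mkdir /sys/fs/cgroup/demo

        # Move an existing process into the group by writing its PID
        # (1234 is a placeholder for a real PID on your system)
        echo 1234 > /sys/fs/cgroup/demo/cgroup.procs

        # Confirm membership by listing the PIDs currently in the group
        cat /sys/fs/cgroup/demo/cgroup.procs

    On older cgroup v1 systems you would first mount a controller hierarchy (for example, mount -t cgroup -o cpu none /sys/fs/cgroup/cpu) and then create the group and write PIDs into it in the same way.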

    What are the key Benefits of using Cgroups?

    Using cgroups in the Linux kernel gives you precise, per-service resource control instead of managing resources only at the whole-host level.

    Here are the key benefits you gain from using cgroups in Linux:

    • Fine-grained isolation – cgroup hierarchies let you partition CPU, memory, and I/O per service or container, so noisy neighbors cannot starve critical workloads.  
    • Predictable performance – you use cgroups to cap or reserve resources, keeping latency-sensitive applications stable under heavy load.  
    • Multi-tenant safety – multiple teams, services, or containers can share a host with strict limits, reducing the blast radius of runaway processes.  
    • Accurate accounting – per-cgroup usage metrics make it clear which service is consuming which resources.  
    • Policy enforcement – platform and SRE teams use cgroups to enforce standard limits and guardrails directly at the kernel level.  

    What is the Cgroup model and how is it structured?

    In Linux, the cgroup model is a kernel feature that arranges a collection of processes into hierarchical groups, so resource controllers can manage system resources like CPU time and memory per group instead of per host.

    Here’s how the cgroup model is structured in Linux:

    • Hierarchical groups and root cgroup – All cgroups form a tree with a single root cgroup at the top and child cgroups below it.  
    • Membership and tasks in a cgroup – Each process is a member of exactly one cgroup in a hierarchy, tracked by its PID in a cgroup file.  
    • Cgroup virtual filesystem layout – Each cgroup directory in the cgroup virtual filesystem represents one cgroup node in the tree.  
    • Cgroup subsystems and controllers – Cgroup subsystems (controllers) attach to cgroups and enforce resource limits and accounting; typical controllers include the CPU controller, memory controller, blkio controller, net_cls controller, freezer controller, and pids controller.  
    • Two versions of cgroups – Cgroup v1 supports multiple hierarchies, while cgroups v2 uses a single unified hierarchy for all controllers.  
    • Use in containers – Container runtimes like Docker and Kubernetes create separate cgroups, so each container has its own resource-controlled group (see the sketch after this list).
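    To make that structure concrete, here is a small sketch of how the tree looks through the cgroup virtual filesystem on a typical cgroup v2 host; the exact directories (system.slice, individual services, and so on) depend on your distribution and on what is running:

        # The mount point of the unified hierarchy is the root cgroup
        ls /sys/fs/cgroup

        # Controllers available at the root of the tree
        cat /sys/fs/cgroup/cgroup.controllers      # e.g. cpu io memory pids

        # Each subdirectory is a child cgroup; nesting directories builds the hierarchy
        ls -d /sys/fs/cgroup/system.slice/*.service

        # A process's position in the tree is recorded per PID
        cat /proc/self/cgroup                      # e.g. 0::/user.slice/user-1000.slice/...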

    Put Cgroups to work with lean, secure containers.

    Download Free Container Images

    What is Cgroup v2 and Cgroup v1 vs v2?

    In Linux, cgroup v2 is the second-generation feature of the Linux kernel that replaces the fragmented design of cgroup v1 with a single, unified hierarchy for resource management, making behavior more consistent and easier to reason about.

    The following table compares how cgroup v1 and cgroup v2 behave and are structured:

    | Feature / Behavior | Cgroup v1 | Cgroup v2 |
    | --- | --- | --- |
    | Hierarchy model | Multiple separate hierarchies | Single unified hierarchy |
    | Controller attachment | Controllers mount to different trees | All controllers attach to one tree |
    | Process placement | A process can exist in different cgroups across controllers | A process lives in exactly one cgroup node per hierarchy |
    | Semantics | Inconsistent across controllers | Fully unified and standardized |
    | Delegation | Hard to delegate safely; multiple trees cause conflicts | Built-in delegation model for safe subtree ownership |
    | Workload management | Harder to reason about locations of limits | One hierarchy makes resource rules easier to trace |
    | Adoption | Still used for backward compatibility | Default on most modern Linux distros |
    | Container support | Works but causes fragmentation | Designed for containers; predictable behavior |

    What new features were added to Cgroup v2?

    In Linux, Cgroup v2 introduces a unified, more predictable resource management model that replaces the fragmented behavior of v1; it is documented in the Linux kernel documentation and has been broadly usable since around Linux 4.5, making it the preferred resource management model for new systems.

    The following points cover the key new features added in cgroup v2:

    • Unified hierarchy by default – Instead of multiple trees, v2 uses a single hierarchy with a single default cgroup at the root, so every node is one of the cgroups inside that unified tree, with a clear parent–child structure (except for the root cgroup).  
    • Stronger delegation model – v2 makes it safer to hand off subtrees to other components or tenants; each delegated subtree behaves like a clean, self-contained cgroup tree with well-defined ownership rules.  
    • Consistent controller semantics – v2 standardizes how controllers behave across the hierarchy, so the effect of limits and distribution is predictable regardless of where a cgroup sits in the tree.  
    • Improved integration with namespaces – While still distinct concepts, cgroup namespaces and cgroups are designed to work together more cleanly in v2, making container isolation more robust.  
    • Cleaner filesystem interface – The v2 cgroup virtual filesystem reduces legacy complexity; the unified hierarchy is mounted once (as the cgroup2 filesystem type, typically at /sys/fs/cgroup), and when that filesystem is unmounted, the unified cgroup interface for the tree goes away in a predictable manner. A short sketch of the unified interface follows this list.
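    As an illustrative sketch of the unified hierarchy and its delegation model, assuming cgroup v2 mounted at /sys/fs/cgroup and root privileges (the group and user names are placeholders):

        # Controllers the root cgroup can offer to its children
        cat /sys/fs/cgroup/cgroup.controllers            # e.g. cpu io memory pids

        # Enable the cpu and memory controllers for direct children of the root
        echo "+cpu +memory" > /sys/fs/cgroup/cgroup.subtree_control

        # Create a subtree and delegate it to another user
        mkdir /sys/fs/cgroup/tenant-a
        chown -R tenant:tenant /sys/fs/cgroup/tenant-a

        # The delegated owner can now create child cgroups and move its own
        # processes inside, but cannot exceed limits set on the parent cgroup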

    How do Cgroups control CPU, Memory, and I/O usage?

    In Linux, cgroups control CPU, memory, and I/O usage by attaching dedicated kernel controllers to groups of processes and enforcing hard limits, priorities, and throttling rules at the cgroup level instead of per process or per host.

    Resource control in cgroups works as follows (a small sketch follows the list):

    • CPU control – The CPU controller assigns CPU shares, quotas, and periods to each cgroup, so the scheduler decides how much CPU time each group gets; if a cgroup hits its quota, its tasks are throttled until the next period.  
    • Memory control – The memory controller enforces hard and soft memory limits, tracks page usage, and can trigger out-of-memory (OOM) handling within a specific cgroup, rather than across the whole system, when that cgroup exceeds its configured limit.  
    • I/O (block I/O) control – The I/O controllers cap or weight disk read/write bandwidth and IOPS per cgroup, ensuring that heavy workloads cannot saturate disks and starve latency-sensitive services.  
    • Per-group accounting and isolation – Each controller exposes per-cgroup statistics files (for example, CPU usage, memory current usage, I/O bytes) so you can both see and enforce how much of each resource a group is allowed to consume.
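    A minimal sketch of these controls on a cgroup v2 system; the group name, PID, and device major:minor numbers are placeholders, and the relevant controllers are assumed to be enabled in the parent's cgroup.subtree_control:

        # Create a group and attach a process to it
        mkdir /sys/fs/cgroup/batch
        echo 4321 > /sys/fs/cgroup/batch/cgroup.procs

        # CPU: at most 50 ms of CPU time per 100 ms period (roughly half a core);
        # tasks that hit the quota are throttled until the next period
        echo "50000 100000" > /sys/fs/cgroup/batch/cpu.max

        # Memory: hard-limit the group to 512 MiB; OOM handling stays inside the group
        echo "512M" > /sys/fs/cgroup/batch/memory.max

        # I/O: cap reads to about 10 MB/s on the block device with major:minor 259:0
        echo "259:0 rbps=10485760" > /sys/fs/cgroup/batch/io.max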

    How do Docker and Kubernetes use Cgroups for containers?

    In Linux, Docker and Kubernetes use cgroups to give each container or Pod its own resource-controlled group for CPU, memory, and I/O.

    The table below shows how Docker and Kubernetes use cgroups (a quick command-line example follows it):

    | Aspect | Docker and cgroups | Kubernetes and cgroups |
    | --- | --- | --- |
    | Basic isolation | Creates a separate cgroup for each container. | Creates a pod-level cgroup for all containers in a Pod. |
    | Resource limits | --cpus and --memory map to cgroup CPU and memory limits. | Pod requests and limits map to Pod-level cgroup quotas. |
    | Enforcement workflow | Runtime writes container PIDs and limits into cgroup files. | Kubelet/runtime creates Pod cgroups and assigns container PIDs. |
    | CPU control | Uses CPU shares/quotas so one container cannot monopolize CPU. | Enforces Pod CPU guarantees and caps per scheduling policy. |
    | Memory control | Sets memory limits, so OOM is contained to that container. | Applies Pod memory limits to protect other Pods on the node. |
    | I/O control | Can cap container disk bandwidth and IOPS via cgroups. | Uses the same I/O controls at the Pod cgroup subtree. |
    | Multi-tenant safety | Gives each service its own cgroup subtree on a host. | Structures node cgroup hierarchies per Pod or namespace. |
    | Integration with namespaces | Combines namespaces and cgroups for isolation plus limits. | Applies the same model at Pod and node level for clusters. |
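    As a quick illustration of the Docker column above, the flags below (values are placeholders) are translated by the container runtime into cgroup controller settings for the container's group; in Kubernetes, the requests and limits in a Pod spec end up in the Pod-level cgroup in the same way:

        # Run a container whose limits are enforced through cgroups:
        #   --cpus        -> cgroup CPU quota/period (at most 1.5 cores here)
        #   --memory      -> cgroup memory limit (OOM stays inside this container)
        #   --pids-limit  -> cgroup pids controller (at most 200 processes)
        docker run -d --name web --cpus="1.5" --memory="512m" --pids-limit=200 nginx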

    How do Docker containers use Cgroups for resource limits and isolation?

    Docker containers use cgroups in the Linux kernel to give each container its own resource-controlled group, so limits and accounting for CPU, memory, and I/O apply to the container as a unit rather than to the whole host or individual processes.

    Key ways Docker applies cgroups for resource control (a short verification sketch follows the list):

    • Per-container cgroup – Docker creates a separate cgroup (or subtree) for each container and adds all container processes to it.  
    • CPU limits and shares – Flags like **--cpus**, **--cpu-shares**, and **--cpu-quota** are translated into cgroup CPU controller settings, which throttle or prioritize the container’s CPU usage.  
    • Memory limits and OOM handling – Options like **-m / --memory** configure cgroup memory limits; if a container exceeds its limit, the kernel’s OOM killer targets that container’s processes, not the entire node.  
    • I/O throttling – Docker can configure block I/O cgroup controllers to cap disk bandwidth and IOPS, preventing a single container from saturating storage.  
    • Strong isolation on shared hosts – Because each container lives in its own cgroup, noisy or misbehaving workloads are isolated, giving predictable performance and safer multi-tenant operation.
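    To verify those limits from the host, here is a hedged sketch; the exact path depends on the distribution, cgroup version, and cgroup driver, and the layout shown assumes the systemd driver on cgroup v2 with a container named web:

        # Find the container's full ID
        CID=$(docker inspect --format '{{.Id}}' web)

        # With the systemd cgroup driver on v2, the container's group typically
        # lives under system.slice as docker-<id>.scope
        CGDIR=/sys/fs/cgroup/system.slice/docker-${CID}.scope

        # Read the limits Docker wrote for this container
        cat "$CGDIR/memory.max"     # memory limit in bytes, e.g. 536870912
        cat "$CGDIR/cpu.max"        # quota and period, e.g. "150000 100000"
        cat "$CGDIR/pids.max"       # process limit, e.g. 200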

    For reliable, resource-isolated deployments, use trusted builds

    Pull Trusted Container Images for Free

    How do Cgroups and Systemd work together on Linux?

    On Linux, systemd uses cgroups as its core process and resource management mechanism: every service, slice, and scope that systemd manages is placed into its own cgroup, so the Linux kernel can track and enforce CPU, memory, and I/O limits per unit instead of per host.

    The points below show how cgroups and systemd work together on Linux (a short unit-file sketch follows the list):

    • Unified process model – Systemd treats each unit (for example, a service, slice, or scope) as a node in the cgroup hierarchy, so all processes launched by that unit live in the same control group.  
    • Automatic cgroup creation – When you start a systemd service, systemd automatically creates a cgroup for it under the appropriate slice (for example, system.slice, user.slice) and keeps the process tree strictly contained there.  
    • Resource limit mapping – Settings like CPUQuota=, MemoryMax=, and IOReadBandwidthMax= in a systemd unit file are translated into cgroup controller parameters, which the kernel enforces for that unit’s cgroup.  
    • Hierarchical control with slices – Slices (system.slice, user.slice, machine.slice) form higher-level cgroups that group related services, allowing admins to cap or prioritize whole classes of workloads, not just individual daemons.  
    • Runtime introspection and tuning – Tools like systemd-cgls, systemctl status, and systemd-cgtop read cgroup data to show live resource usage, and you can adjust limits at runtime with systemctl set-property, which updates the underlying cgroup settings.  
    • Clean startup and shutdown – Because systemd tracks every process in a service’s cgroup, it can reliably stop, restart, or isolate services (and their children) without leaving orphaned processes running outside its control.  
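    A minimal sketch of this mapping, using a hypothetical myapp.service unit; the directives are standard systemd resource-control settings that systemd translates into the unit's cgroup parameters:

        # /etc/systemd/system/myapp.service (resource-control excerpt)
        [Service]
        ExecStart=/usr/local/bin/myapp
        # cgroup cpu.max: at most half a core
        CPUQuota=50%
        # cgroup memory.max: hard memory limit
        MemoryMax=1G
        # cgroup pids.max: cap the number of tasks
        TasksMax=500

    After reloading and restarting the unit, the limits can be inspected and adjusted live:

        systemctl daemon-reload && systemctl restart myapp.service
        systemd-cgls /system.slice/myapp.service
        systemctl set-property myapp.service CPUQuota=75%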

    How do you monitor, measure, and tune Cgroups?

    In Linux, you monitor, measure, and tune cgroups by reading per-cgroup stats from the cgroup filesystem, using live monitoring tools, and then iteratively adjusting CPU, memory, and I/O limits based on real workload behavior.

    Here’s how to monitor and tune cgroups:

    • Read cgroup stats – Check files like cpu.stat, memory.current, and io.stat in each cgroup directory to see actual usage (see the sketch after this list).
    • Use monitoring tools – Run cgtop, systemd-cgtop, or systemd-cgls to watch per-cgroup CPU, memory, and I/O in real time.
    • Test under load – Stress services with realistic traffic while tracking per-cgroup metrics to understand true resource needs.
    • Tune CPU settings – Adjust CPU shares, quotas, and periods, then recheck latency and contention between cgroups.
    • Tune memory limits – Set memory limits with a small safety margin above observed peaks to avoid unnecessary cgroup OOM kills.
    • Tune I/O control – Apply I/O weights or throttles, so critical workloads retain bandwidth while background jobs are slowed.
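    A small monitoring sketch along those lines; the service path under /sys/fs/cgroup is a placeholder, so substitute the cgroup of your own service or container:

        # Live, top-like view of CPU, memory, and I/O per cgroup
        systemd-cgtop

        # Show the full cgroup tree with the processes in each group
        systemd-cgls

        # Raw per-cgroup counters for one service (cgroup v2 file names)
        CG=/sys/fs/cgroup/system.slice/myapp.service
        cat "$CG/cpu.stat"          # usage_usec, nr_throttled, throttled_usec
        cat "$CG/memory.current"    # current memory usage in bytes
        cat "$CG/io.stat"           # per-device rbytes, wbytes, rios, wios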

    How do you investigate high CPU issues related to Cgroups?

    To investigate high CPU issues related to cgroups in Linux, you first identify which cgroup is consuming CPU, then inspect its processes, limits, and throttling stats, and finally adjust configuration or fix the offending workload based on what you find.

    The points below outline how to troubleshoot high CPU issues using cgroups.  

    • Locate the hot cgroup – Use tools like cgtop or systemd-cgtop to see which cgroup or systemd unit is consuming the most CPU time.
    • Map cgroup to processes – Run systemd-cgls, ps with cgroup columns, or inspect the cgroup directory to list processes in that cgroup and identify the specific binaries or services causing load.
    • Check CPU stats and throttling – Read the cgroup’s cpu.stat (or equivalent files) to see usage, throttled time, and the number of throttling events; this tells you whether the issue is genuine overuse or over-throttling (see the sketch after this list).
    • Review CPU limits and shares – Inspect CPU quota, period, and shares configured for the cgroup (for example, via unit files or runtime settings) to see if limits are too low for the current workload.
    • Correlate with application behavior – Cross-check spikes in cgroup CPU usage with application logs, deployments, or traffic changes to confirm whether the cause is a code issue, bad query, or a traffic surge.
    • Adjust and validate – Temporarily change CPU limits or shares for the affected cgroup, redeploy or restart if needed, and verify via live metrics that CPU usage and latency return to acceptable levels.
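    A hedged sketch of the first few steps, assuming cgroup v2 on a systemd-managed host; the unit name myapp.service is a placeholder:

        # 1. Find the hottest cgroup, sorted by CPU usage
        systemd-cgtop --order=cpu

        # 2. Map the suspect cgroup to its processes
        systemd-cgls /system.slice/myapp.service
        cat /sys/fs/cgroup/system.slice/myapp.service/cgroup.procs

        # 3. Compare usage with throttling: many nr_throttled events and a large
        #    throttled_usec usually mean the quota is too low rather than true overuse
        grep -E 'usage_usec|nr_throttled|throttled_usec' \
            /sys/fs/cgroup/system.slice/myapp.service/cpu.stat

        # 4. If the quota is the bottleneck, raise it and re-check live metrics
        systemctl set-property myapp.service CPUQuota=200%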

    Where can you find documentation and tools for Cgroups?

    You can find cgroups documentation and tools primarily in official Linux resources, distribution guides, and user-space utilities that expose and manage the cgroup virtual filesystem and metrics.

    The points below list key documentation sources and tools for working with cgroups:

    • Linux kernel documentation – The upstream Linux kernel documentation and the **cgroups(7)** man page (and **systemd.resource-control(7)** on systemd-based systems) explain cgroup versions, controllers, and interfaces in depth.
    • Distribution manuals and guides – Vendor docs such as Red Hat Enterprise Linux and other Linux distribution guides usually provide a resource management guide section covering cgroups v1 and v2, default layouts, and examples.
    • Man pages for tools – Utilities like **cgtop**, **cgexec**, **cgcreate**, **systemd-cgtop**, and **systemd-cgls** have detailed man pages that document how to inspect and manage cgroups inside the hierarchy.
    • Cgroup filesystem introspection – The mounted cgroup virtual file system (for example, under /sys/fs/cgroup) is itself a live reference; its per-controller files and directories show available controllers, stats, and tunables.
    • Container and orchestration documentation – Docker, container, and Kubernetes orchestration docs explain how they use cgroups to control resources for containers and Pods, and how their flags or YAML fields map directly to cgroup controller settings.

    Turn Cgroup control into real savings.

    Schedule a Demo

    FAQs

    Q1. What is the Windows equivalent of cgroups?

    The closest Windows equivalent to Linux cgroups is Windows Job Objects. They allow grouping processes and applying limits for CPU, memory, and other resources similar to how cgroups manage resource control on Linux.

    Q2. What is the difference between a cgroup and a container?

    A cgroup is a kernel feature that controls and limits resources for processes.

    A container is a packaged application environment that uses cgroups (plus namespaces and other features) to isolate and manage resources.  

    Q3. Does Docker use cgroups?

    Yes, Docker uses cgroups. Docker relies on Linux cgroups to enforce and track CPU, memory, I/O, and PID limits for each container.
    Cgroups are one of the core kernel features (along with namespaces) that make container isolation and resource control possible.  

    Q4. What kind of limitations do cgroups allow?

    Cgroups allow you to limit and control how much CPU, memory, I/O, and other system resources a group of processes can use. They enforce resource boundaries so one workload cannot starve or interfere with others on the same machine.  

    Q5. How many cgroups are created for each container in Docker?

    Under cgroup v1, Docker creates one cgroup per controller for each container, so in practice a container ends up with multiple cgroups (CPU, memory, pids, blkio, and so on) that together form its full cgroup set. Under cgroup v2, the container instead gets a single cgroup in the unified hierarchy with all controllers attached to it.  

    Q6. When to use cgroups?

    Use cgroups when you need to limit, isolate, or prioritize system resources for specific processes or workloads. They are ideal for preventing resource hogging, improving stability, and enforcing fair sharing on servers, containers, or multi-tenant systems.  

    Q7. What’s the relationship between cgroups and container images?

    Cgroups don’t limit container images; they limit the running container created from an image. After you pull container images from your registry, the container runtime starts the container and assigns its processes to a dedicated cgroup so the kernel can enforce CPU, memory, and I/O limits.

    Dhanush VM
    Dhanush V M is a seasoned technology leader with over a decade of expertise spanning DevOps, performance engineering, cloud deployments, and solution architecture. As a Solution Architect at CleanStart, he leads key architectural initiatives, drives modern DevOps practices, and delivers customer-centric solutions that strengthen software supply chain security.