Cgroups Guide
Cgroups Overview
For a comprehensive description of Linux Control Groups (cgroups) see the cgroups documentation at kernel.org. Detailed knowledge of cgroups is not required to use cgroups in Slurm, but a basic understanding of the following features of cgroups is helpful:
- Cgroup - a container for a set of processes subject to common controls or monitoring, implemented as a directory and a set of files (state objects) in the cgroup virtual filesystem.
- Subsystem - a module, typically a resource controller, that applies a set of parameters to the cgroups in a hierarchy.
- Hierarchy - a set of cgroups organized in a tree structure, with one or more associated subsystems.
- State Objects - pseudofiles that represent the state of a cgroup or apply controls to a cgroup:
- tasks - identifies the processes (PIDs) in the cgroup.
- additional state objects specific to each subsystem.
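As an illustrative sketch of this model (assuming a cgroup v1 mount at /sys/fs/cgroup and a hypothetical cgroup named "example" in the cpuset hierarchy), state objects can be read and written with ordinary file operations:
# Create a cgroup by creating a directory in the cpuset hierarchy
mkdir /sys/fs/cgroup/cpuset/example
# List its state objects (tasks, cpuset.cpus, cpuset.mems, ...)
ls /sys/fs/cgroup/cpuset/example
# Apply controls by writing to state objects
echo 0-3 > /sys/fs/cgroup/cpuset/example/cpuset.cpus
echo 0 > /sys/fs/cgroup/cpuset/example/cpuset.mems
# Place the current shell in the cgroup by writing its PID to tasks
echo $$ > /sys/fs/cgroup/cpuset/example/tasks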
General Usage Notes
- There can be a serious performance problem with memory cgroups on conventional multi-socket, multi-core nodes in kernels prior to 2.6.38 due to contention between processors for a spinlock. This problem seems to have been completely fixed in the 2.6.38 kernel.
- Debian and derivatives (e.g. Ubuntu) usually exclude the memory and memsw (swap) cgroups by default. To include them, add the following parameters to the kernel command line:
cgroup_enable=memory swapaccount=1
These parameters can usually be placed in /etc/default/grub inside the GRUB_CMDLINE_LINUX variable, as shown in the sketch following this list. A command such as update-grub must be run after updating the file.
- Linux allows you to use the JoinControllers parameter (a systemd setting in system.conf) to have multiple controllers mounted in a single hierarchy; however, Slurm does not work correctly with this configuration. Please make sure your system.conf does not use JoinControllers.
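For example, on a Debian-based system the edit might look like the following (a sketch; any existing contents of the variable should be preserved):
# /etc/default/grub
GRUB_CMDLINE_LINUX="cgroup_enable=memory swapaccount=1"
Run update-grub and reboot for the new kernel command line to take effect.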
Use of Cgroups in Slurm
Slurm provides cgroup versions of a number of plugins.
- proctrack (process tracking)
- task (task management)
- jobacct_gather (job accounting statistics)
The cgroup plugins can provide a number of benefits over the other, more standard plugins, as described below.
Slurm also uses cgroups for resource specialization.
Slurm Cgroups Configuration Overview
There are several sets of configuration options for Slurm cgroups:
- slurm.conf provides options to enable the cgroup plugins. Each plugin may be enabled or disabled independently of the others.
- cgroup.conf provides general options that are common to all cgroup plugins, plus additional options that apply only to specific plugins.
- System-level resource specialization is enabled using node configuration parameters.
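A minimal sketch combining the first two files might look like the following (the cgroup.conf lines show the common general options; the values are illustrative, not recommendations):
# slurm.conf: enable the cgroup versions of the plugins
ProctrackType=proctrack/cgroup
TaskPlugin=task/cgroup
JobAcctGatherType=jobacct_gather/cgroup
# cgroup.conf: general options common to all cgroup plugins
CgroupAutomount=yes
CgroupMountpoint=/sys/fs/cgroup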
Currently Available Cgroup Plugins
proctrack/cgroup plugin
The proctrack/cgroup plugin is an alternative to other proctrack plugins such as proctrack/linux for process tracking and suspend/resume capability. proctrack/cgroup uses the freezer subsystem, which is more reliable for tracking and control than the mechanisms used by proctrack/linux.
To enable this plugin, configure the following option in slurm.conf:
ProctrackType=proctrack/cgroup
There are no specific options for this plugin in cgroup.conf, but the general options apply. See the cgroup.conf man page for details.
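As a sketch of the mechanism (using the illustrative user, job and step IDs from the example later in this document), suspend and resume are implemented by writing to the freezer.state state object of a step's cgroup:
# State of the freezer cgroup for step 0 of job 123 (user 100)
cat /sys/fs/cgroup/freezer/slurm/uid_100/job_123/step_0/freezer.state
THAWED
# "scontrol suspend 123" causes FROZEN to be written to this file,
# stopping every process in the cgroup; resume writes THAWED.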
task/cgroup plugin
The task/cgroup plugin is an alternative to other task plugins such as the task/affinity plugin for task management. task/cgroup provides the following features:
- The ability to confine jobs and steps to their allocated cpuset.
- The ability to bind tasks to sockets, cores and threads within their step's allocated cpuset on a node.
- The ability to distribute allocated cpus to tasks for binding, in either block or cyclic order.
- The ability to confine jobs and steps to specific memory resources.
- The ability to confine jobs to their allocated set of generic resources (gres devices).
The task/cgroup plugin uses the cpuset, memory and devices subsystems.
To enable this plugin, configure the following option in slurm.conf:
TaskPlugin=task/cgroup
There are many specific options for this plugin in cgroup.conf. The general options also apply. See the cgroup.conf man page for details.
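A sketch of a cgroup.conf enabling the main task/cgroup confinement features (illustrative values; consult the man page for the full option list and defaults):
# Confine jobs and steps to their allocated cpuset
ConstrainCores=yes
# Confine jobs and steps to their allocated real memory
ConstrainRAMSpace=yes
# Also constrain swap space
ConstrainSwapSpace=yes
# Confine jobs to their allocated gres devices
ConstrainDevices=yes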
jobacct_gather/cgroup plugin
The jobacct_gather/cgroup plugin is an alternative to the jobacct_gather/linux plugin for the collection of accounting statistics for jobs, steps and tasks. jobacct_gather/cgroup uses the cpuacct, memory and blkio subsystems. Note: The cpu and memory statistics collected by this plugin do not represent the same resources as the cpu and memory statistics collected by the jobacct_gather/linux plugin (which are sourced from the /proc filesystem). While the cgroup plugin was originally thought to be faster, in practice it has proven to be slower than the jobacct_gather/linux plugin.
To enable this plugin, configure the following option in slurm.conf:
JobAcctGatherType=jobacct_gather/cgroup
There are no specific options for this plugin in cgroup.conf, but the general options apply. See the cgroup.conf man page for details.
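As a sketch of where these statistics come from (paths use the illustrative IDs from the example later in this document), the plugin reads per-step state objects in the cpuacct and memory hierarchies rather than per-process files under /proc:
# CPU time accumulated by all tasks in step 0 of job 123 (user 100)
cat /sys/fs/cgroup/cpuacct/slurm/uid_100/job_123/step_0/cpuacct.stat
# Memory statistics for the same step
cat /sys/fs/cgroup/memory/slurm/uid_100/job_123/step_0/memory.stat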
Use of Cgroups for Resource Specialization
Resource Specialization may be used to reserve a subset of cores on each compute node for exclusive use by the Slurm compute node daemons (slurmd, slurmstepd). It may also be used to apply a real memory limit to the daemons. The daemons are confined to the reserved cores using a special system cgroup in the cpuset hierarchy. The memory limit is enforced using a system cgroup in the memory hierarchy. System-level resource specialization is enabled with special node configuration parameters in slurm.conf; see the core specialization documentation (core_spec.html) for details.
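A minimal slurm.conf sketch (the node name, core count and limits are illustrative):
# Reserve 2 cores and 2048 MB of real memory on each compute node
# for the Slurm daemons
NodeName=node[01-16] CPUs=32 RealMemory=64000 CoreSpecCount=2 MemSpecLimit=2048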
Organization of Slurm Cgroups
Slurm cgroups are organized as follows. A base directory (mount point) is created at /sys/fs/cgroup, or as configured by the CgroupMountpoint option in cgroup.conf. All cgroup hierarchies are created below this base directory. A separate hierarchy is created for each cgroup subsystem in use. The name of the root cgroup in each hierarchy is the subsystem name. A cgroup named slurm is created below the root cgroup in each hierarchy. Below each slurm cgroup, cgroups for Slurm users, jobs, steps and tasks are created dynamically as needed. The names of these cgroups consist of a prefix identifying the Slurm entity (user, job, step or task), followed by the relevant numeric id. The following example shows the path of the task cgroup in the cpuset hierarchy for task 2 of step 0 of job 123 for user 100, using the default base directory (/sys/fs/cgroup):
/sys/fs/cgroup/cpuset/slurm/uid_100/job_123/step_0/task_2
If resource specialization is configured, a special system cgroup is created below the slurm cgroup in the cpuset and memory hierarchies:
/sys/fs/cgroup/cpuset/slurm/system
/sys/fs/cgroup/memory/slurm/system
Note that all these structures apply to a specific compute node. Jobs that use more than one node will have a cgroup structure on each node.
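To inspect this structure on a node, the cgroups in a hierarchy can simply be listed as directories (a sketch; the output depends on the jobs currently running on the node):
# List the Slurm cgroups currently present in the cpuset hierarchy
find /sys/fs/cgroup/cpuset/slurm -type d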
Last modified 16 June 2020