High Throughput Computing Administration Guide
This document contains Slurm administrator information specifically for high throughput computing, namely the execution of many short jobs. Getting optimal performance for high throughput computing requires some tuning, and this document should help you get off to a good start. A working knowledge of Slurm should be considered a prerequisite for this material.
Performance Results
Slurm has been validated to execute 500 simple batch jobs per second on a sustained basis, with short bursts of activity at a much higher level. Actual performance depends upon the jobs to be executed plus the hardware and configuration used.
System configuration
Several system configuration parameters may require modification to support a large number of open files and TCP connections with large bursts of messages. Values can be changed at runtime by writing directly into the corresponding /proc files (e.g. "echo 32832 > /proc/sys/fs/file-max"); to preserve the changes after a reboot, add the commands to /etc/rc.d/rc.local or add equivalent entries to /etc/sysctl.conf.
- /proc/sys/fs/file-max: The maximum number of concurrently open files. We recommend a limit of at least 32,832.
- /proc/sys/net/ipv4/tcp_max_syn_backlog: The maximum number of SYN requests to keep in memory for connections that have not yet received the third packet of the three-way handshake. The default value is 1024 for systems with more than 128 MB of memory, and 128 for low-memory machines. If the server suffers from overload, try increasing this number.
- /proc/sys/net/ipv4/tcp_syncookies: Used to send out syncookies to hosts when the kernel's SYN backlog queue for a specific socket overflows. The default value is 0, which disables this functionality. Set the value to 1.
- /proc/sys/net/ipv4/tcp_synack_retries: How many times to retransmit the SYN,ACK reply to a SYN request. In other words, this tells the system how many times to try to establish a passive TCP connection that was started by another host. This variable takes an integer value, but should under no circumstances be larger than 255. Each retransmission takes approximately 30 to 40 seconds. The default value is 5, which results in a timeout of passive TCP connections of approximately 180 seconds and is generally satisfactory.
- /proc/sys/net/core/somaxconn: Limit of socket listen() backlog, known in userspace as SOMAXCONN. Defaults to 128. The value should be raised substantially to support bursts of requests. For example, to support a burst of 1024 requests, set somaxconn to 1024.
- /proc/sys/net/ipv4/ip_local_port_range: Identifies the ephemeral ports available, which are used for many Slurm communications. The value may be raised to support a high volume of communications. For example, write the value "32768 65535" into the ip_local_port_range file in order to make that range of ports available.
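The settings above can be preserved across reboots by placing equivalent entries in /etc/sysctl.conf and loading them with "sysctl -p". A minimal sketch using the example values from this list (the tcp_max_syn_backlog value is only illustrative, since the guidance above is simply to increase it under overload):
- fs.file-max = 32832
- net.ipv4.tcp_max_syn_backlog = 2048
- net.ipv4.tcp_syncookies = 1
- net.core.somaxconn = 1024
- net.ipv4.ip_local_port_range = 32768 65535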
The transmit queue length (txqueuelen) may also need to be modified using the ifconfig command. A value of 4096 has been found to work well for one site with a very large cluster (e.g. "ifconfig <interface> txqueuelen 4096").
Munge configuration
By default the Munge daemon runs with two threads, but a higher thread count can improve its throughput. We suggest starting the Munge daemon with ten threads for high throughput support (e.g. "munged --num-threads 10").
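How the extra threads are configured depends on how munged is started. As a sketch, on a systemd-managed system a drop-in override created with "systemctl edit munge" could replace the start command (the unit name and daemon path are assumptions; check your distribution's munge service file):
- [Service]
- ExecStart=
- ExecStart=/usr/sbin/munged --num-threads 10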
User limits
The ulimit values in effect for the slurmctld daemon should be set quite high for memory size, open file count and stack size.
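As a sketch, on a systemd-managed system these limits could be raised with a drop-in for the slurmctld service (e.g. "systemctl edit slurmctld"); the specific values below are illustrative rather than Slurm recommendations:
- [Service]
- LimitNOFILE=65536
- LimitMEMLOCK=infinity
- LimitSTACK=infinity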
Slurm Configuration
Several Slurm configuration parameters should be adjusted to reflect the needs of high throughput computing. The changes described below will not be possible in all environments, but these are the configuration options that you may want to consider for higher throughput; a consolidated slurm.conf sketch follows this list.
- AccountingStorageType: Disable the storing of accounting records by using the accounting_storage/none plugin. Turning accounting off provides minimal improvement in performance. If using SlurmDBD, increased speedup can be achieved by setting the CommitDelay option in slurmdbd.conf.
- JobAcctGatherType: Disabling the collection of job accounting information will improve job throughput. Disable collection of accounting by using the jobacct_gather/none plugin.
- JobCompType: Disabling recording of job completion information will improve job throughput. Disable recording of job completion information by using the jobcomp/none plugin.
- MaxJobCount: Controls how many jobs may be in the slurmctld daemon records at any point in time (pending, running, suspended, or recently completed and temporarily retained). The default value is 10,000.
- MessageTimeout: Controls how long to wait for a response to messages. The default value is 10 seconds. While the slurmctld daemon is highly threaded, its responsiveness is load dependent. This value might need to be increased somewhat.
- MinJobAge: Controls how soon the record of a completed job can be purged from slurmctld memory and thus no longer be visible with the squeue command. The record of jobs run will be preserved in accounting records and logs. The default value is 300 seconds. The value should be reduced to a few seconds if possible. Relying on accounting records for information about older jobs, rather than retaining them in the memory of the slurmctld daemon, can increase the job throughput rate.
- PriorityType: The priority/builtin plugin is considerably faster than other options, but schedules jobs only on a First In, First Out (FIFO) basis.
- SchedulerParameters: Many scheduling parameters are available.
- Setting option batch_sched_delay will control how long the scheduling of batch jobs can be delayed. This affects only batch jobs. For example, if many jobs are submitted each second, the overhead of trying to schedule each one will adversely impact the rate at which jobs can be submitted. The default value is 3 seconds.
- Setting option defer will avoid attempting to schedule each job individually at job submit time, but defer it until a later time when scheduling multiple jobs simultaneously may be possible. This option may improve system responsiveness when large numbers of jobs (many hundreds) are submitted at the same time, but it will delay the initiation time of individual jobs.
- sched_min_interval is yet another configuration parameter to control how frequently the scheduling logic runs. The logic can still be triggered on each job submit, job termination, or other state change which could permit a new job to be started. However, that triggering does not cause the scheduling logic to be started immediately; it is run at most once per configured sched_min_interval. For example, if sched_min_interval=2000000 (microseconds) and 100 jobs are submitted within a 2 second time window, then the scheduling logic will be executed one time rather than the 100 times it would run with sched_min_interval set to 0 (no delay).
- Besides controlling how frequently the scheduling logic is executed, the default_queue_depth configuration parameter controls how many jobs are considered to be started in each scheduler iteration. The default value of default_queue_depth is 100 (jobs), which should be fine in most cases.
- The sched/backfill plugin has relatively high overhead if used with large numbers of jobs. Configuring bf_max_job_test to a modest size (say 100 jobs or less) and bf_interval to 30 seconds or more will limit the overhead of backfill scheduling (NOTE: the default values are fine for both of these parameters). Other backfill options available for tuning backfill scheduling include bf_max_job_user, bf_resolution and bf_window. See the slurm.conf man page for details.
- A set of scheduling parameters currently used for running hundreds of jobs per second on a sustained basis on one cluster follows. Note that every environment is different and this set of parameters will not work well in every case, but it may serve as a good starting point.
- assoc_limit_continue
- batch_sched_delay=20
- bf_continue
- bf_interval=300
- bf_min_age_reserve=10800
- bf_resolution=600
- bf_yield_interval=1000000
- partition_job_depth=500
- sched_max_job_start=200
- sched_min_interval=2000000
- SchedulerType: If most jobs are short lived then use of the sched/builtin plugin is recommended. This manages a queue of jobs on a First-In-First-Out (FIFO) basis and eliminates logic used to sort the queue by priority.
- SlurmctldPort: It is desirable to configure the slurmctld daemon to accept incoming messages on more than one port in order to avoid having incoming messages discarded by the operating system due to exceeding the SOMAXCONN limit described above. Using between two and ten ports is suggested when large numbers of simultaneous requests are to be supported.
- PrologSlurmctld/EpilogSlurmctld: Neither of these is recommended for a high throughput environment. When they are enabled a separate slurmctld thread has to be created for every job start (or task for a job array). Current architecture requires acquisition of a job write lock in every thread, which is a costly operation that severely limits scheduler throughput.
- SlurmctldDebug: More detailed logging will decrease system throughput. Set to 2 (log errors only) or 3 (general information logging). Each increment in the logging level will increase the number of messages by a factor of about 3.
- SlurmdDebug: More detailed logging will decrease system throughput. Set to 2 (log errors only) or 3 (general information logging). Each increment in the logging level will increase the number of messages by a factor of about 3.
- SlurmdLogFile: Writing to local storage is recommended.
- TaskPlugin: Avoid using task/cgroup in combination with ConstrainRAMSpace; it is slower than other alternatives. On the same note, task/affinity does not appear to add any measurable overhead. Using task/affinity for task affinity is advised in any case.
- Other: Configure logging, accounting and other overhead to a minimum appropriate for your environment.
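Pulled together, a slurm.conf excerpt reflecting the suggestions above might look like the following sketch. It is only a starting point under the assumptions discussed in each item, not a universal recommendation; the MinJobAge value and the port range are illustrative, and the SchedulerParameters line repeats the example set shown earlier:
- AccountingStorageType=accounting_storage/none
- JobAcctGatherType=jobacct_gather/none
- JobCompType=jobcomp/none
- MinJobAge=10
- PriorityType=priority/builtin
- SchedulerType=sched/builtin
- SlurmctldPort=6817-6818
- SchedulerParameters=assoc_limit_continue,batch_sched_delay=20,bf_continue,bf_interval=300,bf_min_age_reserve=10800,bf_resolution=600,bf_yield_interval=1000000,partition_job_depth=500,sched_max_job_start=200,sched_min_interval=2000000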
SlurmDBD Configuration
Turning accounting off provides a minimal improvement in performance. If using SlurmDBD, increased speedup can be achieved by setting the CommitDelay option in slurmdbd.conf.
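For example, in slurmdbd.conf (the value shown is an assumption; see the slurmdbd.conf man page for the accepted range):
- CommitDelay=1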
You might also consider setting the 'Purge*' options in your slurmdbd.conf to clear out old data. A typical configuration would look like this...
- PurgeEventAfter=12months
- PurgeJobAfter=12months
- PurgeResvAfter=2months
- PurgeStepAfter=2months
- PurgeSuspendAfter=1month
- PurgeTXNAfter=12months
- PurgeUsageAfter=12months
Last modified 5 December 2018