Slurm

Some of the CAC's Private Clusters are managed with OpenHPC, which includes the Slurm Workload Manager (Slurm for short). Slurm (Simple Linux Utility for Resource Management) is a group of utilities used for managing workloads on compute clusters.

This page is intended to give users an overview of Slurm. Some of the information on this page has been adapted from the Cornell Virtual Workshop topics on the Stampede2 Environment and Advanced Slurm. For a more in-depth tutorial, please review these topics directly.

Overview

Some clusters use Slurm as the batch queuing system and the scheduling mechanism. This means that jobs are submitted to Slurm from a login node and Slurm handles scheduling these jobs on nodes as resources become available. Users submit jobs to the batch component, which is responsible for maintaining one or more queues (also known as "partitions"). These jobs include information about themselves as well as a set of resource requests. Resource requests include anything from the number of CPUs or nodes to specific node requirements (e.g. only use nodes with > 2GB RAM). A separate component, called the scheduler, is responsible for figuring out when and where these jobs can be run on the cluster. The scheduler needs to take into account the priority of the job, any reservations that may exist, when currently running jobs are likely to end, and so on. Once the scheduling decision is made, the batch system handles starting your job at the appropriate time and place. Slurm provides both of these components, so you don't have to think of them as separate processes; you just need to know how to submit jobs to the batch queue(s).

Note: Refer to the documentation for your cluster to determine what queues/partitions are available.

Running Jobs

Most of the commands below accept many options. For full details, see the Slurm Docs.
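
You can also get help directly on the command line. As a brief sketch (standard Slurm usage; run these from a login node):

$ sinfo --help     # short summary of sinfo's options
$ man sbatch       # full manual page for sbatch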

Display Info

Common commands used to display information are listed below; a short example follows the list.

  • sinfo displays information about nodes and partitions/queues. Use -l for more detailed information.
  • scontrol show nodes views the state of the nodes.
  • scontrol show partition views the state of the partition/queue.
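
As a quick sketch, you might check the cluster state before submitting a job (here development is used as an example partition name, matching the submission example later on this page; substitute a partition that exists on your cluster):

$ sinfo -l                               # detailed listing of partitions and nodes
$ scontrol show nodes                    # state of every node
$ scontrol show partition development    # details for one partition/queue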

Job Control

Here are some common job control commands; a short example workflow follows the list.

  • sbatch testjob.sh submits a job where testjob.sh is the script you want to run. Also see the Job Scripts section and the sbatch documentation.
  • srun -p <partition> --pty /bin/bash starts an interactive job. Also see the srun documentation.
  • squeue -u my_userid shows the state of jobs for user my_userid. Also see the squeue documentation.
  • scontrol show job <job id> views the state of a job. Also see the scontrol documentation.
  • scancel <job id> cancels a job. Also see the scancel documentation.
  • squeue with no arguments retrieves summary information on all jobs scheduled.
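
Putting these together, a typical submit-and-monitor session might look like the following (the job id 12345 and the userid my_userid are placeholders):

$ sbatch testjob.sh           # submit the job script
$ squeue -u my_userid         # check the state of your jobs
$ scontrol show job 12345     # detailed state of one job
$ scancel 12345               # cancel the job if it is no longer needed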

Once the job has completed, the stdout and stderr streams are written to output files in your $HOME directory, named with the job id. To verify the job ran successfully, examine these output files.
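
With Slurm's default naming, stdout and stderr are typically combined into a single file called slurm-<jobid>.out (this can be changed with sbatch's -o and -e options). For example, for a hypothetical job id of 12345:

$ ls ~/slurm-12345.out     # default output file for job 12345
$ cat ~/slurm-12345.out    # inspect the job's stdout/stderr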

Required Arguments

The following table shows the directives that must be provided with each job. Note that the maximum number of tasks/processes depends on the system you are running on, and some systems may have additional required arguments.

Meaning                      Flag    Value                                           Example
Job Walltime                 -t      hh:mm:ss                                        -t 00:05:00 (5 minutes)
Number of tasks/processes    -n      1 ... (nodes * max tasks/processes per node)    -n 16 (16 tasks/processes)
Submission Queue             -p      Queue Name                                      -p normal (the "normal" queue/partition)

Example Command-line Job Submission

All of the required options can be specified on the command line with sbatch. For example, say you had the following script, "simple_cmd.sh", to run:

#!/bin/bash
#Ensures that the node can sleep

#print date
date
#verify that sleep 5 works
time sleep 5

To run this from the command line, you could issue the following (where development is an available queue on the system):

$ sbatch -p development -t 00:01:00 -n 1 simple_cmd.sh
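
If the submission is accepted, sbatch replies with the job id that was assigned, which you can then use with squeue, scontrol, and scancel. For example (the job id shown is only illustrative):

Submitted batch job 12345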

There is also an easier way, as demonstrated in Job Scripts.

Job Scripts

Simple Job Script

For the example above, the submission options can instead be put into the batch script itself. This makes it easy to copy and paste them to new scripts and ensures that a job is submitted the same way every time. We'll modify the previous script so that it includes all of the required directives.

All that is required is to place the command line options in the batch script and prepend them with #SBATCH. They appear as comments to the shell, but Slurm parses them for you and applies them. Here is the end result:

#!/bin/bash
#Ensures that the node can sleep

#SBATCH -t 00:05:00
#SBATCH -n 1
#SBATCH -p development

#print date
date
#verify that sleep 5 works
time sleep 5
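
With the required directives embedded in the script, none of the command-line flags are needed anymore; submitting the job becomes simply:

$ sbatch simple_cmd.sh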

References