Why use a Cluster?
Overview
Teaching: 10 min
Exercises: 0 min
Questions
Why would I be interested in High Performance Computing (HPC)?
What can I expect to learn from this course?
Objectives
Describe what an HPC system is
Identify how an HPC system could benefit you.
Frequently, research problems that use computing can outgrow the capabilities of the desktop or laptop computer where they started:
- A statistics student wants to cross-validate a model. This involves running the model 1000 times – but each run takes an hour. Running on their laptop will take over a month! The final results are calculated after all 1000 models have run, but due to limited power of their laptop, the student typically only runs one model at a time (in serial). Since each model is independent, it's theoretically possible to run them all at once (in parallel).
- A genomics researcher has been using small sets of sequence data, but soon will be receiving a dataset that is 10 times larger. It's already challenging to open the datasets on their computer – analyzing the larger dataset will probably crash it. In order to solve this research problem, a computer with more memory would be required to analyze the much larger future dataset.
- An engineer is using a fluid dynamics package that has an option to run in parallel. In going from 2D to 3D simulations, the simulation time has more than tripled. They have tried the parallel option using the 8 cores on their desktop, but it slows down their computer so much that they can't use it for anything else and the run time is still very long. If they had access to a computer (or multiple computers) with more cores, they could run their simulation more quickly.
In all these cases, access to more (and larger) computers is needed. Those computers should be usable at the same time, solving many researchers’ problems in parallel.
Jargon Busting Presentation
Open the HPC Jargon Buster
in a new tab. To present the content, press C
to open a clone in a
separate window, then press P
to toggle presentation mode.
Key Points
High Performance Computing (HPC) typically involves connecting to very large computing systems elsewhere in the world.
These other systems can be used to do work that would either be impossible or much slower on smaller systems.
HPC resources are shared by multiple users.
The standard method of interacting with such systems is via a command line interface.
Connecting to a remote HPC system
Overview
Teaching: 10 min
Exercises: 5 min
Questions
How do I log in to a remote HPC system?
Objectives
Configure secure access to a remote HPC system.
Connect to a remote HPC system.
Secure Connections
The first step in using a cluster is to establish a connection from our laptop to the cluster. When we are sitting at a computer, we have come to expect a visual display with icons, widgets, and perhaps some windows or applications: a graphical user interface, or GUI. Since computer clusters are remote resources that we connect to over slow or intermittent interfaces (WiFi and VPNs especially), it is more practical to use a command-line interface, or CLI, to send commands as plain-text. If a command returns output, it is printed as plain text as well. The commands we run today will not open a window to show graphical results.
If you have already taken The Carpentries’ courses on the UNIX Shell or Version Control, you have used the CLI on your local machine extensively. The only leap to be made here is to open a CLI on a remote machine, while taking some precautions so that other folks on the network can’t see (or change) the commands you’re running or the results the remote machine sends back. We will use the Secure SHell protocol (or SSH) to open an encrypted network connection between two machines, allowing you to send & receive text and data without having to worry about prying eyes.
SSH clients are usually command-line tools, where you provide the remote
machine address as the only required argument. If your username on the remote
system differs from what you use locally, you must provide that as well. If
your SSH client has a graphical front-end, such as PuTTY or MobaXterm, you will
set these arguments before clicking “connect.” From the terminal, you’ll write
something like ssh userName@hostname
, where the argument is just like an
email address: the “@” symbol is used to separate the personal ID from the
address of the remote machine.
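For example, from your local terminal, the connection to Borah would look something like the following sketch (substitute your own username; see the next sections and the Logging in Documentation for the cluster's actual SSH-key setup):
[you@laptop:~]$ ssh yourUsername@borah-login.boisestate.edu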
Connect to the Cluster
After getting your account, please navigate to ondemand.boisestate.edu, choose your university as the identity provider, and log in with your university credentials.
From here you can access the shell, interactive apps like Jupyter and RStudio, and transfer files.
Logging in with a Terminal Application
The Lesson Setup provides instructions for installing a shell application with SSH. If you have not done so already, please open that shell application, which provides a Unix-like command line interface to your system.
To log in to Borah via ssh, we'll need to set up ssh keys according to the following documentation: Logging in Documentation
You may have noticed that the prompt changed when you logged into the remote
system using the terminal. This change is important because it can help you
distinguish on which system the commands you type will be run when you pass
them into the terminal. This change is also a small complication that we will
need to navigate throughout the workshop. Exactly what is displayed as the
prompt (which conventionally ends in $
) in the terminal when it is connected
to the local system and the remote system will typically be different for
every user. We still need to indicate which system we are entering commands
on, so we will adopt the following convention:
- [you@laptop:~]$ when the command is to be entered on a terminal connected to your local computer
- [yourUsername@borah-login ~]$ when the command is to be entered on a terminal connected to the remote system
- $ when it really doesn't matter which system the terminal is connected to.
Looking Around Your Remote Home
Very often, many users are tempted to think of a high-performance computing
installation as one giant, magical machine. Sometimes, people will assume that
the computer they’ve logged onto is the entire computing cluster. So what’s
really happening? What computer have we logged on to? The name of the current
computer we are logged onto can be checked with the hostname
command. (You
may also notice that the current hostname is also part of our prompt!)
[yourUsername@borah-login ~]$ hostname
borah-login
So, we’re definitely on the remote machine. Next, let’s find out where we are
by running pwd
to print the working directory.
[yourUsername@borah-login ~]$ pwd
/bsuhome/yourUsername
Great, we know where we are! Let’s see what’s in our current directory:
[yourUsername@borah-login ~]$ ls
scratch
The system administrators have configured your home directory with a link (a shortcut) to a scratch space reserved for you. You can also include hidden files in your directory listing:
[yourUsername@borah-login ~]$ ls -a
. .bashrc scratch
.. .ssh
In the first column, .
is a reference to the current directory and ..
a
reference to its parent (/bsuhome
). You may or may not see
the other files, or files like them: .bashrc
is a shell configuration file,
which you can edit with your preferences; and .ssh
is a directory storing SSH
keys and a record of authorized connections.
Key Points
An HPC system is a set of networked machines.
HPC systems typically provide login nodes and a set of worker nodes.
The resources found on independent (worker) nodes can vary in volume and type (amount of RAM, processor architecture, availability of network mounted filesystems, etc.).
Files saved on one node are available on all nodes.
Exploring Remote Resources
Overview
Teaching: 20 min
Exercises: 10 min
Questions
How does my local computer compare to the remote systems?
How does the login node compare to the compute nodes?
Are all compute nodes alike?
Objectives
Survey system resources using nproc, free, and the queuing system
Compare & contrast resources on the local machine, login node, and worker nodes
Learn about the various filesystems on the cluster using df
Find out who else is logged in
Assess the number of idle and occupied nodes
Look Around the Remote System
Take a look at your home directory on the remote system:
[yourUsername@borah-login ~]$ ls
What’s different between your machine and the remote?
Open a second terminal window on your local computer and run the ls command (without logging in to Borah). What differences do you see?
Solution
You would likely see something more like this:
[you@laptop:~]$ ls
Applications Documents Library Music Public Desktop Downloads Movies Pictures
The remote computer’s home directory shares almost nothing in common with the local computer: they are completely separate systems!
Most high-performance computing systems run the Linux operating system, which
is built around the UNIX Filesystem Hierarchy Standard. Instead of
having a separate root for each hard drive or storage medium, all files and
devices are anchored to the “root” directory, which is /
:
[yourUsername@borah-login ~]$ ls /
bin etc lib64 proc sbin sys var
boot bsuhome mnt root scratch tmp working
dev lib opt run srv usr
The “bsuhome” directory is the one where we generally want to keep all of our files. Other folders on a UNIX OS contain system files and change as you install new software or upgrade your OS.
Using HPC filesystems
On HPC systems, you have a number of places where you can store your files. These differ in both the amount of space allocated and whether or not they are backed up.
- Home – a network filesystem, data stored here is available throughout the HPC system, and is backed up periodically; however, users are limited on how much they can store.
- Scratch – also a network filesystem, which has more space available than the Home directory, but it is not backed up, and should not be used for long term storage.
You can also explore the available filesystems using df
to show disk
free space. The -h
flag renders the sizes in a human-friendly format,
i.e., GB instead of B. The type flag -T
shows what kind of filesystem
each resource is.
[yourUsername@borah-login ~]$ df -Th
Different results from df
- The local filesystems (ext, tmp, xfs, zfs) will depend on whether you’re on the same login node (or compute node, later on).
- Networked filesystems (beegfs, cifs, gpfs, nfs, pvfs) will be similar – but may include yourUsername, depending on how it is mounted.
Shared Filesystems
This is an important point to remember: files saved on one node (computer) are often available everywhere on the cluster!
Nodes
Recall that the individual computers that compose a cluster are called nodes. On a cluster, there are different types of nodes for different types of tasks. The node where you are right now is called the login node. A login node serves as the access point to the cluster for all users.
As a gateway, the login node should not be used for time-consuming or resource-intensive tasks, as consuming the CPU or memory of the login node would slow down the cluster for everyone! It is well suited for uploading and downloading files, minor software setup, and submitting jobs to the scheduler. Generally speaking, in these lessons, we will avoid running jobs on the login node.
Who else is logged in to the login node?
[yourUsername@borah-login ~]$ who
This may show only your user ID, but there are likely several other people (including fellow learners) connected right now.
The real work on a cluster gets done by the compute (or worker) nodes. Compute nodes come in many shapes and sizes, but generally are dedicated to long or hard tasks that require a lot of computational resources.
All interaction with the compute nodes is handled by a specialized piece of software called a scheduler (the scheduler used in this lesson is called Slurm). We’ll learn more about how to use the scheduler to submit jobs next, but for now, it can also tell us more information about the compute nodes.
For example, we can view all of the compute nodes by running the command
sinfo
.
[yourUsername@borah-login ~]$ sinfo
PARTITION AVAIL TIMELIMIT NODES STATE NODELIST
bigmem up infinite 1 alloc himem101
bsudfq* up infinite 7 mix cpu[101,110-112,116-118]
bsudfq* up infinite 11 alloc cpu[102-109,113-115]
bsudfq* up infinite 22 idle cpu[119-140]
gpu up infinite 4 idle gpu[101-104]
short up 2-00:00:00 7 mix cpu[101,110-112,116-118]
short up 2-00:00:00 32 alloc cpu[102-109,113-115,141,150-169]
short up 2-00:00:00 34 idle cpu[119-140,142-149,170-172,214]
shortgpu up 7-00:00:00 5 idle gpu[101-105]
A lot of the nodes are busy running work for other users: we are not alone here!
There are also specialized machines used for managing disk storage, user authentication, and other infrastructure-related tasks. Although we do not typically logon to or interact with these machines directly, they enable a number of key features like ensuring our user account and files are available throughout the HPC system.
What’s in a Node?
All of the nodes in an HPC system have the same components as your own laptop or desktop: CPUs (sometimes also called processors or cores), memory (or RAM), and disk space. CPUs are a computer’s tool for actually running programs and calculations. Information about a current task is stored in the computer’s memory. Disk refers to all storage that can be accessed like a file system. This is generally storage that can hold data permanently, i.e. data is still there even if the computer has been restarted. While this storage can be local (a hard drive installed inside of it), it is more common for nodes to connect to a shared, remote fileserver or cluster of servers.

Explore Your Computer
Try to find out the number of CPUs and amount of memory available on your personal computer.
Note that, if you're logged in to the remote computer cluster, you need to log out first. To do so, type Ctrl+d or exit:
[yourUsername@borah-login ~]$ exit
[you@laptop:~]$
Solution
There are several ways to do this. Most operating systems have a graphical system monitor, like the Windows Task Manager. More detailed information can be found on the command line:
- Run system utilities
[you@laptop:~]$ nproc --all
[you@laptop:~]$ free -h
- Read from /proc
[you@laptop:~]$ cat /proc/cpuinfo
[you@laptop:~]$ cat /proc/meminfo
- (Or on macOS) Run system_profiler
[you@laptop:~]$ system_profiler SPHardwareDataType
Explore a Worker Node
Finally, let’s look at the resources available on the worker nodes where your jobs will actually run. Try running this command to see the name, number of CPUs, and memory (in MB) available on the worker nodes:
[yourUsername@borah-login ~]$ sinfo -n cpu101 -o "%n %c %m"
Compare Your Computer and the Compute Node
Compare your laptop’s number of processors and memory with the numbers you see on the cluster compute node. What implications do you think the differences might have on running your research work on the different systems and nodes?
Solution
Compute nodes are usually built with processors that have higher core-counts than the login node or personal computers in order to support highly parallel tasks. Compute nodes usually also have substantially more memory (RAM) installed than a personal computer. More cores tends to help jobs that depend on some work that is easy to perform in parallel, and more, faster memory is key for large or complex numerical tasks.
Differences Between Nodes
Many HPC clusters have a variety of nodes optimized for particular workloads. Some nodes may have larger amounts of memory, or specialized resources such as Graphics Processing Units (GPUs or "video cards").
With all of this in mind, we will now cover how to talk to the cluster’s scheduler and use it to start running our scripts and programs!
Key Points
An HPC system is a set of networked machines.
HPC systems typically provide login nodes and a set of compute nodes.
The resources found on independent (worker) nodes can vary in volume and type (amount of RAM, processor architecture, availability of network mounted filesystems, etc.).
Files saved on shared storage are available on all nodes.
The login node is a shared machine: be considerate of other users.
Scheduler Fundamentals
Overview
Teaching: 30 min
Exercises: 30 min
Questions
What is a scheduler and why does a cluster need one?
How do I launch a program to run on a compute node in the cluster?
How do I capture the output of a program that is run on a node in the cluster?
Objectives
Submit a simple script to the cluster.
Monitor the execution of jobs using command line tools.
Inspect the output and error files of your jobs.
Use compute nodes interactively for resource intensive tasks.
Job Scheduler
An HPC system might have thousands of nodes and users. How do we decide who gets what and when? How do we ensure that a task is run with the resources it needs? This job is handled by a special piece of software called the scheduler. On an HPC system, the scheduler manages which jobs run where and when.
The following illustration compares these tasks of a job scheduler to a waiter in a restaurant. If you can relate to an instance where you had to wait for a while in a queue to get in to a popular restaurant, then you may now understand why sometimes your job does not start instantly as in your laptop.

The scheduler used in this lesson is Slurm. Although Slurm is not used everywhere, running jobs is quite similar regardless of what software is being used. The exact syntax might change, but the concepts remain the same.
Running a Batch Job
The most basic use of the scheduler is to run a command non-interactively. Any command (or series of commands) that you want to run on the cluster is called a job, and the process of using a scheduler to run the job is called batch job submission.
In this case, the job we want to run is a shell script – essentially a text file containing a list of UNIX commands to be executed in a sequential manner. Our shell script will have three parts:
- On the very first line, add #!/usr/bin/env bash. The #! (pronounced "hash-bang" or "shebang") tells the computer what program is meant to process the contents of this file. In this case, we are telling it that the commands that follow are written for the command-line shell (where we've been doing everything so far).
- Anywhere below the first line, we'll add an echo command with a friendly greeting. When run, the shell script will print whatever comes after echo in the terminal. echo -n will print everything that follows, without ending the line by printing the new-line character.
- On the last line, we'll invoke the hostname command, which will print the name of the machine the script is run on.
First open your new script in a text editor:
[yourUsername@borah-login ~]$ nano example-job.sh
#!/usr/bin/env bash
echo -n "This script is running on "
hostname
Creating Our Test Job
Run the script. Does it execute on the cluster or just our login node?
Solution
[yourUsername@borah-login ~]$ bash example-job.sh
This script is running on borah-login
This script ran on the login node, but we want to take advantage of
the compute nodes: we need the scheduler to queue up example-job.sh
to run on a compute node.
To submit this task to the scheduler, we use the
sbatch
command.
This creates a job which will run the script when dispatched to
a compute node which the queuing system has identified as being
available to perform the work.
[yourUsername@borah-login ~]$ sbatch --partition=bsudfq example-job.sh
Submitted batch job 36855
And that’s all we need to do to submit a job. Our work is done – now the
scheduler takes over and tries to run the job for us. While the job is waiting
to run, it goes into a list of jobs called the queue. To check on our job’s
status, we check the queue using the command
squeue --me
.
[yourUsername@borah-login ~]$ squeue --me
JOBID PARTITION NAME ST TIME NODES NODELIST(REASON)
36855 bsudfq example- R 0:05 1 cpu116
We can see all the details of our job, most importantly that it is in the R
or RUNNING
state. Sometimes our jobs might need to wait in a queue
(PENDING
) or have an error (E
).
Where’s the Output?
On the login node, this script printed output to the terminal – but now, when
squeue
shows the job has finished, nothing was printed to the terminal.
Cluster job output is typically redirected to a file in the directory you launched it from. Use ls to find and cat to read the file.
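For the job submitted above, Slurm's default output file name follows the pattern slurm-<jobID>.out, so you would look for something like the following (your job ID will differ):
[yourUsername@borah-login ~]$ ls
[yourUsername@borah-login ~]$ cat slurm-36855.out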
Customising a Job
The job we just ran used all of the scheduler’s default options. In a real-world scenario, that’s probably not what we want. The default options represent a reasonable minimum. Chances are, we will need more cores, more memory, more time, among other special considerations. To get access to these resources we must customize our job script.
Comments in UNIX shell scripts (denoted by #
) are typically ignored, but
there are exceptions. For instance the special #!
comment at the beginning of
scripts specifies what program should be used to run it (you’ll typically see
#!/usr/bin/env bash
). Schedulers like Slurm also
have a special comment used to denote special scheduler-specific options.
Though these comments differ from scheduler to scheduler,
Slurm’s special comment is #SBATCH
. Anything
following the #SBATCH
comment is interpreted as an
instruction to the scheduler.
Let’s illustrate this by example. By default, a job’s name is the name of the
script, but the -J
option can be used to change the
name of a job. Add an option to the script:
[yourUsername@borah-login ~]$ cat example-job.sh
#!/usr/bin/env bash
#SBATCH -J hello-world
echo -n "This script is running on "
hostname
Submit the job and monitor its status:
[yourUsername@borah-login ~]$ sbatch --partition=bsudfq example-job.sh
[yourUsername@borah-login ~]$ squeue --me
JOBID PARTITION NAME ST TIME NODES NODELIST(REASON)
212202 bsudfq hello-wo R 0:02 1 cpu101
Fantastic, we’ve successfully changed the name of our job!
Resource Requests
What about more important changes, such as the number of cores and memory for our jobs? One thing that is absolutely critical when working on an HPC system is specifying the resources required to run a job. This allows the scheduler to find the right time and place to schedule our job. If you do not specify requirements (such as the amount of time you need), you will likely be stuck with your site’s default resources, which is probably not what you want.
The following are several key resource requests:
- --cpus-per-task=<ncpus> or -c <ncpus>: How many CPU cores does each task need? Useful for applications using shared memory parallelism, e.g., OpenMP.
- --ntasks=<ntasks> or -n <ntasks>: How many tasks will your job use? Useful for applications using distributed parallelism, e.g., MPI.
- --time <days-hours:minutes:seconds> or -t <days-hours:minutes:seconds>: How much real-world time (walltime) will your job take to run? The <days> part can be omitted.
- --mem=<megabytes>: How much memory on a node does your job need in megabytes? You can also specify gigabytes by adding a little "g" afterwards (example: --mem=5g).
- --gres=gpu:<ngpu>: How many GPUs does your job need? Make sure you are submitting to a partition with GPUs, e.g. --partition=gpu or --partition=shortgpu.
- --nodes=<nnodes> or -N <nnodes>: How many separate machines does your job need to run on? Note that if you set ntasks to a number greater than what one machine can offer, Slurm will set this value automatically.
Note that just requesting these resources does not make your job run faster, nor does it necessarily mean that you will consume all of these resources. It only means that these are made available to you. Your job may end up using less memory, or less time, or fewer nodes than you have requested, and it will still run.
It’s best if your requests accurately reflect your job’s requirements. We’ll talk more about how to make sure that you’re using resources effectively in a later episode of this lesson.
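Putting a few of these requests together, the top of a job script might look like this sketch (the partition, core count, memory, and time limit are illustrative values, not recommendations for any particular workload):
#!/usr/bin/env bash
#SBATCH -J resource-demo
#SBATCH --partition=bsudfq
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4
#SBATCH --mem=8g
#SBATCH --time=0-01:00:00
echo -n "This script is running on "
hostname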
Submitting Resource Requests
Modify our
hostname
script so that it runs for a minute, then submit a job for it on the cluster.
Solution
[yourUsername@borah-login ~]$ cat example-job.sh
#!/usr/bin/env bash
#SBATCH -t 00:01 # timeout in HH:MM
echo -n "This script is running on "
sleep 20 # time in seconds
hostname
[yourUsername@borah-login ~]$ sbatch --partition=bsudfq example-job.sh
Why are the Slurm runtime and
sleep
time not identical?
Resource requests are typically binding. If you exceed them, your job will be killed. Let's use wall time as an example. We will request 1 minute of wall time and attempt to run a job for four minutes.
[yourUsername@borah-login ~]$ cat example-job.sh
#!/usr/bin/env bash
#SBATCH -J long_job
#SBATCH -t 00:01 # timeout in HH:MM
echo "This script is running on ... "
sleep 240 # time in seconds
hostname
Submit the job and wait for it to finish. Once it has finished, check the log file.
[yourUsername@borah-login ~]$ sbatch --partition=bsudfq example-job.sh
[yourUsername@borah-login ~]$ squeue --me
[yourUsername@borah-login ~]$ cat slurm-38193.out
This script is running on ...
slurmstepd: error: *** JOB 38193 ON cpu116 CANCELLED AT
2023-03-28T16:35:48 DUE TO TIME LIMIT ***
Our job was killed for exceeding the amount of resources it requested. Although this appears harsh, this is actually a feature. Strict adherence to resource requests allows the scheduler to find the best possible place for your jobs. Even more importantly, it ensures that another user cannot use more resources than they’ve been given. If another user messes up and accidentally attempts to use all of the cores or memory on a node, Slurm will either restrain their job to the requested resources or kill the job outright. Other jobs on the node will be unaffected. This means that one user cannot mess up the experience of others, the only jobs affected by a mistake in scheduling will be their own.
Cancelling a Job
Sometimes we’ll make a mistake and need to cancel a job. This can be done with
the scancel
command. Let’s submit a job and then cancel it using
its job number (remember to change the walltime so that it runs long enough for
you to cancel it before it is killed!).
[yourUsername@borah-login ~]$ sbatch --partition=bsudfq example-job.sh
[yourUsername@borah-login ~]$ squeue --me
Submitted batch job 212203
JOBID PARTITION NAME ST TIME NODES NODELIST(REASON)
212203 bsudfq hello-wo R 0:03 1 cpu101
Now cancel the job with its job number (printed in your terminal). A clean return of your command prompt indicates that the request to cancel the job was successful.
[yourUsername@borah-login ~]$ scancel 212203
# It might take a minute for the job to disappear from the queue...
[yourUsername@borah-login ~]$ squeue --me
JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)
Cancelling multiple jobs
We can also cancel all of our jobs at once using the
--me
option. This will cancel all jobs for a specific user (in this case, yourself). Note that you can only cancel your own jobs.
Try submitting multiple jobs and then cancelling them all.
Solution
First, submit a trio of jobs:
[yourUsername@borah-login ~]$ sbatch --partition=bsudfq example-job.sh
[yourUsername@borah-login ~]$ sbatch --partition=bsudfq example-job.sh
[yourUsername@borah-login ~]$ sbatch --partition=bsudfq example-job.sh
Then, cancel them all:
[yourUsername@borah-login ~]$ scancel --me
Interactive jobs
Up to this point, we’ve focused on running jobs in batch mode. Slurm also provides the ability to start an interactive session.
There are very frequently tasks that need to be done interactively. Creating an
entire job script might be overkill, but the amount of resources required is
too much for a login node to handle. A good example of this might be building a
genome index for alignment with a tool like HISAT2. Fortunately, we
can run these types of tasks using dev-session
, a shortcut
which opens a bash terminal on a compute node.
[yourUsername@borah-login ~]$ dev-session
You should be presented with a bash prompt. Note that the prompt will likely
change to reflect your new location, in this case the compute node we are
logged on. You can also verify this with hostname
.
When you are done with the interactive job, type exit
or ctrl +
D to quit your session.
You can also use many of the interactive apps on ondemand.boisestate.edu.
Key Points
The scheduler handles how compute resources are shared between users.
A job is just a shell script.
Request slightly more resources than you will need.
Coffee Break
Overview
Teaching: 0 min
Exercises: 0 min
Questions
Objectives

Key Points
Accessing software via Modules
Overview
Teaching: 20 min
Exercises: 10 min
Questions
How do we load and unload software packages?
Objectives
Load and use a software package.
Explain how the shell environment changes when the module mechanism loads or unloads packages.
On a high-performance computing system, it is seldom the case that the software we want to use is available when we log in. It is installed, but we will need to “load” it before it can run.
Before we start using individual software packages, however, we should understand the reasoning behind this approach. The three biggest factors are:
- software incompatibilities
- versioning
- dependencies
Software incompatibility is a major headache for programmers. Sometimes the
presence (or absence) of a software package will break others that depend on
it. Two well known examples are Python and C compiler versions.
Python 3 famously provides a python
command that conflicts with that provided
by Python 2. Software compiled against a newer version of the C libraries and
then run on a machine that has older C libraries installed will result in a
nasty 'GLIBCXX_3.4.20' not found
error.
Software versioning is another common issue. A team might depend on a certain package version for their research project - if the software version was to change (for instance, if a package was updated), it might affect their results. Having access to multiple software versions allows a set of researchers to prevent software versioning issues from affecting their results.
Dependencies are where a particular software package (or even a particular version) depends on having access to another software package (or even a particular version of another software package). For example, the VASP materials science software may depend on having a particular version of the FFTW (Fastest Fourier Transform in the West) software library available for it to work.
Environment Modules
Environment modules are the solution to these problems. A module is a self-contained description of a software package – it contains the settings required to run a software package and, usually, encodes required dependencies on other software packages.
There are a number of different environment module implementations commonly
used on HPC systems: the two most common are TCL modules and Lmod. Both of
these use similar syntax and the concepts are the same so learning to use one
will allow you to use whichever is installed on the system you are using. In
both implementations the module
command is used to interact with environment
modules. An additional subcommand is usually added to the command to specify
what you want to do. For a list of subcommands you can use module -h
or
module help
. As for all commands, you can access the full help on the man
pages with man module
.
On login you may start out with a default set of modules loaded or you may start out with an empty environment; this depends on the setup of the system you are using.
Listing Available Modules
To see available software modules, use module avail
:
[yourUsername@borah-login ~]$ module avail
---------------------------- /cm/shared/modulefiles ---------------------------
abyss/2.3.1 vim/9.0.2149
adolc/2.6.3/gcc/12.1.0 wgrib2/3.1.3/gcc/12.1.0
afni/23.2.11 wps/intel/3.8.1
agisoft/2.1.0 wps/intel/4.1.2
alphafold/2(default) wrf-hydro/4.1.2
alphafold/2.3.2 wrf/intel/3.8.1
alphafold/3.0.0 wrf/intel/4.1.2
alpine3d/3.2.0.c3aaad0/openmpi/4.1.3/gcc/12.1.0 zlib/intel/1.2.11
[removed most of the output here for clarity]
Listing Currently Loaded Modules
You can use the module list
command to see which modules you currently have
loaded in your environment. If you have no modules loaded, you will see a
message telling you so.
[yourUsername@borah-login ~]$ module list
Currently Loaded Modulefiles:
1) slurm/slurm/23.02.7
Loading and Unloading Software
To load a software module, use module load
. In this example we will use
Python 3.
Initially, Python 3 is not loaded. We can test this by using the which
command. which
looks for programs the same way that Bash does, so we can use
it to tell us where a particular piece of software is stored.
[yourUsername@borah-login ~]$ which python3
If the python3
command was unavailable, we would see output like
/usr/bin/which: no python3 in (/cm/shared/apps/slurm/current/sbin:/cm/shared/apps/slurm/current/bin:/cm/local/apps/gcc/9.2.0/bin:/cm/local/apps/environment-modules/4.4.0//bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/opt/ibutils/bin:/sbin:/usr/sbin:/cm/local/apps/environment-modules/4.4.0/bin:/opt/dell/srvadmin/bin:/bsuhome/yourUsername/.local/bin:/bsuhome/yourUsername/bin)
Note that this wall of text is really a list, with values separated
by the :
character.
However, in our case we do have an existing python3
available so we see
/usr/bin/python3
We need a different Python than the system provided one though, so let us load a module to access it.
We can load the python3
command with module load
:
[yourUsername@borah-login ~]$ module load python/3.9.7
[yourUsername@borah-login ~]$ which python3
/cm/local/apps/python3/bin/python3
So, what just happened?
To understand the output, first we need to understand the nature of the $PATH
environment variable. $PATH
is a special environment variable that controls
where a UNIX system looks for software. Specifically $PATH
is a list of
directories (separated by :
) that the OS searches through for a command
before giving up and telling us it can’t find it. As with all environment
variables we can print it out using echo
.
[yourUsername@borah-login ~]$ echo $PATH
/cm/local/apps/python3/bin:/cm/shared/apps/slurm/current/sbin:/cm/shared/apps/slurm/current/bin:/cm/local/apps/gcc/9.2.0/bin:/cm/local/apps/environment-modules/4.4.0//bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/opt/ibutils/bin:/sbin:/usr/sbin:/cm/local/apps/environment-modules/4.4.0/bin:/opt/dell/srvadmin/bin:/bsuhome/yourUsername/.local/bin:/bsuhome/yourUsername/bin
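If that long line is hard to read, you can (optionally) print one directory per line by splitting it on the colons with tr:
[yourUsername@borah-login ~]$ echo $PATH | tr ':' '\n'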
You’ll notice a similarity to the output of the which
command. In this case,
there’s only one difference: the different directory at the beginning. When we
ran the module load
command, it added a directory to the beginning of our
$PATH
. Let’s examine what’s there:
[yourUsername@borah-login ~]$ ls /cm/local/apps/python3/bin/py*
py3clean pydoc3.5 python2 python3-config
py3compile pygettext python2.7 python3-futurize
py3versions pygettext2.7 python2.7-config python3m
pybuild pygettext3 python2-config python3m-config
pyclean pygettext3.5 python3 python3-pasteurize
pycompile pygobject-codegen-2.0 python3.5 python-config
pydoc pygtk-codegen-2.0 python3.5-config pyversions
pydoc2.7 pygtk-demo python3.5m
pydoc3 python python3.5m-config
Taking this to its conclusion, module load
will add software to your $PATH
.
It “loads” software. A special note on this - depending on which version of the
module
program that is installed at your site, module load
will also load
required software dependencies.
To demonstrate, let’s use module list
. module list
shows all loaded
software modules.
[yourUsername@borah-login ~]$ module list
Currently Loaded Modulefiles:
1) slurm/slurm/23.02.7
[yourUsername@borah-login ~]$ module load gromacs/2024.2
[yourUsername@borah-login ~]$ module list
Currently Loaded Modulefiles:
1) slurm/slurm/23.02.7 4) openmpi/4.1.3/gcc/12.1.0
2) borah-base/default 5) cuda_toolkit/12.3.0
3) gcc/12.1.0 6) gromacs/2024.2/openmpi/4.1.3/gcc/12.1.0
So in this case, loading the gromacs
module (a molecular dynamics software
package), also loaded openmpi/4.1.3/gcc/12.1.0
and gcc/12.1.0
as well.
Let’s try unloading the gromacs
package.
[yourUsername@borah-login ~]$ module unload gromacs
[yourUsername@borah-login ~]$ module list
Currently Loaded Modulefiles:
1) slurm/slurm/23.02.7
So using module unload
“un-loads” a module, and depending on how a site is
configured it may also unload all of the dependencies. If we wanted to unload
everything at once, we could run module purge
(unloads everything).
[yourUsername@borah-login ~]$ module purge
[yourUsername@borah-login ~]$ module list
No Modulefiles Currently Loaded.
Note that this module loading process happens principally through
the manipulation of environment variables like $PATH
. There
is usually little or no data transfer involved.
The module loading process manipulates other special environment variables as well, including variables that influence where the system looks for software libraries, and sometimes variables which tell commercial software packages where to find license servers.
The module command also restores these shell environment variables to their previous state when a module is unloaded.
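If you are curious exactly which variables a given module changes, both TCL modules and Lmod provide a module show subcommand that prints what a modulefile would do without loading it (the exact output format varies by site):
[yourUsername@borah-login ~]$ module show python/3.9.7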
Software Versioning
So far, we’ve learned how to load and unload software packages. This is very useful. However, we have not yet addressed the issue of software versioning. At some point or other, you will run into issues where only one particular version of some software will be suitable. Perhaps a key bugfix only happened in a certain version, or version X broke compatibility with a file format you use. In either of these example cases, it helps to be very specific about what software is loaded.
Let’s examine the output of module avail
more closely.
[yourUsername@borah-login ~]$ module avail gcc
---------------------------- /cm/local/modulefiles ----------------------------
gcc/9.2.0
--------------------------- /cm/shared/modulefiles ----------------------------
gcc/7.5.0 gcc/8.2.0 gcc/10.2.0
Let’s take a closer look at the gcc
module. GCC is an extremely widely used
C/C++/Fortran compiler. Tons of software is dependent on the GCC version, and
might not compile or run if the wrong version is loaded. In this case, there
are four different versions: gcc/9.2.0
, gcc/7.5.0
, gcc/8.2.0
, and
gcc/10.2.0
. How do we load a specific version?
In this case, gcc/9.2.0
comes first, so if we type module load gcc
,
this is the copy that will be loaded.
[yourUsername@borah-login ~]$ module load gcc
[yourUsername@borah-login ~]$ module list
[yourUsername@borah-login ~]$ gcc --version
Currently Loaded Modulefiles:
1) gcc/9.2.0
gcc (GCC) 9.2.0
Copyright (C) 2019 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
So how do we load a different copy of a software package? In this case, the
only change we need to make is to be more specific about the module we are
loading: add the version number after the / in the module load command.
[yourUsername@borah-login ~]$ module load gcc/10.2.0
[yourUsername@borah-login ~]$ gcc --version
gcc (GCC) 10.2.0
Copyright (C) 2020 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
We now have successfully switched from GCC 9.2.0 to GCC 10.2.0.
Using Software Modules in Scripts
Create a job that is able to run
python3 --version
. Remember, no software is loaded by default! Running a job is just like logging on to the system (you should not assume a module loaded on the login node is loaded on a compute node).
Solution
[yourUsername@borah-login ~]$ nano python-module.sh
[yourUsername@borah-login ~]$ cat python-module.sh
#!/usr/bin/env bash
#SBATCH -t 00:00:30
module load python/3.9.7
python3 --version
[yourUsername@borah-login ~]$ sbatch --partition=bsudfq python-module.sh
Key Points
Load software with module load softwareName.
Unload software with module unload.
The module system handles software versioning and package conflicts for you automatically.
Transferring files with remote computers
Overview
Teaching: 10 min
Exercises: 5 min
Questions
How do I transfer files to (and from) the cluster?
Objectives
Transfer files to and from a computing cluster.
Performing work on a remote computer is not very useful if we cannot get files to or from the cluster. There are several options for transferring data between computing resources using CLI and GUI utilities, a few of which we will cover.
Download Lesson Files From the Internet
One of the most straightforward ways to download files is to use either curl
or wget
. One of these is usually installed in most Linux shells, on Mac OS
terminal and in GitBash. Any file that can be downloaded in your web browser
through a direct link can be downloaded using curl
or wget
. This is a
quick way to download datasets or source code. The syntax for these commands is
wget [-O new_name] https://some/link/to/a/file
curl [-o new_name] https://some/link/to/a/file
Try it out by downloading some material we’ll use later on (a wordlist from John Lawler at University of Michigan) from the following url:
https://websites.umich.edu/~jlawler/wordlist
Download the wordlist
By default, curl and wget download files to the same name as the URL: in this case, wordlist. Use one of the above commands to save the file as wordlist.txt.
wget and curl Commands
[yourUsername@borah-login ~]$ wget -O wordlist.txt https://websites.umich.edu/~jlawler/wordlist
# or
[yourUsername@borah-login ~]$ curl -o wordlist.txt -L https://websites.umich.edu/~jlawler/wordlist
The -L option to curl tells it to follow URL redirects (which wget does by default).
After downloading the file, use ls
to see it in your working directory:
[yourUsername@borah-login ~]$ ls
Using the OnDemand File Browser
The OnDemand "Files" tab provides a graphical interface to all your files on Borah.

Here you can edit, upload, download, etc.
Transferring Single Files and Folders With scp
To copy a single file to or from the cluster from our local computer, we can
use scp
(“secure copy”). The syntax can be a little complex for new users,
but we’ll break it down. The scp
command is a relative of the ssh
command
we used to access the system, and can use the same public-key authentication
mechanism.
To upload to another computer, the template command is
[you@laptop:~]$ scp local_file yourUsername@borah-login.boisestate.edu:remote_destination
in which @
and :
are field separators and remote_destination
is a path
relative to your remote home directory, or a new filename if you wish to change
it, or both a relative path and a new filename.
If you don’t have a specific folder in mind you can omit the
remote_destination
and the file will be copied to your home directory on the
remote computer (with its original name).
If you include a remote_destination
, note that scp
interprets this the same
way cp
does when making local copies:
if it exists and is a folder, the file is copied inside the folder; if it
exists and is a file, the file is overwritten with the contents of
local_file
; if it does not exist, it is assumed to be a destination filename
for local_file
.
Upload a file to your remote home directory like so:
[you@laptop:~]$ scp myfile yourUsername@borah-login.boisestate.edu:
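Downloads work the same way, with the remote path as the source and a local path (here ., the current directory) as the destination. For example, to fetch back the file just uploaded:
[you@laptop:~]$ scp yourUsername@borah-login.boisestate.edu:myfile .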
Transferring a Directory
To transfer an entire directory, we add the -r
flag for “recursive”:
copy the item specified, and every item below it, and every item below those…
until it reaches the bottom of the directory tree rooted at the folder name you
provided.
[you@laptop:~]$ scp -r amdahl yourUsername@borah-login.boisestate.edu:
Caution
For a large directory – either in size or number of files – copying with
-r
can take a long time to complete.
When using scp
, you may have noticed that a :
always follows the remote
computer name.
A string after the :
specifies the remote directory you wish to transfer
the file or folder to, including a new name if you wish to rename the remote
material.
If you leave this field blank, scp
defaults to your home directory and the
name of the local material to be transferred.
On Linux computers, /
is the separator in file or directory paths.
A path starting with a /
is called absolute, since there can be nothing
above the root /
.
A path that does not start with /
is called relative, since it is not
anchored to the root.
If you want to upload a file to a location inside your home directory –
which is often the case – then you don’t need a leading /
. After the :
,
you can type the destination path relative to your home directory.
If your home directory is the destination, you can leave the destination
field blank, or type ~
– the shorthand for your home directory – for
completeness.
With scp
, a trailing slash on the target directory is optional, and has no effect.
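As an example of a relative destination, the following sketch would place myfile inside the scratch link we saw in the remote home directory earlier:
[you@laptop:~]$ scp myfile yourUsername@borah-login.boisestate.edu:scratch/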
Working with Windows
When you transfer text files from a Windows system to a Unix system (Mac, Linux, BSD, Solaris, etc.) this can cause problems. Windows encodes its files slightly differently than Unix, and adds an extra character to every line.
On a Unix system, every line in a file ends with a \n (newline). On Windows, every line in a file ends with a \r\n (carriage return + newline). This causes problems sometimes.
Though most modern programming languages and software handle this correctly, in some rare instances, you may run into an issue. The solution is to convert a file from Windows to Unix encoding with the dos2unix command.
You can identify if a file has Windows line endings with cat -A filename. A file with Windows line endings will have ^M$ at the end of every line. A file with Unix line endings will have $ at the end of a line.
To convert the file, just run dos2unix filename. (Conversely, to convert back to Windows format, you can run unix2dos filename.)
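As a quick illustration (myfile.txt is a hypothetical file name, and dos2unix must be installed on the system you are using):
[yourUsername@borah-login ~]$ cat -A myfile.txt # lines ending in ^M$ have Windows line endings
[yourUsername@borah-login ~]$ dos2unix myfile.txt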
Archiving Files
One of the biggest challenges we often face when transferring data between remote HPC systems is that of large numbers of files. There is an overhead to transferring each individual file and when we are transferring large numbers of files these overheads combine to slow down our transfers to a large degree.
The solution to this problem is to archive multiple files into smaller
numbers of larger files before we transfer the data to improve our transfer
efficiency.
Sometimes we will combine archiving with compression to reduce the amount of
data we have to transfer and so speed up the transfer.
The most common archiving command you will use on a (Linux) HPC cluster is
tar
.
tar
can be used to combine files and folders into a single archive file and,
optionally, compress the result.
Let’s look at the file we downloaded from the lesson site, amdahl.tar.gz
.
The .gz
part stands for gzip, which is a compression library.
To view the contents of a tarfile, without unpacking the file, we can use the
-t
flag.
tar
prints the “table of contents” with the -t
flag, for the file
specified with the -f
flag followed by the filename.
Note that you can concatenate the two flags: writing -t -f
is interchangeable
with writing -tf
together.
However, the argument following -f
must be a filename, so writing -ft
will
not work.
First download the example tarfile:
[yourUsername@borah-login ~]$ wget -O amdahl.tar.gz https://github.com/hpc-carpentry/amdahl/tarball/main
# or
[yourUsername@borah-login ~]$ curl -L -o amdahl.tar.gz https://github.com/hpc-carpentry/amdahl/tarball/main
Then list the contents:
[yourUsername@borah-login ~]$ tar -tf amdahl.tar.gz
hpc-carpentry-amdahl-46c9b4b/
hpc-carpentry-amdahl-46c9b4b/.github/
hpc-carpentry-amdahl-46c9b4b/.github/workflows/
hpc-carpentry-amdahl-46c9b4b/.github/workflows/python-publish.yml
hpc-carpentry-amdahl-46c9b4b/.gitignore
hpc-carpentry-amdahl-46c9b4b/LICENSE
hpc-carpentry-amdahl-46c9b4b/README.md
hpc-carpentry-amdahl-46c9b4b/amdahl/
hpc-carpentry-amdahl-46c9b4b/amdahl/__init__.py
hpc-carpentry-amdahl-46c9b4b/amdahl/__main__.py
hpc-carpentry-amdahl-46c9b4b/amdahl/amdahl.py
hpc-carpentry-amdahl-46c9b4b/requirements.txt
hpc-carpentry-amdahl-46c9b4b/setup.py
This example output shows a folder which contains a few files, where 46c9b4b
is an abbreviated git commit hash that will change when the source
material is updated.
Now let’s unpack the archive. We’ll run tar
with a few common flags:
-x
to extract the archive-v
for verbose output-z
for gzip compression-f «tarball»
for the file to be unpacked
Extract the Archive
Using the flags above, unpack the source code tarball into a new directory named “amdahl” using
tar
.[yourUsername@borah-login ~]$ tar -xvzf amdahl.tar.gz
hpc-carpentry-amdahl-46c9b4b/
hpc-carpentry-amdahl-46c9b4b/.github/
hpc-carpentry-amdahl-46c9b4b/.github/workflows/
hpc-carpentry-amdahl-46c9b4b/.github/workflows/python-publish.yml
hpc-carpentry-amdahl-46c9b4b/.gitignore
hpc-carpentry-amdahl-46c9b4b/LICENSE
hpc-carpentry-amdahl-46c9b4b/README.md
hpc-carpentry-amdahl-46c9b4b/amdahl/
hpc-carpentry-amdahl-46c9b4b/amdahl/__init__.py
hpc-carpentry-amdahl-46c9b4b/amdahl/__main__.py
hpc-carpentry-amdahl-46c9b4b/amdahl/amdahl.py
hpc-carpentry-amdahl-46c9b4b/requirements.txt
hpc-carpentry-amdahl-46c9b4b/setup.py
Note that we did not need to type out
-x -v -z -f
, thanks to flag concatenation, though the command works identically either way – so long as the concatenated list ends with f
, because the next string must specify the name of the file to extract.
The folder has an unfortunate name, so let’s change that to something more convenient.
[yourUsername@borah-login ~]$ mv hpc-carpentry-amdahl-46c9b4b amdahl
Check the size of the extracted directory and compare to the compressed
file size, using du
for “disk usage”.
[you@laptop:~]$ du -sh amdahl.tar.gz
8.0K amdahl.tar.gz
[you@laptop:~]$ du -sh amdahl
48K amdahl
Text files (including Python source code) compress nicely: the “tarball” is one-sixth the total size of the raw data!
If you want to reverse the process – compressing raw data instead of
extracting it – set a c
flag instead of x
, set the archive filename,
then provide a directory to compress:
[you@laptop:~]$ tar -cvzf compressed_code.tar.gz amdahl
amdahl/
amdahl/.github/
amdahl/.github/workflows/
amdahl/.github/workflows/python-publish.yml
amdahl/.gitignore
amdahl/LICENSE
amdahl/README.md
amdahl/amdahl/
amdahl/amdahl/__init__.py
amdahl/amdahl/__main__.py
amdahl/amdahl/amdahl.py
amdahl/requirements.txt
amdahl/setup.py
If you give amdahl.tar.gz
as the filename in the above command, tar
will
update the existing tarball with any changes you made to the files.
That would mean adding the new amdahl
folder to the existing folder
(hpc-carpentry-amdahl-46c9b4b
) inside the tarball, doubling the size of the
archive!
Key Points
wget and curl -O download a file from the internet.
Use scp to transfer files to and from your computer.
You can use the file browser in OnDemand to view and transfer files.
Using resources effectively
Overview
Teaching: 20 min
Exercises: 25 min
Questions
What are the different types of parallelism?
How do we execute a task in parallel?
What benefits arise from parallel execution?
What are the limits of gains from execution in parallel?
Objectives
Prepare a job submission script for the parallel executable.
Launch jobs with parallel execution.
Record and summarize the timing and accuracy of jobs.
Describe the relationship between job parallelism and performance.
We now have the full toolset we need to run a job, and we’re going to learn how to scale up our job performance using parallelism. This is a very important aspect of HPC systems, as parallelism is one of the primary tools we have to improve the performance of computational tasks.
In this lesson, we will learn about three ways to parallelize a problem: embarrassingly parallel, shared memory parallelism, and distributed parallelism.
Infinite Monkey Theorem
The infinite monkey theorem states that a monkey hitting keys independently and at random on a typewriter keyboard for an infinite amount of time will almost surely type any given text, including the complete works of William Shakespeare.

We don’t have infinite time or resources, but we can simulate this problem with A LOT of monkeys.
Create the following script to simulate a “monkey”. Let’s name this monkey.py
:
#!/usr/bin/env python3
import random
import string

nwords = 100
minlen = 2
maxlen = 10
dictionaryfile = "wordlist.txt"

# Generate a random string of characters of a certain length
def randomchars(length):
    return "".join([
        random.choice(string.ascii_lowercase) for _ in range(length)
    ])

# Create a list of random character strings with lengths randomly
# distributed between minlen and maxlen
wordlist = [
    randomchars(random.choice(range(minlen, maxlen))) for i in range(nwords)
]

# Read the words from our downloaded dictionary
with open(dictionaryfile, "r") as f:
    englishwords = [line.strip() for line in f]

# Print all the randomly generated "words" that are also in the dictionary
for word in wordlist:
    if word in englishwords:
        print(word)
Let your monkey loose on a compute node
Create a submission file, requesting one task on a single node, then launch it.
[yourUsername@borah-login ~]$ nano monkey-job.sh
[yourUsername@borah-login ~]$ cat monkey-job.sh
#!/usr/bin/env bash
#SBATCH -J monkey
#SBATCH -p short
#SBATCH -N 1
#SBATCH -n 1
# Load the computing environment we need
module load python/3.9.7
# Execute the task
python ./monkey.py
[yourUsername@borah-login ~]$ sbatch monkey-job.sh
As before, use the Slurm status commands to check whether your job is running and when it ends:
[yourUsername@borah-login ~]$ squeue --me
Use ls
to locate the output file. The -t
flag sorts in
reverse-chronological order: newest first. What was the output?
Read the Job Output
The cluster output should be written to a file in the folder you launched the job from. For example,
[yourUsername@borah-login ~]$ ls -t
slurm-2114623.out monkey-job.sh monkey.py wordlist.txt
[yourUsername@borah-login ~]$ cat slurm-2114623.out
wow ld ax tk bl kg
Your monkey no doubt came up with a different list of words.
Now we have our single “monkey” how can we scale this up using a compute cluster?
Running multiple jobs at once
This is an example of an embarrassingly parallel problem: each monkey doesn't need to know anything about what the other monkeys are doing. So if our goal is for the monkeys to generate the most words, more monkeys is the solution.
By making the following modification to our submission script (monkey-job.sh
),
we can use a job array to put multiple monkeys to work!
#!/usr/bin/env bash
#SBATCH -J monkey
#SBATCH -p short
#SBATCH --array=0-10
#SBATCH -N 1
#SBATCH -n 1
# Load the computing environment we need
module load python/3.9.7
# Execute the task
python ./monkey.py
After modifying your submission script, resubmit your job:
[yourUsername@borah-login ~]$ sbatch monkey-job.sh
You can check on your job while it’s running using:
[yourUsername@borah-login ~]$ squeue --me
Or see the time, hostname, and exitcode of finished jobs using:
[yourUsername@borah-login ~]$ sacct -X
Once your job is finished, we can use the word count program, wc
, to see how
many words were generated by our array of monkeys:
[yourUsername@borah-login ~]$ wc -l slurm-2114626_*
8 slurm-2114626_0.out
2 slurm-2114626_10.out
6 slurm-2114626_1.out
4 slurm-2114626_2.out
7 slurm-2114626_3.out
11 slurm-2114626_4.out
8 slurm-2114626_5.out
4 slurm-2114626_6.out
4 slurm-2114626_7.out
5 slurm-2114626_8.out
10 slurm-2114626_9.out
69 total
By using a job array, we were able to increase our word output with no changes to our code and a single change to our submission script. Next we’ll learn about more complex types of parallelism.
Distributed vs shared memory parallelism

Hands on activity
To demonstrate the difference between these types of parallelism, your instructor will lead you through an activity.
Takeaway
Shared memory parallelism requires workers to be able to access the same information. In the puzzle example, the people working together at the same table can reach the same pieces. In the HPC world, shared memory parallelism can be used on a single compute node, where the CPU cores can access the same memory.
Distributed memory parallelism requires a communication framework to give a task to and collect output from each worker. In the puzzle example, people seated at different tables must have someone bring them pieces, take the pieces back, and organize the partial results. In the HPC world, a framework called MPI allows workers across multiple nodes to communicate over a specialized network.
Choosing the right parameters for your Slurm job
When submitting jobs to Slurm, choosing the right combination of --ntasks
and
--cpus-per-task
depends on understanding how your program implements
parallelism.
--ntasks
specifies the number of separate processes with independent memory
spaces. Each task runs as a distinct process that cannot directly access
another task’s memory. This maps to distributed memory parallelism, where
processes communicate through message passing (like MPI). Use --ntasks
when
your program:
- Uses MPI (e.g. mpirun -n 4 your_program)
- Runs multiple independent instances
- Needs processes that communicate via networks or files
--cpus-per-task
specifies how many CPU cores each task can use through
threads that share the same memory space. This maps to shared memory
parallelism, where threads within a process can directly access the same
variables and data structures. Use --cpus-per-task
when your program:
- Uses OpenMP (e.g. export OMP_NUM_THREADS=8)
- Uses threading libraries (pthreads, TBB, OpenMP)
- Parallelizes loops with shared data structures
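To make this concrete, here are two minimal sketches of batch scripts following the conventions of the earlier examples. The executables mpi_program and omp_program (and the specific task and core counts) are placeholders for illustration, not software installed on Borah.
An MPI-style job using distributed memory parallelism:
#!/usr/bin/env bash
#SBATCH -J mpi-example
#SBATCH -p short
#SBATCH -N 1
#SBATCH --ntasks=4
# Four separate processes, each with its own memory space, communicating by
# message passing. srun launches one copy per task (mpirun -n 4 also works).
srun ./mpi_program
An OpenMP-style job using shared memory parallelism:
#!/usr/bin/env bash
#SBATCH -J openmp-example
#SBATCH -p short
#SBATCH -N 1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=8
# One process whose threads share memory; tell OpenMP to use the cores
# Slurm allocated to this task.
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}
./omp_program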
Giving your job more resources doesn’t necessarily mean it will have better performance. It’s important to evaluate what kind of parallelization your program can use and tailor your resource request to fit. To learn more about parallelization, see the parallel novice lesson and the Cornell Virtual Workshop on Parallel Programming Concepts and High Performance Computing.
Key Points
Parallel programming allows applications to take advantage of parallel hardware.
The queuing system facilitates executing parallel tasks.
Some performance improvements from parallel execution do not scale linearly.
Using shared resources responsibly
Overview
Teaching: 15 min
Exercises: 5 min
Questions
How can I be a responsible user?
How can I protect my data?
How can I best get large amounts of data off an HPC system?
Objectives
Describe how the actions of a single user can affect the experience of others on a shared system.
Discuss the behaviour of a considerate shared system citizen.
Explain the importance of backing up critical data.
Describe the challenges with transferring large amounts of data off HPC systems.
Convert many files to a single archive file using tar.
One of the major differences between using remote HPC resources and your own system (e.g. your laptop) is that remote resources are shared. How many users the resource is shared between at any one time varies from system to system, but it is unlikely you will ever be the only user logged into or using such a system.
The widespread usage of scheduling systems where users submit jobs on HPC resources is a natural outcome of the shared nature of these resources. There are other things you, as an upstanding member of the community, need to consider.
Be Kind to the Login Nodes
The login node is often busy managing all of the logged in users, creating and editing files and compiling software. If the machine runs out of memory or processing capacity, it will become very slow and unusable for everyone. While the machine is meant to be used, be sure to do so responsibly – in ways that will not adversely impact other users’ experience.
Login nodes are always the right place to launch jobs. Cluster policies vary, but they may also be used for proving out workflows, and in some cases, may host advanced cluster-specific debugging or development tools. The cluster may have modules that need to be loaded, possibly in a certain order, and paths or library versions that differ from your laptop, and doing an interactive test run on the head node is a quick and reliable way to discover and fix these issues.
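For example, a quick interactive check of the environment used earlier in this lesson could look like the following; keep such tests short, and stop them with Ctrl+C if they run longer than expected:
[yourUsername@borah-login ~]$ module load python/3.9.7
[yourUsername@borah-login ~]$ python ./monkey.py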
Login Nodes Are a Shared Resource
Remember, the login node is shared with all other users and your actions could cause issues for other people. Think carefully about the potential implications of issuing commands that may use large amounts of resource.
Unsure? Ask your friendly systems administrator (“sysadmin”) if the thing you’re contemplating is suitable for the login node, or if there’s another mechanism to get it done safely.
You can always use the commands top
and ps ux
to list the processes that
are running on the login node along with the amount of CPU and memory they are
using. If this check reveals that the login node is somewhat idle, you can
safely use it for your non-routine processing task. If something goes wrong
– the process takes too long, or doesn’t respond – you can use the
kill
command along with the PID to terminate the process.
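As a sketch, the sequence might look like this (the PID 12345 is hypothetical; substitute the one reported by ps or top):
[yourUsername@borah-login ~]$ ps ux
[yourUsername@borah-login ~]$ kill 12345
If the process refuses to exit, kill -9 12345 forces termination as a last resort.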
Login Node Etiquette
Which of these commands would be a routine task to run on the login node?
1. python physics_sim.py
2. make
3. create_directories.sh
4. molecular_dynamics_2
5. tar -xzf R-3.3.0.tar.gz
Solution
Building software, creating directories, and unpacking software are common and acceptable tasks for the login node: options #2 (make), #3 (mkdir), and #5 (tar) are probably OK. Note that script names do not always reflect their contents: before launching #3, please less create_directories.sh and make sure it’s not a Trojan horse.
Running resource-intensive applications is frowned upon. Unless you are sure it will not affect other users, do not run jobs like #1 (python) or #4 (custom MD code). If you’re unsure, ask your friendly sysadmin for advice.
If you experience performance issues with a login node, report it to the system staff so they can investigate.
Test Before Scaling
Remember that you are generally charged for usage on shared systems. A simple mistake in a job script can end up costing a large amount of resource budget. Imagine a job script with a mistake that makes it sit doing nothing for 24 hours on 1000 cores or one where you have requested 2000 cores by mistake and only use 100 of them! When this happens it hurts both you (as you waste lots of charged resource) and other users (who are blocked from accessing the idle compute nodes). On very busy resources you may wait many days in a queue for your job to fail within 10 seconds of starting due to a trivial typo in the job script. This is extremely frustrating!
Most systems provide dedicated resources for testing that have short wait times to help you avoid this issue.
Test Job Submission Scripts That Use Large Amounts of Resources
Before submitting a large run of jobs, submit one as a test first to make sure everything works as expected.
Before submitting a very large or very long job, submit a short truncated test to ensure that the job starts as expected.
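For example, with the array job from earlier you could run a single array element as a test before launching the full range, since options given on the sbatch command line take precedence over the #SBATCH directives in the script:
[yourUsername@borah-login ~]$ sbatch --array=0 monkey-job.sh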
Have a Backup Plan
Although many HPC systems keep backups, these do not always cover all the available file systems and may only be for disaster recovery purposes (i.e. restoring the whole file system if it is lost, rather than an individual file or directory you have deleted by mistake). Protecting critical data from corruption or deletion is primarily your responsibility: keep your own backup copies.
Version control systems (such as Git) often have free, cloud-based offerings (e.g., GitHub and GitLab) that are generally used for storing source code. Even if you are not writing your own programs, these can be very useful for storing job scripts, analysis scripts and small input files.
If you are building software, you may have a large amount of source code that you compile to build your executable. Since this data can generally be recovered by re-downloading the code, or re-running the checkout operation from the source code repository, this data is also less critical to protect.
For larger amounts of data, especially important results from your runs,
which may be irreplaceable, you should make sure you have a robust system in
place for taking copies of data off the HPC system wherever possible
to backed-up storage. Tools such as rsync
can be very useful for this.
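For example, you might pull an output directory down to your own machine; the results folder and destination path here are hypothetical, so adjust them to match your project:
[you@laptop:~]$ rsync -avz yourUsername@borah-login.boisestate.edu:~/results ~/borah-backup/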
Your access to the shared HPC system will generally be time-limited so you should ensure you have a plan for transferring your data off the system before your access finishes. The time required to transfer large amounts of data should not be underestimated and you should ensure you have planned for this early enough (ideally, before you even start using the system for your research).
In all these cases, the helpdesk of the system you are using should be able to provide useful guidance on your options for data transfer for the volumes of data you will be using.
Your Data Is Your Responsibility
Make sure you understand what the backup policy is on the file systems on the system you are using and what implications this has for your work if you lose your data on the system. Plan your backups of critical data and how you will transfer data off the system throughout the project.
Transferring Data
As mentioned above, many users run into the challenge of transferring large amounts of data off HPC systems at some point (this applies more often to transferring data off than onto systems, but the advice below applies in either case). Data transfer speed may be limited by many different factors, so the best data transfer mechanism to use depends on the type of data being transferred and where the data is going.
The components between your data’s source and destination have varying levels of performance, and in particular, may have different capabilities with respect to bandwidth and latency.
Bandwidth is generally the raw amount of data per unit time a device is capable of transmitting or receiving. It’s a common and generally well-understood metric.
Latency is a bit more subtle. For data transfers, it may be thought of as the amount of time it takes to get data out of storage and into a transmittable form. Latency issues are the reason it’s advisable to execute data transfers by moving a small number of large files, rather than the converse.
Some of the key components and their associated issues are:
- Disk speed: File systems on HPC systems are often highly parallel, consisting of a very large number of high performance disk drives. This allows them to support a very high data bandwidth. Unless the remote system has a similar parallel file system you may find your transfer speed limited by disk performance at that end.
- Meta-data performance: Meta-data operations such as opening and closing files or listing the owner or size of a file are much less parallel than read/write operations. If your data consists of a very large number of small files you may find your transfer speed is limited by meta-data operations. Meta-data operations performed by other users of the system can also interact strongly with those you perform so reducing the number of such operations you use (by combining multiple files into a single file) may reduce variability in your transfer rates and increase transfer speeds.
- Network speed: Data transfer performance can be limited by network speed. More importantly it is limited by the slowest section of the network between source and destination. If you are transferring to your laptop/workstation, this is likely to be its connection (either via LAN or WiFi).
- Firewall speed: Most modern networks are protected by some form of firewall that filters out malicious traffic. This filtering has some overhead and can result in a reduction in data transfer performance. The needs of a general purpose network that hosts email/web-servers and desktop machines are quite different from a research network that needs to support high volume data transfers. If you are trying to transfer data to or from a host on a general purpose network you may find the firewall for that network will limit the transfer rate you can achieve.
As mentioned above, if you have related data that consists of a large number of
small files it is strongly recommended to pack the files into a larger
archive file for long term storage and transfer. A single large file makes
more efficient use of the file system and is easier to move, copy and transfer
because significantly fewer metadata operations are required. Archive files can
be created using tools like tar
and zip
. We have already met tar
when we
talked about data transfer earlier.
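For instance, to pack a (hypothetical) directory of many small result files into one compressed archive before transferring it, and check the size of the result:
[yourUsername@borah-login ~]$ tar -czvf results.tar.gz results/
[yourUsername@borah-login ~]$ ls -lh results.tar.gz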
Consider the Best Way to Transfer Data
If you are transferring large amounts of data you will need to think about what may affect your transfer performance. It is always useful to run some tests that you can use to extrapolate how long it will take to transfer your data.
Say you have a “data” folder containing 10,000 or so files, a healthy mix of small and large ASCII and binary data. Which of the following would be the best way to transfer them to Borah?
1. [you@laptop:~]$ scp -r data yourUsername@borah-login.boisestate.edu:~/
2. [you@laptop:~]$ rsync -ra data yourUsername@borah-login.boisestate.edu:~/
3. [you@laptop:~]$ rsync -raz data yourUsername@borah-login.boisestate.edu:~/
4. [you@laptop:~]$ tar -cvf data.tar data
   [you@laptop:~]$ rsync -raz data.tar yourUsername@borah-login.boisestate.edu:~/
5. [you@laptop:~]$ tar -cvzf data.tar.gz data
   [you@laptop:~]$ rsync -ra data.tar.gz yourUsername@borah-login.boisestate.edu:~/
Solution
1. scp will recursively copy the directory. This works, but without compression.
2. rsync -ra works like scp -r, but preserves file information like creation times. This is marginally better.
3. rsync -raz adds compression, which will save some bandwidth. If you have a strong CPU at both ends of the line, and you’re on a slow network, this is a good choice.
4. This command first uses tar to merge everything into a single file, then rsync -z to transfer it with compression. With this large number of files, metadata overhead can hamper your transfer, so this is a good idea.
5. This command uses tar -z to compress the archive, then rsync to transfer it. This may perform similarly to #4, but in most cases (for large datasets), it’s the best combination of high throughput and low latency (making the most of your time and network connection).
Key Points
Be careful how you use the login node.
Your data on the system is your responsibility.
Plan and test large data transfers.
It is often best to convert many files to a single archive file before transferring.