• Head node:
  • Access modes: ssh
  • OpenHPC 2.3 with Rocky Linux 8.4
  • 4 compute nodes (c0001-c0004). Each node has dual 64-core AMD EPYC 7713 processors, 1 TB of RAM, and 4 Nvidia A100 GPUs
  • Hyperthreading is enabled on all nodes, i.e., each physical core is considered to consist of two logical CPUs
  • Interconnect is 100 Gbps ethernet
  • Submit help requests via help or by sending an email to CAC Support; please include Altas in the subject line.

File Systems

Home Directories

  • Path: ~

    User home directories are located on an NFS export from the head node. Use your home directory (~) for archiving the data you wish to keep. Data in users' home directories are NOT backed up.

  • Globus Collection: Altas Cluster. See the File Transfer using Globus page for access instructions.


Scheduler

  • The cluster scheduler is Slurm. All nodes are configured to be in the "normal" partition with no time limits. See the Slurm documentation page for details; the Slurm Quick Start guide is a great place to start. See the Requesting GPUs section for information on how to request GPUs on compute nodes for your jobs (an example batch script follows the partition table below):
    1. --gres=gpu:2g.20gb:<number of MIG devices> or --gres=gpu:1g.10gb:1 to request MIG devices. The job will land on one of c0002, c0003, or c0004.
    2. --gres=gpu:a100:<number of GPUs> to request entire A100 GPUs. The job will land on node c0001.
  • Remember, hyperthreading is enabled on the cluster, so Slurm considers each physical core to consist of two logical CPUs.
  • Partitions (queues):

    Name     Description                                     Time Limit
    normal   all nodes, each node with 4 Nvidia A100 GPUs    no limit
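
Here is a minimal example batch script combining these options. It is a sketch: the job name, CPU count, and memory request are placeholder values, not site requirements. Note that --cpus-per-task counts logical CPUs, so 4 corresponds to 2 physical cores with hyperthreading enabled.

#!/bin/bash
#SBATCH --job-name=gpu-test          # placeholder job name
#SBATCH --partition=normal           # the cluster's only partition
#SBATCH --gres=gpu:2g.20gb:1         # one 2g.20gb MIG device (lands on c0002, c0003, or c0004)
#SBATCH --cpus-per-task=4            # logical CPUs; hyperthreading gives 2 per physical core
#SBATCH --mem=16G                    # placeholder memory request

nvidia-smi -L                        # list the GPU device(s) visible to this job

Submit the script with sbatch <script file>. Use --gres=gpu:a100:<number of GPUs> instead to request whole A100s on c0001.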


Work with Environment Modules

Set up the working environment for each package using the module command. The module command will activate dependent modules if there are any.

To show currently loaded modules (these are loaded by the default system configuration):

-bash-4.2$ module list

Currently Loaded Modules:
1) autotools   3) gnu9/9.3.0   5) libfabric/1.12.1   7) ohpc
2) prun/2.1    4) ucx/1.9.0    6) openmpi4/4.0.5

To show all available modules (as of August 5, 2021):

-bash-4.2$ module avail
    -------------------- /opt/ohpc/pub/moduledeps/gnu9-openmpi4 --------------------
       adios/1.13.1        netcdf-fortran/4.5.2    py3-mpi4py/3.0.3
       boost/1.75.0        netcdf/4.7.3            py3-scipy/1.5.1
       fftw/3.3.8          opencoarrays/2.9.2      quantum-espresso/6.8
       hypre/2.18.1        petsc/3.14.4            scalapack/2.1.0
       mfem/4.2            phdf5/1.10.6            slepc/3.14.2
       mumps/5.2.1         pnetcdf/1.12.1          superlu_dist/6.1.1
       netcdf-cxx/4.3.1    ptscotch/6.0.6          trilinos/13.0.0
    ------------------------ /opt/ohpc/pub/moduledeps/gnu9 -------------------------
       autotools    (L)    libfabric/1.12.1 (L)    os
       cmake/3.19.4        matlab/R2021a           prun/2.1        (L)
       cuda/11.5           nvhpc/21.9              ucx/1.9.0       (L)
       gnu9/9.3.0   (L)    ohpc             (L)    valgrind/3.16.1

    L:  Module is loaded

Use "module spider" to find all possible modules and extensions.
Use "module keyword key1 key2 ..." to search for all possible modules matching any of the "keys".

To load a module and verify:

-bash-4.2$ module load matlab/R2021a
-bash-4.2$ module list

Currently Loaded Modules:
1) autotools   3) gnu9/9.3.0   5) libfabric/1.12.1   7) ohpc
2) prun/2.1    4) ucx/1.9.0    6) openmpi4/4.0.5     8) matlab/R2021a

To unload a module and verify:

-bash-4.2$ module unload matlab/R2021a
-bash-4.2$ module list

Currently Loaded Modules:
 1) autotools   3) gnu9/9.3.0   5) libfabric/1.12.1   7) ohpc
 2) prun/2.1    4) ucx/1.9.0    6) openmpi4/4.0.5

Manage Modules in Your Python Virtual Environment

python3 (version 3.6) is installed. Users can manage their own python environment (including installing needed modules) using virtual environments. Please see the Python documentation on virtual environments for details.

Create Virtual Environment

You can create as many virtual environments, each in its own directory, as needed. E.g., for python3:

python3 -m venv <your virtual environment directory>

Activate Virtual Environment

You need to activate a virtual environment before using it:

source <your virtual environment directory>/bin/activate
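
When you are done working in the environment, you can leave it with:

deactivate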

Install Python Modules Using pip

After activating your virtual environment, you can now install python modules for the activated environment:

  • It's always a good idea to update pip first:

    pip install --upgrade pip

  • Install the module:

    pip install <module name>

  • List installed python modules in the environment:

    pip list

  • Example: install tensorflow and keras like this:

    -bash-4.2$ python3 -m venv tensorflow
    -bash-4.2$ source tensorflow/bin/activate
    (tensorflow) -bash-4.2$ pip install --upgrade pip
    Collecting pip
      Using cached
    Installing collected packages: pip
      Found existing installation: pip 18.1
        Uninstalling pip-18.1:
          Successfully uninstalled pip-18.1
    Successfully installed pip-19.2.3
    (tensorflow) -bash-4.2$ pip install tensorflow keras
    Collecting tensorflow
      Using cached
      :
      :
    Successfully installed absl-py-0.8.0 astor-0.8.0 gast-0.2.2 google-pasta-0.1.7 grpcio-1.23.0 h5py-2.9.0 keras-2.2.5 keras-applications-1.0.8 [...]
    (tensorflow) -bash-4.2$ pip list
    Package              Version
    -------------------- -------
    absl-py              0.8.0
    astor                0.8.0
    gast                 0.2.2
    google-pasta         0.1.7
    grpcio               1.23.0
    h5py                 2.9.0
    Keras                2.2.5
    Keras-Applications   1.0.8
    Keras-Preprocessing  1.1.0
    Markdown             3.1.1
    numpy                1.17.1
    pip                  19.2.3
    protobuf             3.9.1
    PyYAML               5.1.2
    scipy                1.3.1
    setuptools           40.6.2
    six                  1.12.0
    tensorboard          1.14.0
    tensorflow           1.14.0
    tensorflow-estimator 1.14.0
    termcolor            1.1.0
    Werkzeug             0.15.5
    wheel                0.33.6
    wrapt                1.11.2
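
To use a virtual environment in a batch job, activate it inside the job script before invoking Python. Below is a minimal sketch, assuming the tensorflow environment created above lives in your home directory; train.py is a placeholder for your own script:

#!/bin/bash
#SBATCH --partition=normal
#SBATCH --gres=gpu:1g.10gb:1         # one MIG device; see the Scheduler section above

source ~/tensorflow/bin/activate     # activate the virtual environment
python train.py                      # placeholder for your own Python script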

Software List

Software              Path                                       Notes
GCC 9.3               /opt/ohpc/pub/compiler/gcc/9.3.0/          module load gnu9/9.3.0 (loaded by default)
Open MPI 4.0.5        /opt/ohpc/pub/mpi/openmpi4-gnu9/4.0.5      module load openmpi4/4.0.5 (loaded by default)
Matlab R2021a         /opt/ohpc/pub/apps/matlab/R2021a           module load matlab/R2021a
Nvidia HPC SDK 21.9   /opt/ohpc/pub/compiler/nvhpc/21.9          module load nvhpc/21.9; includes nvfortran for compiling CUDA-enabled Fortran code
ollama 0.1.32         /opt/ohpc/pub/apps/ollama/0.1.32           module load ollama/0.1.32
Python 3.12.0         /opt/ohpc/pub/utils/python/3.12.0          module load python/3.12.0
Quantum Espresso 6.8  /opt/ohpc/pub/apps/quantum-espresso/6.8    module load quantum-espresso/6.8
WINE 6.0.2            /opt/ohpc/pub/apps/wine/6.0.2              module load wine/6.0.2
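
As a quick illustration of the Nvidia HPC SDK entry, a CUDA Fortran source file can be compiled with nvfortran; saxpy.cuf below is a placeholder file name (the .cuf extension enables CUDA Fortran):

-bash-4.2$ module load nvhpc/21.9
-bash-4.2$ nvfortran -o saxpy saxpy.cuf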


Submit questions or requests at help or by sending email to CAC Support. Please include Altas in the subject line.