AIDA Cluster

AIDA General Information

  • AIDA is a private cluster; access is restricted to members of the bs54_0001 and rab38_0001 groups.
  • Head node: aida.cac.cornell.edu (access via ssh)
  • Running Rocky Linux 8.5 and built with OpenHPC 2
  • 12 GPU nodes
    • 6 with V100 GPUs (c0017-c0022)
    • 6 with A100 GPUs (c0071-c0076)
  • Many nodes from the former Atlas2 cluster

Hardware

  • All GPU nodes support vector extensions up to AVX-512
  • All nodes have hyperthreading turned on.

c00[17-22]:

    2x 18-core Intel Xeon Skylake 6154 CPUs, 3.0 GHz base clock (turbo up to 3.7 GHz)

c0017: 5x NVIDIA Tesla V100 (16 GB)

    Memory: 754 GB
    swap: 187 GB
    /tmp: 700 GB

c00[18-21]: 5x NVIDIA Tesla V100 (16 GB)

    Memory: 376 GB
    swap: 187 GB
    /tmp: 700 GB

c0022: 2x NVIDIA Tesla V100 (16 GB)

    Memory: 1.5 TB
    swap: 187 GB
    /tmp: 100 GB
    /scratch: 1 TB

c00[71-76]:

    2x 28-core Intel Xeon Ice Lake Gold 6348 CPUs, 2.6 GHz base clock
    4x NVIDIA Tesla A100 (80 GB)
    Memory: 1 TB
    swap: 187 GB
    /tmp: 3 TB

Networking

  • The 12 GPU nodes have InfiniBand.
  • The older Atlas2 nodes have gigabit Ethernet.

File Systems

Home Directories

  • Path: ~
  • Users' home directories are located on an NFS export from the AIDA head node. Use your home directory (~) for archiving the data you wish to keep. Data in users' home directories is NOT backed up.

BeeGFS

  • Parallel File System
  • Path: /mnt/beegfs
  • Directories: bulk and fast
  • All users have access to the BeeGFS file systems.
  • Users should copy active files to, and run their codes from, the BeeGFS directories; a typical workflow is sketched below.
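
For example, a minimal sketch of staging a run on BeeGFS (the per-user subdirectory /mnt/beegfs/bulk/$USER, the project name, and the batch script name are only illustrations; use whatever directory layout applies to your group):

$ mkdir -p /mnt/beegfs/bulk/$USER/myproject
$ cp -r ~/myproject/inputs /mnt/beegfs/bulk/$USER/myproject/
$ cd /mnt/beegfs/bulk/$USER/myproject
$ sbatch myjob.sh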

Scheduler/Partitions (queues)

  • The cluster scheduler is Slurm.
  • See Slurm documentation page for details.
  • Some of the NVIDIA A100s have MIG configured.
  • To list the configured resources available for scheduling:
$ sinfo -o "%20N  %10c  %10m  %25f  %10G "
  • See the Requesting GPUs section for information on how to request GPUs on compute nodes for your jobs.
    1. --gres=gpu:2g.20gb:<number of MIG devices> or --gres=gpu:1g.10gb:1 to request MIG devices. The job will land on one of the A100 nodes with MIG configured.
    2. --gres=gpu:a100:1 to request entire A100 GPUs. The job will land on an A100 node with no MIG.
    3. --gres=gpu:v100:1 to request a single V100 GPU. The job will land on a V100 node.
  • Remember, hyperthreading is enabled on the cluster, so Slurm considers each physical core to consist of two logical CPUs.
  • You can ensure that each MPI task uses a full physical core by specifying -c 2 in your Slurm job; see the example batch script after the partition table below.
  • Partitions (queues):
    Name     Description   Time Limit
    normal   xxxxxxxxxx    no limit
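
As an example, here is a minimal sketch of a batch script that requests one entire A100 GPU and uses -c 2 so each MPI task gets a full physical core. The job name, task count, time limit, working directory, and application name are placeholders; adjust the --gres line as described above to request MIG slices or V100s instead.

#!/bin/bash
#SBATCH --job-name=gpu-test           # placeholder job name
#SBATCH --partition=normal
#SBATCH --nodes=1
#SBATCH --ntasks=4                    # placeholder number of MPI tasks
#SBATCH --cpus-per-task=2             # -c 2: one full physical core per task (hyperthreading)
#SBATCH --gres=gpu:a100:1             # or gpu:v100:1, gpu:2g.20gb:1, gpu:1g.10gb:1
#SBATCH --time=01:00:00               # placeholder time limit

module load gnu9/9.4.0 openmpi4/4.1.1

# Run from a BeeGFS directory, as recommended above (illustrative path).
cd /mnt/beegfs/bulk/$USER/myproject
srun ./my_application                 # placeholder executable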

Software

Work with Environment Modules

Set up the working environment for each software package using the module command. The module command will activate dependent modules if there are any.

To show currently loaded modules:
$ module list

To show all available modules:
$ module avail

To load a module:
$ module load <software>

To unload a module:
$ module unload <software>

To swap compilers:
$ module swap gnu9 intel

Use "module spider" to find all possible modules and extensions.
Use "module keyword key1 key2 ..." to search for all possible modules matching
any of the "keys".

Using Python Virtual Environments

Python 3 (3.6) is installed as python3. Users can manage their own Python environment (including installing needed modules) using virtual environments. Please see the documentation on virtual environments at python.org for details.

Create Virtual Environment

You can create as many virtual environments, each in their own directory, as needed.

  • python3: python3 -m venv <your virtual environment directory>

Activate Virtual Environment

You need to activate a virtual environment before using it:

source <your virtual environment directory>/bin/activate

Install Python Modules Using pip

After activating your virtual environment, you can now install python modules for the activated environment:

  • It's always a good idea to update pip first:
pip install --upgrade pip
  • Install the module:
pip install <module name>
  • List installed python modules in the environment:
pip list
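
Putting these steps together, a minimal end-to-end sketch (the directory ~/venvs/myenv and the numpy package are only examples):

$ python3 -m venv ~/venvs/myenv
$ source ~/venvs/myenv/bin/activate
(myenv) $ pip install --upgrade pip
(myenv) $ pip install numpy
(myenv) $ pip list
(myenv) $ deactivate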

Software List

Software              Path                                       Notes
GCC 9.4               /opt/ohpc/pub/compiler/gcc/9.4.0/          module load gnu9/9.4.0 (loaded by default)
Open MPI 4.1.1        /opt/ohpc/pub/mpi/openmpi4-gnu9/4.1.1      module load openmpi4/4.1.1 (loaded by default)
Quantum Espresso 6.8  /opt/ohpc/pub/apps/quantum-espresso/6.8    module load quantum-espresso/6.8
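
As an illustration, a minimal sketch of a batch script that runs Quantum Espresso's pw.x executable under MPI. The job name, task count, time limit, working directory, and input/output file names are placeholders:

#!/bin/bash
#SBATCH --job-name=qe-scf             # placeholder job name
#SBATCH --partition=normal
#SBATCH --nodes=1
#SBATCH --ntasks=18                   # placeholder number of MPI tasks
#SBATCH --cpus-per-task=2             # one full physical core per task
#SBATCH --time=02:00:00               # placeholder time limit

module load quantum-espresso/6.8      # dependent modules, if any, are loaded automatically

cd /mnt/beegfs/bulk/$USER/qe-run      # illustrative BeeGFS working directory
srun pw.x -in pw.in > pw.out          # pw.in/pw.out are placeholder file names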

Help

  • Submit questions or requests through the CAC help request system or by sending email to help@cac.cornell.edu. Please include AIDA in the subject line.