==Quick Tutorial==
 
The batch system treats each core of a node as a "virtual processor." That means the nodes keyword in batch scripts refers to the number of cores that are scheduled.
 
 
===Select your default MPI===
 
There are several versions of MPI on the Marvin cluster. Use the following commands to view or change your default MPI.
 
 
:* mpi-selector --query  -> shows your current default MPI
:* mpi-selector --list  -> shows all available MPI installations
:* mpi-selector --set <mpi installation>  -> sets your default MPI. Note: you must log out and log back in for the change to take effect.
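A typical session might look like the following; the installation name shown is only a placeholder, so substitute one of the names reported by --list.

<source lang="bash">
# Show the current default MPI (if any).
mpi-selector --query

# List every MPI installation registered with mpi-selector.
mpi-selector --list

# Select one of the listed installations as the default.
# "openmpi_gcc-1.4" is a placeholder name; use a name from --list instead.
mpi-selector --set openmpi_gcc-1.4

# Log out and log back in so the new default takes effect.
</source>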
 
 
===Running an MPI Job on the Whole Cluster===
 
:* This example assumes /opt/openmpi/ is the default; the mpiexec options may change depending on your selected MPI.
:* First use showq to see how many cores are available; the count may be fewer than 1152 if a node is down.
 
 
<source lang="bash">
 
#!/bin/sh
 
#PBS -l nodes=96:ppn=12
# Note: the option above is -l (lowercase L), requesting 96 nodes with 12 processors per node.
 
#PBS -N test
 
#PBS -j oe
 
#PBS -S /bin/bash
 
 
set -x
 
cd "$PBS_O_WORKDIR"
 
 
# Replace <executable> with the program you wish to run.
mpiexec --hostfile "$PBS_NODEFILE" <executable>
 
</source>
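Save the script to a file and submit it with qsub; as noted above, showq reports how busy the cluster is. The script name below is only an example.

<source lang="bash">
# Submit the batch script; qsub prints the job ID on success.
qsub whole_cluster.sh

# Check the queue to see how many cores are in use and where the job sits.
showq
</source>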
 
 
===Running an MPI Job using 12 Tasks Per Node===
 
Because the nodes have 12 physical cores, you may want to limit jobs to 12 tasks per node. The node file lists each node once, so make a copy with each node listed 12 times and hand that copy to MPI.
 
 
<source lang="bash">
 
#!/bin/sh
 
#PBS -l nodes=48
 
#PBS -N test
 
#PBS -j oe
 
#PBS -S /bin/bash
 
 
set -x
 
cd "$PBS_O_WORKDIR"
 
 
# Construct a copy of the hostfile with only 12 entries per node.
 
# MPI can use this to run 12 tasks on each node.
 
uniq "$PBS_NODEFILE"|awk '{for(i=0;i<12;i+=1) print}'>nodefile.12way
 
 
# To run 12 tasks per node on 4 nodes, we requested 48 cores (nodes=48 above).
# "ring" is an example program; replace it with your own executable.
mpiexec --hostfile nodefile.12way ring -v
 
</source>
 
 
 
===Running Many Copies of a Serial Job===
 
To run 30 separate instances of the same program, use the scheduler's task array feature via the "-t" option. The "nodes" parameter here refers to a single core.
 
 
<source lang="bash">
 
#!/bin/sh
 
#PBS -l nodes=1
# Note: the option above is -l (lowercase L); each array task gets a single core.
#PBS -t 1-30
# The -t range creates 30 array tasks, numbered 1 through 30.
 
#PBS -N test
 
#PBS -j oe
 
#PBS -S /bin/bash
 
 
set -x
 
cd "$PBS_O_WORKDIR"
 
echo Run my job.
 
</source>
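Each instance can find its own index in the PBS_ARRAYID environment variable and use it to pick a distinct input; a minimal sketch follows, where the program name and the input/output naming are placeholders.

<source lang="bash">
#!/bin/sh
#PBS -l nodes=1
#PBS -t 1-30
#PBS -N test
#PBS -j oe
#PBS -S /bin/bash

cd "$PBS_O_WORKDIR"

# PBS_ARRAYID holds this task's index (1 through 30), so each task
# reads its own input file and writes its own output file.
./my_program "input.${PBS_ARRAYID}" > "output.${PBS_ARRAYID}"
</source>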
 
 
When you start jobs this way, the separate jobs will pile onto the nodes one per core, like a box of hamsters.
 
 
===Running on a specific node===
 
To run on a specific node, use the host= option:
 
 
<source lang="bash">
 
#!/bin/sh
 
#PBS -l host=compute-3-16
# Note: the option above is -l (lowercase L); compute-3-16 is the requested node.
 
#PBS -N test
 
#PBS -j oe
 
#PBS -S /bin/bash
 
 
 
set -x
 
cd "$PBS_O_WORKDIR"
 
echo Run my job.
 
</source>
 
===Running in the viz queue ===
 
To run in the viz queue, use the -q option:
 
 
<source lang="bash">
 
#!/bin/sh
 
#PBS -l nodes=1
# Note: the option above is -l (lowercase L).
 
#PBS -N test
 
#PBS -j oe
 
#PBS -S /bin/bash
 
#PBS -q viz
 
 
 
set -x
 
cd "$PBS_O_WORKDIR"
 
echo Run my job.
 
</source>
 
===Running an interactive job===
 
From the command line, request an interactive session with:

 qsub -l nodes=1 -I
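The -I flag can be combined with the same resource and queue options used in batch scripts; for example, to request an interactive shell on a node in the viz queue (assuming that queue accepts interactive jobs):

<source lang="bash">
# Request one node from the viz queue and open an interactive shell on it.
qsub -I -q viz -l nodes=1
</source>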
 


==General Information==

This is a private cluster.

==Hardware==

:* Head node: marvin.cac.cornell.edu
:* Access modes: ssh
:* OpenHPC v1.3.8 with CentOS 7.8
:* 86 compute nodes with dual 6-core X5670 CPUs @ 3 GHz, hyperthreaded, and 24 GB of RAM; 3 high-memory nodes with 96 GB of RAM
:* Cluster status: Ganglia
:* See "Why use a temporary directory?"
:* Submit help requests through the Help page or by sending email to help@cac.cornell.edu

==File Systems==

===Home Directories===

:* Path: ~

User home directories are hosted on the head node and exported to the compute nodes via NFS. Unless special arrangements are made, data in user home directories are NOT backed up.

===Globus Access===

User home directories can be accessed through Globus. Under the "File Manager" tab in the Globus web GUI:

# Access the "cac#marvin" endpoint.
# Authenticate with your CAC user name and password if prompted.
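If you prefer the command line, the Globus CLI (if you have it installed) can browse the same endpoint once you know its UUID; a rough sketch, with the endpoint UUID left as a placeholder:

<source lang="bash">
# Log in to Globus (opens a browser-based authentication flow).
globus login

# Search for the cac#marvin endpoint to find its UUID.
globus endpoint search "cac#marvin"

# List the contents of your home directory on that endpoint.
# Replace ENDPOINT_UUID with the UUID reported by the search above.
globus ls "ENDPOINT_UUID:/~/"
</source>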

==Scheduler/Queues==

:* The cluster scheduler is Slurm. See the Slurm documentation page for details.
:* Note: hyperthreading is enabled on the cluster, so Slurm considers each physical core to consist of two logical CPUs. See the Slurm options section for the correct options for your job, and the example script below the partition table.
:* Partitions:

{| class="wikitable"
! Name !! Description !! Time Limit
|-
| viz || 3 visualization Ensight servers, each with 96 GB RAM || none
|-
| normal (default) || all nodes except those in the viz queue || none
|-
| all || all cluster nodes || none
|}
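Because each physical core appears to Slurm as two logical CPUs, a job that wants one task per physical core must say so explicitly. The sketch below shows one way to do that; the executable name is a placeholder, and the Slurm options page has the settings recommended for this cluster.

<source lang="bash">
#!/bin/bash
#SBATCH -J test                   # job name
#SBATCH -p normal                 # default partition
#SBATCH --nodes=4                 # number of nodes
#SBATCH --ntasks-per-node=12      # one task per physical core (12 cores per node)
#SBATCH --hint=nomultithread      # do not place tasks on hyperthread siblings

# "my_mpi_program" is a placeholder for your own executable.
mpiexec ./my_mpi_program
</source>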

==Software==

{| class="wikitable"
! Software !! Path !! Notes
|-
| *GNU Compilers 8.3.0 || /opt/ohpc/pub/compiler/gcc/8.3.0 || module load gnu8/8.3.0
|-
| *openmpi 3.1.4 || /opt/ohpc/pub/mpi/openmpi3-gnu8/3.1.4 or /opt/ohpc/pub/mpi/openmpi3-intel/3.1.4 || module load openmpi3/3.1.4
|-
| Intel Parallel Studio XE 2020.1.217 || /opt/ohpc/pub/compiler/intel/2020/ || module swap gnu8 intel/20.1.2017
|-
| Intel MPI 2020.1.217 || /opt/ohpc/pub/compiler/intel/2020/compilers_and_libraries_2020.1.217/linux/mpi || module load impi/2020.1.217
|-
| mvapich2 2.3.2 || /opt/ohpc/pub/mpi/mvapich2-gnu/2.3.2 or /opt/ohpc/pub/mpi/mvapich2-intel/2.3.2 || module load mvapich2/2.3.2
|-
| fftw 3.3.8 || /opt/ohpc/pub/libs/gnu8/openmpi3/fftw/3.3.8 or /opt/ohpc/pub/libs/gnu8/mvapich2/fftw/3.3.8 || module load fftw/3.3.8
|-
| hypre 2.18.1 || /opt/ohpc/pub/libs/gnu8/openmpi3/hypre/2.18.1, /opt/ohpc/pub/libs/gnu8/impi/hypre/2.18.1, /opt/ohpc/pub/libs/intel/openmpi3/hypre/2.18.1, or /opt/ohpc/pub/libs/intel/impi/hypre/2.18.1 || module load hypre/2.18.1
|-
| ensight 10.1.4a || /opt/ohpc/pub/apps/ensight/10.1.4a || module load ensight/10.1.4a
|-
| VisIt 3.0.1 || /opt/ohpc/pub/apps/visit/3.0.1/bin || module load visit/3.0.1
|-
| python 3.8.3 || /opt/ohpc/pub/utils/python/3.8.3 || module load python/3.8.3
|}
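For example, to build an MPI program with the GNU toolchain listed above (the source file name is a placeholder):

<source lang="bash">
# Load the GNU compilers and the matching Open MPI build.
module load gnu8/8.3.0
module load openmpi3/3.1.4

# Compile an MPI program; hello.c stands in for your own source file.
mpicc -O2 -o hello hello.c
</source>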