THECUBE Cluster

This is a private cluster.

Hardware

  • Head node: thecube.cac.cornell.edu.
  • access modes: ssh
  • OpenHPC v1.3.8 with CentOS 7.6
  • 32 compute nodes with Dual 8-core E5-2680 CPUs @ 2.7 GHz, 128 GB of RAM
  • THECUBE Cluster Status: Ganglia.
  • Submit HELP requests: help, or send an email to CAC support; please include THECUBE in the subject line.
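
For example, to log in to the head node with ssh (substitute your own CAC account name):

ssh <CAC username>@thecube.cac.cornell.edu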

File Systems

Home Directories

  • Path: ~

User home directories are located on an NFS export from the head node. Use your home directory (~) to archive the data you wish to keep. Do NOT use this file system for computation, as bandwidth to the compute nodes is very limited and will quickly be overwhelmed by file I/O from large jobs.

Unless special arrangements are made, data in users' home directories are NOT backed up.

Scratch File System

The LUSTRE file system runs Intel Lustre 2.7:

  • Path: /scratch/<user name>

The scratch file system is a fast parallel file system. Use this file system for scratch space for your jobs. Copy the results you want to keep back to your home directory for safe keeping.
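
For example, a job might run in a scratch directory and copy its results back afterward (the directory and file names here are only illustrative):

mkdir -p /scratch/$USER/myjob     # stage the job in the scratch file system
cd /scratch/$USER/myjob
# ... run your job here ...
cp -r results ~/myjob-results     # copy what you want to keep to your home directory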

Scheduler/Queues

  • The cluster scheduler is Slurm. All nodes are configured to be in the "normal" partition with no time limits. See the Slurm documentation page for details.
  • Partitions (queues):
  Name     Description   Time Limit
  normal   all nodes     no limit
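
As a sketch, a minimal batch script for the "normal" partition could look like this (the job name and task count are only illustrative):

#!/bin/bash
#SBATCH --job-name=test
#SBATCH --partition=normal
#SBATCH --nodes=1
#SBATCH --ntasks=16

# replace with your actual program
srun hostname

Submit the script (saved here as job.sh) from the head node:

-bash-4.2$ sbatch job.sh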

Software

Working with Environment Modules

Set up the working environment for each package using the module command. The module command will activate dependent modules if there are any.

To show currently loaded modules (these are loaded by default by the system configuration):

-bash-4.2$ module list

Currently Loaded Modules:
  1) autotools   2) prun/1.3   3) gnu8/8.3.0   4) openmpi3/3.1.4   5) ohpc

To show all available modules (as of September 2019):

-bash-4.2$ module avail

-------------------- /opt/ohpc/pub/moduledeps/gnu8-openmpi3 --------------------
   boost/1.70.0    netcdf/4.6.3    pnetcdf/1.11.1
   fftw/3.3.8      phdf5/1.10.5    py3-scipy/1.2.1

------------------------ /opt/ohpc/pub/moduledeps/gnu8 -------------------------
   R/3.5.3        mpich/3.3.1       openblas/0.3.5        py3-numpy/1.15.3
   hdf5/1.10.5    mvapich2/2.3.1    openmpi3/3.1.4 (L)

-------------------------- /opt/ohpc/pub/modulefiles ---------------------------
   autotools          (L)    intel/19.0.2.187        prun/1.3        (L)
   clustershell/1.8.1        julia/1.2.0             valgrind/3.15.0
   cmake/3.14.3              octave/5.1.0            vim/8.1
   gnu8/8.3.0         (L)    ohpc             (L)    visit/3.0.1
   gurobi/8.1.1              pmix/2.2.2

  Where:
   L:  Module is loaded

To load a module and verify:

-bash-4.2$ module load R/3.5.3 
-bash-4.2$ module list

Currently Loaded Modules:
  1) autotools   3) gnu8/8.3.0       5) ohpc             7) R/3.5.3
  2) prun/1.3    4) openmpi3/3.1.4   6) openblas/0.3.5

To unload a module and verify:

-bash-4.2$ module unload R/3.5.3
-bash-4.2$ module list

Currently Loaded Modules:
  1) autotools   2) prun/1.3   3) gnu8/8.3.0   4) openmpi3/3.1.4   5) ohpc

Managing Modules in Your Python Virtual Environment

Both python2 (2.7) and python3 (3.6) are installed. Users can manage their own python environment (including installing needed modules) using virtual environments. Please see the documentation on virtual environments on python.org for details.

Create Virtual Environment

You can create as many virtual environments as needed, each in its own directory.

  • python2: python -m virtualenv <your virtual environment directory>
  • python3: python3 -m venv <your virtual environment directory>
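
For example, to create a python3 environment in a directory named ~/venvs/myproject (the directory name is only an example):

-bash-4.2$ python3 -m venv ~/venvs/myproject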

Activate Virtual Environment

You need to activate a virtual environment before using it:

source <your virtual environment directory>/bin/activate
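
Continuing the example above, activating the environment prepends its name to your prompt; run deactivate when you are finished with it:

-bash-4.2$ source ~/venvs/myproject/bin/activate
(myproject) -bash-4.2$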

Install Python Modules Using pip

After activating your virtual environment, you can install python modules into the activated environment:

  • It's always a good idea to update pip first:
pip install --upgrade pip
  • Install the module:
pip install <module name>
  • List installed python modules in the environment:
pip list
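
Putting the steps together, a typical session in an activated environment might look like this (numpy is just an example module):

(myproject) -bash-4.2$ pip install --upgrade pip
(myproject) -bash-4.2$ pip install numpy
(myproject) -bash-4.2$ pip list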

Software List

Each entry below gives the installation path, the module load command, and any usage notes.

Intel Compilers (including MKL but NOT IMPI compilers)
  • Path: /opt/ohpc/pub/compiler/intel/2019/
  • module load intel/19.0.2.187
  • IMPI compilers (mpiicc, etc.) are not included; IMPI runtimes are included.

gcc 8.3
  • Path: /opt/ohpc/pub/compiler/gcc/8.3.0/
  • module load gnu8/8.3.0 (loaded by default)

Openmpi 3.1.4
  • Path: /opt/ohpc/pub/mpi/openmpi3-gnu8/3.1.4
  • module load openmpi3/3.1.4 (loaded by default)

Boost 1.70.0
  • Path: /opt/ohpc/pub/libs/gnu8/openmpi3/boost/1.70.0
  • module load boost/1.70.0

cmake 3.14.3
  • Path: /opt/ohpc/pub/utils/cmake/3.14.3
  • module load cmake/3.14.3

hdf5 1.10.5
  • Path: /opt/ohpc/pub/libs/gnu8/hdf5/1.10.5
  • module load hdf5/1.10.5

octave 5.1.0
  • Path: /opt/ohpc/pub/apps/octave/5.1.0
  • module load octave/5.1.0

netcdf 4.6.3
  • Path: /opt/ohpc/pub/libs/gnu8/openmpi3/netcdf/4.6.3
  • module load netcdf/4.6.3

fftw 3.3.8
  • Path: /opt/ohpc/pub/libs/gnu8/openmpi3/fftw/3.3.8
  • module load fftw/3.3.8

valgrind 3.15.0
  • Path: /opt/ohpc/pub/utils/valgrind/3.15.0
  • module load valgrind/3.15.0

visit 3.0.1
  • Path: /opt/ohpc/pub/apps/visit/3.0.1
  • module load visit/3.0.1

R 3.5.3
  • Path: /opt/ohpc/pub/libs/gnu8/R/3.5.3
  • module load R/3.5.3

openblas 0.3.5
  • Path: /opt/ohpc/pub/libs/gnu8/openblas/0.3.5
  • module load openblas/0.3.5

vim 8.1
  • Path: /opt/ohpc/pub/apps/vim/8.1
  • module load vim/8.1

julia 1.2.0
  • Path: /opt/ohpc/pub/compiler/julia/1.2.0
  • module load julia/1.2.0

gurobi 8.1.1
  • Path: /opt/ohpc/pub/apps/gurobi/8.1.1
  • module load gurobi/8.1.1
  • Create a ~/gurobi.lic file with the following line:
    TOKENSERVER=infrastructure2.tc.cornell.edu
  • To use gurobi in your python code:
    1. module load gurobi/8.1.1
    2. Activate your python virtual environment.
    3. python /opt/ohpc/pub/apps/gurobi/8.1.1/setup.py install
    4. Now you can import the gurobipy module in your python code.
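
As a sketch, the full gurobi setup might look like this (the virtual environment path is only an example):

-bash-4.2$ echo "TOKENSERVER=infrastructure2.tc.cornell.edu" > ~/gurobi.lic
-bash-4.2$ module load gurobi/8.1.1
-bash-4.2$ source ~/venvs/myproject/bin/activate
(myproject) -bash-4.2$ python /opt/ohpc/pub/apps/gurobi/8.1.1/setup.py install
(myproject) -bash-4.2$ python -c "import gurobipy"    # no output means the import succeeded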

Help