THECUBE Cluster

This is a private cluster.

Hardware

  • Head node: thecube.cac.cornell.edu.
  • Access modes: ssh (see the login example after this list)
  • OpenHPC v1.3.8 with CentOS 7.6
  • 32 compute nodes (c0001-c0032) with dual 8-core Intel E5-2680 CPUs @ 2.7 GHz, 128 GB of RAM
  • Hyperthreading is enabled on all nodes, i.e., each physical core is considered to consist of two logical CPUs.
  • THECUBE Cluster Status: Ganglia (http://thecube.cac.cornell.edu/ganglia/).
  • Submit help requests at https://www.cac.cornell.edu/help or by email to help@cac.cornell.edu; please include THECUBE in the subject line.
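
For example, you can log in from a terminal like this (a minimal sketch; replace <CAC username> with your own account name):

ssh <CAC username>@thecube.cac.cornell.edu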

File Systems

Home Directories

  • Path: ~

User home directories are located on an NFS export from the head node. Use your home directory (~) for archiving the data you wish to keep. Do NOT use this file system for computation, as bandwidth to the compute nodes is very limited and will quickly be overwhelmed by file I/O from large jobs.

Unless special arrangements are made, data in users' home directories are NOT backed up.

Scratch File System

The Lustre file system runs Intel Lustre 2.7:

  • Path: /scratch/<user name>

The scratch file system is a fast parallel file system. Use it as scratch space for your jobs, and copy the results you want to keep back to your home directory for safekeeping.
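
For example, to copy a finished results directory from scratch back to your home directory (a minimal sketch; the directory names are placeholders):

cp -r /scratch/<user name>/results ~/results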

Scheduler/Queues

  • The cluster scheduler is Slurm. All nodes are configured to be in the "normal" partition with no time limits. See the Slurm documentation page for details, and the example batch script after the table below.
  • Remember, hyperthreading is enabled on the cluster, so Slurm considers each physical core to consist of two logical CPUs.
  • Partitions (queues):

      Name     Description   Time Limit
      normal   all nodes     no limit
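
As a starting point, here is a minimal batch script (a hedged sketch, not a site-mandated template: the job name, resource counts, output file, and program are placeholders; see the Slurm documentation page for the full set of options):

#!/bin/bash
#SBATCH --job-name=test              # placeholder job name
#SBATCH --partition=normal           # the only partition on THECUBE
#SBATCH --nodes=1                    # placeholder node count
#SBATCH --ntasks=16                  # placeholder task count (Slurm counts logical CPUs, two per physical core)
#SBATCH --output=test.%j.out         # output file; %j expands to the job id

./myprogram                          # placeholder executable; the job starts in the submit directory

Submit the script with sbatch and check its status with squeue:

-bash-4.2$ sbatch myjob.sh
-bash-4.2$ squeue -u <user name>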

Software

Work with Environment Modules

Set up the working environment for each package using the module command. The module command will activate dependent modules if there are any.

To show currently loaded modules (these are loaded by the default system configuration):

-bash-4.2$ module list

Currently Loaded Modules:
  1) autotools   2) prun/1.3   3) gnu8/8.3.0   4) openmpi3/3.1.4   5) ohpc

To show all available modules (as of Sept 30, 2019):

-bash-4.2$ module avail

-------------------- /opt/ohpc/pub/moduledeps/gnu8-openmpi3 --------------------
   boost/1.70.0    netcdf/4.6.3    pnetcdf/1.11.1
   fftw/3.3.8      phdf5/1.10.5    py3-scipy/1.2.1

------------------------ /opt/ohpc/pub/moduledeps/gnu8 -------------------------
   R/3.5.3        mpich/3.3.1       openblas/0.3.5        py3-numpy/1.15.3
   hdf5/1.10.5    mvapich2/2.3.1    openmpi3/3.1.4 (L)

-------------------------- /opt/ohpc/pub/modulefiles ---------------------------
   autotools          (L)    intel/19.0.2.187        prun/1.3        (L)
   clustershell/1.8.1        julia/1.2.0             valgrind/3.15.0
   cmake/3.14.3              octave/5.1.0            vim/8.1
   gnu8/8.3.0         (L)    ohpc             (L)    visit/3.0.1
   gurobi/8.1.1              pmix/2.2.2

  Where:
   L:  Module is loaded

To load a module and verify:

-bash-4.2$ module load R/3.5.3 
-bash-4.2$ module list

Currently Loaded Modules:
  1) autotools   3) gnu8/8.3.0       5) ohpc             7) R/3.5.3
  2) prun/1.3    4) openmpi3/3.1.4   6) openblas/0.3.5

To unload a module and verify:

-bash-4.2$ module unload R/3.5.3
-bash-4.2$ module list

Currently Loaded Modules:
  1) autotools   2) prun/1.3   3) gnu8/8.3.0   4) openmpi3/3.1.4   5) ohpc


Install R Packages in Home Directory

If you need a new R package not installed on the system, you can install R packages in your home directory using these instructions.
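
A minimal sketch of what this usually looks like in an interactive session (the package name is only an example, and the linked instructions take precedence if they differ): when the system library is not writable, R offers to install into a personal library under your home directory.

-bash-4.2$ module load R/3.5.3
-bash-4.2$ R
> install.packages("ggplot2")    # example package; accept the prompt to use a personal library in your home directory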

Manage Modules in Your Python Virtual Environment

Both python2 (2.7) and python3 (3.6) are installed. Users can manage their own python environment (including installing needed modules) using virtual environments. Please see the documentation on virtual environments at https://packaging.python.org/guides/installing-using-pip-and-virtual-environments for details.

Create Virtual Environment

You can create as many virtual environments as needed, each in its own directory.

  • python2: python -m virtualenv <your virtual environment directory>
  • python3: python3 -m venv <your virtual environment directory>

Activate Virtual Environment

You need to activate a virtual environment before using it:

source <your virtual environment directory>/bin/activate
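
When you are finished working in the environment, you can leave it with:

deactivate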

Install Python Modules Using pip

After activating your virtual environment, you can install python modules into it:

  • It's always a good idea to update pip first:
pip install --upgrade pip
  • Install the module:
pip install <module name>
  • List the python modules installed in the environment:
pip list
  • Example: install tensorflow and keras like this:
-bash-4.2$ python3 -m venv tensorflow
-bash-4.2$ source tensorflow/bin/activate
(tensorflow) -bash-4.2$ pip install --upgrade pip
Collecting pip
  Using cached https://files.pythonhosted.org/packages/30/db/9e38760b32e3e7f40cce46dd5fb107b8c73840df38f0046d8e6514e675a1/pip-19.2.3-py2.py3-none-any.whl
Installing collected packages: pip
  Found existing installation: pip 18.1
    Uninstalling pip-18.1:
      Successfully uninstalled pip-18.1
Successfully installed pip-19.2.3
(tensorflow) -bash-4.2$ pip install tensorflow keras
Collecting tensorflow
  Using cached https://files.pythonhosted.org/packages/de/f0/96fb2e0412ae9692dbf400e5b04432885f677ad6241c088ccc5fe7724d69/tensorflow-1.14.0-cp36-cp36m-manylinux1_x86_64.whl
:
:
:
Successfully installed absl-py-0.8.0 astor-0.8.0 gast-0.2.2 google-pasta-0.1.7 grpcio-1.23.0 h5py-2.9.0 keras-2.2.5 keras-applications-1.0.8 keras-preprocessing-1.1.0 markdown-3.1.1 numpy-1.17.1 protobuf-3.9.1 pyyaml-5.1.2 scipy-1.3.1 six-1.12.0 tensorboard-1.14.0 tensorflow-1.14.0 tensorflow-estimator-1.14.0 termcolor-1.1.0 werkzeug-0.15.5 wheel-0.33.6 wrapt-1.11.2
(tensorflow) -bash-4.2$ pip list modules
Package              Version
-------------------- -------
absl-py              0.8.0  
astor                0.8.0  
gast                 0.2.2  
google-pasta         0.1.7  
grpcio               1.23.0 
h5py                 2.9.0  
Keras                2.2.5  
Keras-Applications   1.0.8  
Keras-Preprocessing  1.1.0  
Markdown             3.1.1  
numpy                1.17.1 
pip                  19.2.3 
protobuf             3.9.1  
PyYAML               5.1.2  
scipy                1.3.1  
setuptools           40.6.2 
six                  1.12.0 
tensorboard          1.14.0 
tensorflow           1.14.0 
tensorflow-estimator 1.14.0 
termcolor            1.1.0  
Werkzeug             0.15.5 
wheel                0.33.6 
wrapt                1.11.2 
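
If you want to be able to recreate the environment later, a common pattern (a sketch; requirements.txt is just a conventional file name) is:

pip freeze > requirements.txt        # record the installed modules and versions
pip install -r requirements.txt      # reinstall them later, e.g. in a fresh virtual environment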

Software List

Each entry below gives the software name, its installation path, and the module command(s) used to load it (a compile-and-run example follows the list):
Intel Compilers, MPI, and MKL
/opt/ohpc/pub/compiler/intel/2019/
  • module unload gnu8; module load intel/19.0.2.187; module load impi/2019.2.187
gcc 8.3
/opt/ohpc/pub/compiler/gcc/8.3.0/
  • module load gnu8/8.3.0 (Loaded by default)
Openmpi 3.1.4
/opt/ohpc/pub/mpi/openmpi3-gnu8/3.1.4
  • module load openmpi3/3.1.4 (Loaded by default)
python 3.9.4
/opt/ohpc/pub/utils/python/3.9.4
  • module load python/3.6.9
python 2.7.16
/opt/ohpc/pub/utils/python/2.7.16
  • module load python/2.7.16
perl 5.30.1
/opt/ohpc/pub/utils/perl/5.30.1
  • module load perl/5.30.1
Boost 1.70.0
/opt/ohpc/pub/libs/gnu8/openmpi3/boost/1.70.0
  • module load boost/1.70.0
cmake 3.14.3
/opt/ohpc/pub/utils/cmake/3.14.3
  • module load cmake/3.14.3
hdf5 1.10.5
/opt/ohpc/pub/libs/gnu8/hdf5/1.10.5
  • module load hdf5/1.10.5
octave 5.1.0
/opt/ohpc/pub/apps/octave/5.1.0
  • module load octave/5.1.0
netcdf 4.6.3
/opt/ohpc/pub/libs/gnu8/openmpi3/netcdf/4.6.3
  • module load netcdf/4.6.3
fftw 3.3.8
/opt/ohpc/pub/libs/gnu8/openmpi3/fftw/3.3.8
  • module load fftw/3.3.8
valgrind 3.15.0
/opt/ohpc/pub/utils/valgrind/3.15.0
  • module load valgrind/3.15.0
visit 3.0.1
/opt/ohpc/pub/apps/visit/3.0.1
  • module load visit/3.0.1
R 3.5.3
/opt/ohpc/pub/libs/gnu8/R/3.5.3
  • module load R/3.5.3
openblas 0.3.5
/opt/ohpc/pub/libs/gnu8/openblas/0.3.5
  • module load openblas/0.3.5
vim 8.1
/opt/ohpc/pub/apps/vim/8.1
  • module load vim/8.1
julia 1.2.0
/opt/ohpc/pub/compiler/julia/1.2.0
  • module load julia/1.2.0
gurobi 8.1.1
/opt/ohpc/pub/apps/gurobi/8.1.1
  • module load gurobi/8.1.1
  • Create a ~/gurobi.lic file with the following line:
TOKENSERVER=infrastructure2.tc.cornell.edu
  • gurobipy is installed in python-3.6.9. You can use it by loading that module.
remora 1.8.3
/opt/ohpc/pub/apps/remora/1.8.3
  • module load remora/1.8.3
GMAT R2019aBeta1
/opt/ohpc/pub/apps/GMAT/R2019aBeta1
  • module load GMAT/R2019aBeta1
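
For example, to build and run a small MPI program with the default gnu8/openmpi3 toolchain (a hedged sketch: hello.c, the job name, and the node/task counts are placeholders, and the mpirun invocation may differ if you switch MPI stacks):

-bash-4.2$ mpicc -O2 hello.c -o hello      # compile with the OpenMPI wrapper from the default modules
-bash-4.2$ cat mpi_job.sh
#!/bin/bash
#SBATCH --job-name=mpi_hello               # placeholder job name
#SBATCH --partition=normal
#SBATCH --nodes=2                          # placeholder node count
#SBATCH --ntasks-per-node=16               # placeholder tasks per node
#SBATCH --output=mpi_hello.%j.out

mpirun ./hello                             # OpenMPI picks up the Slurm allocation
-bash-4.2$ sbatch mpi_job.sh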

Help

  • THECUBE Cluster Status: Ganglia (http://thecube.cac.cornell.edu/ganglia/).
  • Submit questions or requests at https://www.cac.cornell.edu/help or by email to help@cac.cornell.edu; please include THECUBE in the subject line.