ASTRA General Information
- ASTRA is a private cluster with restricted access to the na346_0001 group.
- Rocks 6.0 with CentOS 6.2
- Cluster Status: Ganglia.
- Submit HELP requests via help or by sending email to: email@example.com
- ASTRA has one head node (astra.cac.cornell.edu) and 40 compute nodes (compute-1-[1-40]).
- Each compute node:
- 32GB of RAM, 883GB /tmp
- 12 cores, Intel(R) Xeon(R) CPU E5-2630 0 @ 2.30GHz
- 12/5/12: hyperthreading was turned off on all compute nodes.
- The head node, astra, contains:
- the cluster scheduler Maui combined with TORQUE as the resource manager
- Rocks cluster server deployment software & database
- /home (11TB) directory server (NFS-exported to all cluster nodes)
- /tmp (39GB)
How to log in and create your first job: see Getting Started on the astra cluster.
GROMACS and Gaussian
These software packages are installed in /opt. To run them, copy (cp -r) the relevant top-level directory from /opt to your home directory; you can then modify any auxiliary files as needed.
To get the best performance, arrange for your batch job to do all of its I/O on the local /tmp of a compute node: have your batch script copy all input files to $TMPDIR (or to another directory in /tmp created by your script) at the beginning of the job, then copy the output files back to your home directory at the end.
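As a rough illustration of this staging pattern, here is a minimal TORQUE batch script sketch; the job name, the input/output file names (input.dat, output.dat), and the program name (my_program) are placeholders to adapt to your own job:

    #!/bin/bash
    #PBS -N staging_example
    #PBS -q default
    #PBS -l nodes=1:ppn=12,walltime=24:00:00
    # Stage input from the home directory to the node-local /tmp
    cd $TMPDIR
    cp $HOME/myjob/input.dat .
    # Run against the local copies so all I/O stays on the compute node
    $HOME/myjob/my_program input.dat > output.dat
    # Copy results back to the home directory before the job exits
    cp output.dat $HOME/myjob/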
- A set of test files can be found in /opt/gromacs/share/gromacs/tutor/water.
- Remember to put this line in your batch script (or in .profile) before running GROMACS (see the example lines after this list): export LD_LIBRARY_PATH=~/gromacs/lib:$LD_LIBRARY_PATH
- Your private copy must not be accessible to others (per the terms of the license), so fix your permissions like this to avoid an error: chmod -R o-rwx ~/g09-C.01
- In your batch job, change the location of Gaussian to point to your private copy: export GAUSS_EXEDIR=/home/fs01/myuserid/g09-C.01
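Putting the settings above into a batch script would look roughly like the following sketch; the copy locations (~/gromacs, ~/g09-C.01, /home/fs01/myuserid) are taken from the examples above and should be adjusted to your own home directory:

    # GROMACS: make your private copy's libraries visible to the loader
    export LD_LIBRARY_PATH=~/gromacs/lib:$LD_LIBRARY_PATH

    # Gaussian: fix permissions once (this can also be done interactively),
    # then point the job at your private copy
    chmod -R o-rwx ~/g09-C.01
    export GAUSS_EXEDIR=/home/fs01/myuserid/g09-C.01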
How to run jobs
At this time astra has only 2 queues (an example submission command follows the list). Each node has 32GB RAM and 883GB /tmp:
- default (also the default queue if no queue is specified)
  - Nodes: compute-1-[3-40]
  - Walltime limit: 336 hours
- Nodes: compute-1-[1-2]
  - Walltime limit: 12 hours
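For example, a job script could be submitted to the default queue with qsub and monitored with qstat; myjob.sh is a placeholder name for your own batch script:

    # Request one 12-core node in the default queue, up to the 336-hour limit
    qsub -q default -l nodes=1:ppn=12,walltime=336:00:00 myjob.sh
    # Check the status of your jobs
    qstat -u $USER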