AIDA Cluster


AIDA General Information

  • Aida is a private cluster with restricted access to members of the bs54_0001 and rab38_0001 groups.
  • Head node: aida.cac.cornell.edu (access via ssh; a minimal connection sketch follows this list)
  • 6 compute nodes with V100 GPUs (c0017-c0022)
  • 6 compute nodes with A100 GPUs (c0071-c0076)
  • many nodes from the former Atlas2 cluster
  • Please send any questions and problem reports to: cac-help@cornell.edu
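
The head node is reached with any standard ssh client, e.g. ssh your_cac_username@aida.cac.cornell.edu from a terminal. The Python sketch below does the same thing through subprocess; the CAC_USER variable, the fallback username, and the remote "hostname" command are illustrative placeholders, and only the host name comes from this page.

    import os
    import subprocess

    # Connect to the AIDA head node and run a trivial command over ssh.
    # CAC_USER / "your_cac_username" are hypothetical placeholders; only
    # the host name aida.cac.cornell.edu comes from this page.
    user = os.environ.get("CAC_USER", "your_cac_username")
    result = subprocess.run(
        ["ssh", f"{user}@aida.cac.cornell.edu", "hostname"],
        capture_output=True,
        text=True,
    )
    print(result.stdout.strip())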


Networking

Hardware

  • All nodes below use Intel Xeon CPUs from generations that support vector extensions up to AVX-512 (a quick check is sketched below).
  • All nodes below have hyperthreading turned on.
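
Both claims can be verified once you have a shell on a node; the sketch below simply reads /proc/cpuinfo on Linux and is not specific to AIDA.

    # Verify AVX-512 support and hyperthreading by inspecting /proc/cpuinfo.
    flags = set()
    logical_cpus = 0
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("processor"):
                logical_cpus += 1          # one entry per logical CPU
            elif line.startswith("flags"):
                flags.update(line.split(":", 1)[1].split())

    print("AVX-512F supported:", "avx512f" in flags)
    print("Hyperthreading flag present:", "ht" in flags)
    print("Logical CPUs visible:", logical_cpus)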

c00[17-22]:

    2x 18-core Intel Xeon Skylake 6154 CPUs with base clock 3GHz (turbo up to 3.7GHz)

c0017: 5x Nvidia Tesla V100 16GB GPUs

    Memory: 754GB
    swap: 187GB
    /tmp: 700GB

c00[18-21]: 5x Nvidia Tesla V100 16GB GPUs

    Memory: 376GB
    swap: 187GB
    /tmp: 700GB

c0022: 2x Nvidia Tesla V100 16GB GPUs

    Memory: 1.5TB
    swap: 187GB
    /tmp: 100GB
    /scratch: 1TB

c00[71-76]:

    2x 28-core Intel Xeon Ice Lake Gold 6348 CPUs with base clock 2.6GHz
    4x Nvidia Tesla A100 80GB GPUs
    Memory: 1TB
    swap: 187GB
    /tmp: 3TB
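
To confirm which GPUs a job actually landed on, the nvidia-smi tool that ships with the NVIDIA driver can be queried on the node. The query fields below are standard nvidia-smi options; the Python wrapper itself is only a sketch.

    import subprocess

    # List the GPUs visible on the current node (e.g. 5x V100 16GB on
    # c0017, 4x A100 80GB on c00[71-76], depending on where this runs).
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=index,name,memory.total",
         "--format=csv,noheader"],
        capture_output=True,
        text=True,
        check=True,
    )
    print(out.stdout.strip())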

SLURM Partitions

For detailed information and a quick-start SLURM guide, see the Slurm page.
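
As a quick orientation before reading the Slurm page, the partitions visible to your account can be listed with sinfo on the head node. The sketch below wraps the standard command using standard format fields (%P partition, %G generic resources such as GPUs, %D node count, %N node list); the partition names themselves are not documented on this page.

    import subprocess

    # Show SLURM partitions, their GPU resources, node counts, and node
    # lists. Requires the SLURM client tools already present on the
    # head node.
    out = subprocess.run(
        ["sinfo", "--format=%P %G %D %N"],
        capture_output=True,
        text=True,
        check=True,
    )
    print(out.stdout)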