__TOC__
This is a private cluster, accessible only to users in <tt>cayuga_xxxx</tt> projects.
Access to the Cayuga cluster is restricted to connections from the Cornell Ithaca or Weill VPNs.

=Setting up your Access=
Before you can log in to the Cayuga cluster, we presume you have received a "Welcome" e-mail that pointed you to this page.
The following steps guide you through gaining access to the Cayuga cluster using SSH keys, beginning with the 'Credentials for CAC Services' section. You will NOT use a password to log in to this cluster, only your key, which is created in the steps below (therefore, bypass the 'CAC Account Password' section). Be sure to complete each step <b>one time</b> for initial access, starting with the 'Globus Sub' registration.

We suggest opening the credentials setup URL below in a separate browser window so you can keep following these instructions as you work through the steps:
[https://www.cac.cornell.edu/services/myacct.aspx CAC Credentials for CAC Services]

=Instructions for setting up access to the Cayuga cluster=
* Bypassing the section 'CAC Account Password', scroll down to <b>Credentials for CAC Services</b> and click on '''Globus Sub'''

:: [[Image:MyAccount1.jpg|500px]]
* Select ''Weill Cornell Medical College'' OR ''Cornell University''
:: [[Image:NewGlobus.png|500px]]
:and click '''Continue'''

* '''Login using your CWID or NetID''' (based on your previous selection, you will be redirected to either the 'WCM' or 'CU' Web Login page)
::[[Image:NewWeill.png|300px]][[Image:NewCULogin.png|266px]]
::: '''If you have never set up Globus with your account''', you will get '''three prompts/screens''':
::: <b>Continue</b>
:::[[Image:NewGlobusWelcome.png|500px]]
::: <b>Check the box and Continue</b>
:::[[Image:NewGlobusComplete.png|500px]]
::: <b>Allow</b>
:::[[Image:NewCACWebsiteAllow.png|500px]]

* '''Select the Weill ID or Cornell NetID button''' (based on your previous choice)
::[[Image:NewVerify.png|500px]]

* Click on '''Generate ssh key pair'''
** This will generate and register an SSH key for accessing the Cayuga cluster.
::[[Image:SSHKeyClick1.jpg|500px]]

* '''Copy the SSH Private Key''' (click on the copy button) to each computer you will use to access cayuga-login1.cac.cornell.edu, saving it into a private key file (name the file whatever you would like; these instructions call it keyfile-private-file).

* Move keyfile-private-file into the ~/.ssh directory on your personal computer (if you do not have a ~/.ssh directory: mkdir ~/.ssh; chmod 700 ~/.ssh). The key file must be readable and writable only by you (600) and the .ssh directory must be 700 (chmod 600 ~/.ssh/keyfile-private-file), or you will not be able to use the key to access cayuga-login1. A sketch of these commands follows this list.
::[[Image:Key.png|500px]]
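For example, on a Mac or Linux computer the whole sequence might look like the sketch below. The source path ~/Downloads/keyfile-private-file is only an assumption about where you saved the copied key; substitute the actual location and whatever file name you chose.
<pre>
# Create ~/.ssh if it does not already exist and lock down its permissions
mkdir -p ~/.ssh
chmod 700 ~/.ssh

# Move the saved private key into ~/.ssh (adjust the source path to wherever you saved it)
mv ~/Downloads/keyfile-private-file ~/.ssh/keyfile-private-file

# The private key must be readable/writable by you only
chmod 600 ~/.ssh/keyfile-private-file

# Verify: expect drwx------ for .ssh and -rw------- for the key file
ls -ld ~/.ssh
ls -l ~/.ssh/keyfile-private-file
</pre>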

=Connect to the Cayuga Cluster=
* '''Connect to either the Weill or Cornell Ithaca VPN (required)'''
References for setting up the Weill VPN:
(Windows): https://wcmcprd.service-now.com/kb_view.do?sysparm_article=KB0012185
(Mac): https://wcmcprd.service-now.com/kb_view.do?sysparm_article=KB0012172

References for setting up the Cornell Ithaca VPN:
(Windows): https://it.cornell.edu/cuvpn/connect-windows-cu-vpn
(Mac): https://it.cornell.edu/cuvpn/connect-mac-cu-vpn
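Once the VPN is connected, you can optionally confirm that the login node is reachable before trying to log in. The quick check below assumes the <tt>nc</tt> (netcat) utility is available on your computer; if it is not, simply proceed to the ssh step.
<pre>
# Test whether the SSH port (22) on the login node is reachable through the VPN
nc -vz cayuga-login1.cac.cornell.edu 22
</pre>
If the connection times out or is refused, double-check that your VPN session is actually active.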

* '''ssh -i <keyfile-private-file> <UserID>@cayuga-login1.cac.cornell.edu'''
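For example, with the key stored as ~/.ssh/keyfile-private-file and a hypothetical user ID of netid123 (substitute your own CAC user ID), the command would be:
<pre>
ssh -i ~/.ssh/keyfile-private-file netid123@cayuga-login1.cac.cornell.edu
</pre>
Optionally, you can add an entry like the sketch below to ~/.ssh/config on your computer so that a plain <tt>ssh cayuga</tt> picks up the right key and user ID automatically; the alias <tt>cayuga</tt> and the file name are just examples.
<pre>
Host cayuga
    HostName cayuga-login1.cac.cornell.edu
    User netid123
    IdentityFile ~/.ssh/keyfile-private-file
</pre>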

=Technical Documentation for the Cayuga cluster=
https://github.com/CornellCAC/Cayuga/wiki
::<br />
::[[Image:NewWeill.png|300px]][[Image:NewCULogin.png|266px]]<br />
::: If you have never setup globus with your account, you will get three prompts/screens below<br />
::: <b>Continue </b><br />
:::[[Image:NewGlobusWelcome.png|500px]]<br />
::: <b>Check the box and Continue </b><br />
:::[[Image:NewGlobusComplete.png|500px]]<br />
::: <b>Allow </b><br />
:::[[Image:NewCACWebsiteAllow.png|500px]]<br />
<br />
<br />
* Push the Weill ID or Cornell NetID button (based on previous choice) <br />
::[[Image:NewVerify.png|500px]]<br />
<br />
<br />
* Click on '''SSH Key''' <br />
** This will generate and register an SSH key for accessing the cayuga cluster.<br />
** <b>Warning:</b> If you have previously generated an SSH Key, it will be overwritten.<br />
::[[Image:SSHKeyClick.jpg|500px]]<br />
<br />
* Copy the SSH Private Key to your computer <br />
::[[Image:Key.png|500px]]<br />
<br />
=Connect to the Cayuga Cluster=<br />
* connect to either the Weill or Cornell Ithaca VPN ('''required''')<br />
* ssh -i <keyname> <UserID>@cayuga-login1.cac.cornell.edu<br />
<br />
=More Information on SSH setup=<br />
https://www.cac.cornell.edu/wiki/index.php?title=Getting_Started_on_Private_Clusters#Passwordless_SSH<br />
<br />
=Technical Documentation=<br />
https://github.com/CornellCAC/Cayuga/wiki</div>Jhs43https://www.cac.cornell.edu/wiki/index.php?title=Cayuga_Cluster&diff=4179Cayuga Cluster2023-11-13T21:27:56Z<p>Jhs43: /* Setting up your Access */</p>
<hr />
<div><br />
__TOC__<br />
This is a private cluster, accessible to only users in <tt>cayuga_xxxx</tt> projects. </p><br />
Access to the Cayuga cluster is restricted to connections from the Cornell Ithaca or Weill VPNs.<br />
<br />
=Setting up your Access=<br />
In order to login to the cayuga cluster, we presume you have received a "Welcome" e-mail that has pointed you to this website.</p><br />
The following steps will guide you to gain access to the cayuga cluster via keys starting with the 'Credentials for CAC Services' section. You will NOT use a password to login to this cluster, only your key. This key will be created within the next steps. (Therefore, bypass the section 'CAC Account Password'). Be sure to complete each step <b>one time</b> for initial access starting with the 'Globus Sub' registration.</p><br />
<br />
We suggest you may want to '''Open the setup of credentials URL''' as listed below in a separate browser window in order to keep following these instructions as you go through the steps: <br />
[https://www.cac.cornell.edu/services/myacct.aspx CAC Credentials for CAC Services]<br />
<br />
=Instructions start here=<br />
* In the <b>Credentials for CAC Services</b> section Click on '''Globus Sub''' <br />
:: [[Image:MyAccount.jpg|500px]]<br />
* Select ''Weill Cornell Medical College'' OR ''Cornell University'' <br />
:: [[Image:NewGlobus.png||500px]]<br />
:and Continue<br />
<br />
* Login using your CWID or NetID - (based on previous choice this takes you to organization login) <br />
::<br />
::[[Image:NewWeill.png|300px]][[Image:NewCULogin.png|266px]]<br />
::: If you have never setup globus with your account, you will get three prompts/screens below<br />
::: <b>Continue </b><br />
:::[[Image:NewGlobusWelcome.png|500px]]<br />
::: <b>Check the box and Continue </b><br />
:::[[Image:NewGlobusComplete.png|500px]]<br />
::: <b>Allow </b><br />
:::[[Image:NewCACWebsiteAllow.png|500px]]<br />
<br />
<br />
* Push the Weill ID or Cornell NetID button (based on previous choice) <br />
::[[Image:NewVerify.png|500px]]<br />
<br />
<br />
* Click on '''SSH Key''' <br />
** This will generate and register an SSH key for accessing the cayuga cluster.<br />
** <b>Warning:</b> If you have previously generated an SSH Key, it will be overwritten.<br />
::[[Image:SSHKeyClick.jpg|500px]]<br />
<br />
* Copy the SSH Private Key to your computer <br />
::[[Image:Key.png|500px]]<br />
=Connect to the Cayuga Cluster=<br />
* connect to either the Weill or Cornell Ithaca VPN ('''required''')<br />
* ssh -i <keyname> <UserID>@cayuga-login1.cac.cornell.edu<br />
<br />
=More Information on SSH setup=<br />
https://www.cac.cornell.edu/wiki/index.php?title=Getting_Started_on_Private_Clusters#Passwordless_SSH<br />
<br />
=Technical Documentation=<br />
https://github.com/CornellCAC/Cayuga/wiki</div>Jhs43https://www.cac.cornell.edu/wiki/index.php?title=Cayuga_Cluster&diff=4178Cayuga Cluster2023-11-13T21:25:41Z<p>Jhs43: /* Setting up your Access */</p>
<hr />
<div><br />
__TOC__<br />
This is a private cluster, accessible to only users in <tt>cayuga_xxxx</tt> projects. </p><br />
Access to the Cayuga cluster is restricted to connections from the Cornell Ithaca or Weill VPNs.<br />
<br />
=Setting up your Access=<br />
In order to login to the cayuga cluster, we presume you have received a "Welcome" e-mail that has pointed you to this website.</p><br />
The following steps will guide you to gain access to the cayuga cluster via keys starting with the 'Credentials for CAC Services' section. You will NOT use a password to login to this cluster, only your key. This key will be created within the next steps. (Therefore, bypass the section 'CAC Account Password'). Be sure to complete each step <b>one time</b> for initial access starting with the 'Globus Sub' registration.</p><br />
<br />
We suggest you may want to '''Open the setup of credentials URL''' in a separate browser window in order to keep following these instructions as you go through the steps: a<br />
[https://www.cac.cornell.edu/services/myacct.aspx CAC Accounts and Credentials]<br />
<br />
=Instructions start here=<br />
* In the <b>Credentials for CAC Services</b> section Click on '''Globus Sub''' <br />
:: [[Image:MyAccount.jpg|500px]]<br />
* Select ''Weill Cornell Medical College'' OR ''Cornell University'' <br />
:: [[Image:NewGlobus.png||500px]]<br />
:and Continue<br />
<br />
* Login using your CWID or NetID - (based on previous choice this takes you to organization login) <br />
::<br />
::[[Image:NewWeill.png|300px]][[Image:NewCULogin.png|266px]]<br />
::: If you have never setup globus with your account, you will get three prompts/screens below<br />
::: <b>Continue </b><br />
:::[[Image:NewGlobusWelcome.png|500px]]<br />
::: <b>Check the box and Continue </b><br />
:::[[Image:NewGlobusComplete.png|500px]]<br />
::: <b>Allow </b><br />
:::[[Image:NewCACWebsiteAllow.png|500px]]<br />
<br />
<br />
* Push the Weill ID or Cornell NetID button (based on previous choice) <br />
::[[Image:NewVerify.png|500px]]<br />
<br />
<br />
* Click on '''SSH Key''' <br />
** This will generate and register an SSH key for accessing the cayuga cluster.<br />
** <b>Warning:</b> If you have previously generated an SSH Key, it will be overwritten.<br />
::[[Image:SSHKeyClick.jpg|500px]]<br />
<br />
* Copy the SSH Private Key to your computer <br />
::[[Image:Key.png|500px]]<br />
=Connect to the Cayuga Cluster=<br />
* connect to either the Weill or Cornell Ithaca VPN ('''required''')<br />
* ssh -i <keyname> <UserID>@cayuga-login1.cac.cornell.edu<br />
<br />
=More Information on SSH setup=<br />
https://www.cac.cornell.edu/wiki/index.php?title=Getting_Started_on_Private_Clusters#Passwordless_SSH<br />
<br />
=Technical Documentation=<br />
https://github.com/CornellCAC/Cayuga/wiki</div>Jhs43https://www.cac.cornell.edu/wiki/index.php?title=Cayuga_Cluster&diff=4177Cayuga Cluster2023-11-13T21:18:35Z<p>Jhs43: /* Setting up your Access */</p>
<hr />
<div><br />
__TOC__<br />
This is a private cluster, accessible to only users in <tt>cayuga_xxxx</tt> projects. </p><br />
Access to the Cayuga cluster is restricted to connections from the Cornell Ithaca or Weill VPNs.<br />
<br />
=Setting up your Access=<br />
In order to login to the cayuga cluster, we presume you have received a "Welcome" e-mail that has pointed you to this website.</p><br />
The following steps will guide you to gain access to the cayuga cluster via keys starting with the 'Credentials for CAC Services' section. You will NOT use a password to login to this cluster, only your key. This key will be created within the next steps. (Therefore, bypass the section 'CAC Account Password'). Be sure to complete each step <b>one time</b> for initial access starting with the 'Globus Sub' registration.</p><br />
<br />
We suggest you may want to '''Open this URL in a separate browser''' window to assist in following the instructions as you go:<br />
[https://www.cac.cornell.edu/services/myacct.aspx CAC Accounts and Credentials]<br />
<br />
=Instructions start here=<br />
* In the <b>Credentials for CAC Services</b> section Click on '''Globus Sub''' <br />
:: [[Image:MyAccount.jpg|500px]]<br />
* Select ''Weill Cornell Medical College'' OR ''Cornell University'' <br />
:: [[Image:NewGlobus.png||500px]]<br />
:and Continue<br />
<br />
* Login using your CWID or NetID - (based on previous choice this takes you to organization login) <br />
::<br />
::[[Image:NewWeill.png|300px]][[Image:NewCULogin.png|266px]]<br />
::: If you have never setup globus with your account, you will get three prompts/screens below<br />
::: <b>Continue </b><br />
:::[[Image:NewGlobusWelcome.png|500px]]<br />
::: <b>Check the box and Continue </b><br />
:::[[Image:NewGlobusComplete.png|500px]]<br />
::: <b>Allow </b><br />
:::[[Image:NewCACWebsiteAllow.png|500px]]<br />
<br />
<br />
* Push the Weill ID or Cornell NetID button (based on previous choice) <br />
::[[Image:NewVerify.png|500px]]<br />
<br />
<br />
* Click on '''SSH Key''' <br />
** This will generate and register an SSH key for accessing the cayuga cluster.<br />
** <b>Warning:</b> If you have previously generated an SSH Key, it will be overwritten.<br />
::[[Image:SSHKeyClick.jpg|500px]]<br />
<br />
* Copy the SSH Private Key to your computer <br />
::[[Image:Key.png|500px]]<br />
=Connect to the Cayuga Cluster=<br />
* connect to either the Weill or Cornell Ithaca VPN ('''required''')<br />
* ssh -i <keyname> <UserID>@cayuga-login1.cac.cornell.edu<br />
<br />
=More Information on SSH setup=<br />
https://www.cac.cornell.edu/wiki/index.php?title=Getting_Started_on_Private_Clusters#Passwordless_SSH<br />
<br />
=Technical Documentation=<br />
https://github.com/CornellCAC/Cayuga/wiki</div>Jhs43https://www.cac.cornell.edu/wiki/index.php?title=Cayuga_Cluster&diff=4176Cayuga Cluster2023-11-13T21:12:55Z<p>Jhs43: /* Access the CAC Accounts page */</p>
<hr />
<div><br />
__TOC__<br />
This is a private cluster, accessible to only users in <tt>cayuga_xxxx</tt> projects. </p><br />
Access to the Cayuga cluster is restricted to connections from the Cornell Ithaca or Weill VPNs.<br />
<br />
=Setting up your Access=<br />
In order to login to the cayuga cluster, we presume you have received a "Welcome" e-mail that has pointed you to this website.</p><br />
The following steps will guide you to gain access to the cayuga cluster via keys starting with the 'Credentials for CAC Services' section. You will NOT use a password to login to this cluster, only your key. This key will be created within the next steps. (Therefore, bypass the section 'CAC Account Password'). Be sure to complete each step <b>one time</b> for initial access starting with the 'Globus Sub' registration.</p><br />
<br />
We suggest you may want to '''Open this URL in a separate browser''' window so you can follow the instructions below: <br />
[https://www.cac.cornell.edu/services/myacct.aspx CAC Accounts and Credentials]<br />
<br />
=Instructions start here=<br />
* In the <b>Credentials for CAC Services</b> section Click on '''Globus Sub''' <br />
:: [[Image:MyAccount.jpg|500px]]<br />
* Select ''Weill Cornell Medical College'' OR ''Cornell University'' <br />
:: [[Image:NewGlobus.png||500px]]<br />
:and Continue<br />
<br />
* Login using your CWID or NetID - (based on previous choice this takes you to organization login) <br />
::<br />
::[[Image:NewWeill.png|300px]][[Image:NewCULogin.png|266px]]<br />
::: If you have never setup globus with your account, you will get three prompts/screens below<br />
::: <b>Continue </b><br />
:::[[Image:NewGlobusWelcome.png|500px]]<br />
::: <b>Check the box and Continue </b><br />
:::[[Image:NewGlobusComplete.png|500px]]<br />
::: <b>Allow </b><br />
:::[[Image:NewCACWebsiteAllow.png|500px]]<br />
<br />
<br />
* Push the Weill ID or Cornell NetID button (based on previous choice) <br />
::[[Image:NewVerify.png|500px]]<br />
<br />
<br />
* Click on '''SSH Key''' <br />
** This will generate and register an SSH key for accessing the cayuga cluster.<br />
** <b>Warning:</b> If you have previously generated an SSH Key, it will be overwritten.<br />
::[[Image:SSHKeyClick.jpg|500px]]<br />
<br />
* Copy the SSH Private Key to your computer <br />
::[[Image:Key.png|500px]]<br />
=Connect to the Cayuga Cluster=<br />
* connect to either the Weill or Cornell Ithaca VPN ('''required''')<br />
* ssh -i <keyname> <UserID>@cayuga-login1.cac.cornell.edu<br />
<br />
=More Information on SSH setup=<br />
https://www.cac.cornell.edu/wiki/index.php?title=Getting_Started_on_Private_Clusters#Passwordless_SSH<br />
<br />
=Technical Documentation=<br />
https://github.com/CornellCAC/Cayuga/wiki</div>Jhs43https://www.cac.cornell.edu/wiki/index.php?title=Cayuga_Cluster&diff=4175Cayuga Cluster2023-11-13T21:08:07Z<p>Jhs43: /* Setting up your Access */</p>
<hr />
<div><br />
__TOC__<br />
This is a private cluster, accessible to only users in <tt>cayuga_xxxx</tt> projects. </p><br />
Access to the Cayuga cluster is restricted to connections from the Cornell Ithaca or Weill VPNs.<br />
<br />
=Setting up your Access=<br />
In order to login to the cayuga cluster, we presume you have received a "Welcome" e-mail that has pointed you to this website.</p><br />
The following steps will guide you to gain access to the cayuga cluster via keys starting with the 'Credentials for CAC Services' section. You will NOT use a password to login to this cluster, only your key. This key will be created within the next steps. (Therefore, bypass the section 'CAC Account Password'). Be sure to complete each step <b>one time</b> for initial access starting with the 'Globus Sub' registration.</p><br />
<br />
=Access the CAC Accounts page=<br />
* '''Open this URL in a separate browser''' window so you can follow the instructions below: <br />
[https://www.cac.cornell.edu/services/myacct.aspx CAC Accounts and Credentials] </p><br />
<br />
=Instructions start here=<br />
* In the <b>Credentials for CAC Services</b> section Click on '''Globus Sub''' <br />
:: [[Image:MyAccount.jpg|500px]]<br />
* Select ''Weill Cornell Medical College'' OR ''Cornell University'' <br />
:: [[Image:NewGlobus.png||500px]]<br />
:and Continue<br />
<br />
* Login using your CWID or NetID - (based on previous choice this takes you to organization login) <br />
::<br />
::[[Image:NewWeill.png|300px]][[Image:NewCULogin.png|266px]]<br />
::: If you have never setup globus with your account, you will get three prompts/screens below<br />
::: <b>Continue </b><br />
:::[[Image:NewGlobusWelcome.png|500px]]<br />
::: <b>Check the box and Continue </b><br />
:::[[Image:NewGlobusComplete.png|500px]]<br />
::: <b>Allow </b><br />
:::[[Image:NewCACWebsiteAllow.png|500px]]<br />
<br />
<br />
* Push the Weill ID or Cornell NetID button (based on previous choice) <br />
::[[Image:NewVerify.png|500px]]<br />
<br />
<br />
* Click on '''SSH Key''' <br />
** This will generate and register an SSH key for accessing the cayuga cluster.<br />
** <b>Warning:</b> If you have previously generated an SSH Key, it will be overwritten.<br />
::[[Image:SSHKeyClick.jpg|500px]]<br />
<br />
* Copy the SSH Private Key to your computer <br />
::[[Image:Key.png|500px]]<br />
=Connect to the Cayuga Cluster=<br />
* connect to either the Weill or Cornell Ithaca VPN ('''required''')<br />
* ssh -i <keyname> <UserID>@cayuga-login1.cac.cornell.edu<br />
<br />
=More Information on SSH setup=<br />
https://www.cac.cornell.edu/wiki/index.php?title=Getting_Started_on_Private_Clusters#Passwordless_SSH<br />
<br />
=Technical Documentation=<br />
https://github.com/CornellCAC/Cayuga/wiki</div>Jhs43https://www.cac.cornell.edu/wiki/index.php?title=Cayuga_Cluster&diff=4174Cayuga Cluster2023-11-13T21:05:48Z<p>Jhs43: /* Setting up your Access */</p>
<hr />
<div><br />
__TOC__<br />
This is a private cluster, accessible to only users in <tt>cayuga_xxxx</tt> projects. </p><br />
Access to the Cayuga cluster is restricted to connections from the Cornell Ithaca or Weill VPNs.<br />
<br />
=Setting up your Access=<br />
In order to login to the cayuga cluster, we presume you have received a "Welcome" e-mail that has pointed you to this website.</p><br />
The following steps will guide you to gain access to cayuga via keys starting with the 'Credentials for CAC Services' section. You will NOT use a password to login to this cluster; only the key that will be created within the next steps. (Therefore, bypass the section 'CAC Account Password'. Be sure to complete each step <b>one time</b> for initial access starting with the 'Globus Sub' registration under 'Credentials for CAC Services'.</p><br />
<br />
=Access the CAC Accounts page=<br />
* '''Open this URL in a separate browser''' window so you can follow the instructions below: <br />
[https://www.cac.cornell.edu/services/myacct.aspx CAC Accounts and Credentials] </p><br />
<br />
=Instructions start here=<br />
* In the <b>Credentials for CAC Services</b> section Click on '''Globus Sub''' <br />
:: [[Image:MyAccount.jpg|500px]]<br />
* Select ''Weill Cornell Medical College'' OR ''Cornell University'' <br />
:: [[Image:NewGlobus.png||500px]]<br />
:and Continue<br />
<br />
* Login using your CWID or NetID - (based on previous choice this takes you to organization login) <br />
::<br />
::[[Image:NewWeill.png|300px]][[Image:NewCULogin.png|266px]]<br />
::: If you have never setup globus with your account, you will get three prompts/screens below<br />
::: <b>Continue </b><br />
:::[[Image:NewGlobusWelcome.png|500px]]<br />
::: <b>Check the box and Continue </b><br />
:::[[Image:NewGlobusComplete.png|500px]]<br />
::: <b>Allow </b><br />
:::[[Image:NewCACWebsiteAllow.png|500px]]<br />
<br />
<br />
* Push the Weill ID or Cornell NetID button (based on previous choice) <br />
::[[Image:NewVerify.png|500px]]<br />
<br />
<br />
* Click on '''SSH Key''' <br />
** This will generate and register an SSH key for accessing the cayuga cluster.<br />
** <b>Warning:</b> If you have previously generated an SSH Key, it will be overwritten.<br />
::[[Image:SSHKeyClick.jpg|500px]]<br />
<br />
* Copy the SSH Private Key to your computer <br />
::[[Image:Key.png|500px]]<br />
=Connect to the Cayuga Cluster=<br />
* connect to either the Weill or Cornell Ithaca VPN ('''required''')<br />
* ssh -i <keyname> <UserID>@cayuga-login1.cac.cornell.edu<br />
<br />
=More Information on SSH setup=<br />
https://www.cac.cornell.edu/wiki/index.php?title=Getting_Started_on_Private_Clusters#Passwordless_SSH<br />
<br />
=Technical Documentation=<br />
https://github.com/CornellCAC/Cayuga/wiki</div>Jhs43https://www.cac.cornell.edu/wiki/index.php?title=Cayuga_Cluster&diff=4173Cayuga Cluster2023-11-13T20:37:40Z<p>Jhs43: /* Setting up your Access */</p>
<hr />
<div><br />
__TOC__<br />
This is a private cluster, accessible to only users in <tt>cayuga_xxxx</tt> projects. </p><br />
Access to the Cayuga cluster is restricted to connections from the Cornell Ithaca or Weill VPNs.<br />
<br />
=Setting up your Access=<br />
In order to login to the cayuga cluster, we presume you have received a "Welcome" e-mail that has pointed you to this website.</p><br />
The following steps will guide you to gain access to cayuga via keys. You will not use a password to login; only the key that will be created within the next steps. Be sure to complete each step <b>one time</b> for initial access.</p><br />
<br />
=Access the CAC Accounts page=<br />
* '''Open this URL in a separate browser''' window so you can follow the instructions below: <br />
[https://www.cac.cornell.edu/services/myacct.aspx CAC Accounts and Credentials] </p><br />
<br />
=Instructions start here=<br />
* In the <b>Credentials for CAC Services</b> section Click on '''Globus Sub''' <br />
:: [[Image:MyAccount.jpg|500px]]<br />
* Select ''Weill Cornell Medical College'' OR ''Cornell University'' <br />
:: [[Image:NewGlobus.png||500px]]<br />
:and Continue<br />
<br />
* Login using your CWID or NetID - (based on previous choice this takes you to organization login) <br />
::<br />
::[[Image:NewWeill.png|300px]][[Image:NewCULogin.png|266px]]<br />
::: If you have never setup globus with your account, you will get three prompts/screens below<br />
::: <b>Continue </b><br />
:::[[Image:NewGlobusWelcome.png|500px]]<br />
::: <b>Check the box and Continue </b><br />
:::[[Image:NewGlobusComplete.png|500px]]<br />
::: <b>Allow </b><br />
:::[[Image:NewCACWebsiteAllow.png|500px]]<br />
<br />
<br />
* Push the Weill ID or Cornell NetID button (based on previous choice) <br />
::[[Image:NewVerify.png|500px]]<br />
<br />
<br />
* Click on '''SSH Key''' <br />
** This will generate and register an SSH key for accessing the cayuga cluster.<br />
** <b>Warning:</b> If you have previously generated an SSH Key, it will be overwritten.<br />
::[[Image:SSHKeyClick.jpg|500px]]<br />
<br />
* Copy the SSH Private Key to your computer <br />
::[[Image:Key.png|500px]]<br />
=Connect to the Cayuga Cluster=<br />
* connect to either the Weill or Cornell Ithaca VPN ('''required''')<br />
* ssh -i <keyname> <UserID>@cayuga-login1.cac.cornell.edu<br />
<br />
=More Information on SSH setup=<br />
https://www.cac.cornell.edu/wiki/index.php?title=Getting_Started_on_Private_Clusters#Passwordless_SSH<br />
<br />
=Technical Documentation=<br />
https://github.com/CornellCAC/Cayuga/wiki</div>Jhs43https://www.cac.cornell.edu/wiki/index.php?title=Cayuga_Cluster&diff=4172Cayuga Cluster2023-11-13T20:30:57Z<p>Jhs43: /* Setting up your Access */</p>
<hr />
<div><br />
__TOC__<br />
This is a private cluster, accessible to only users in <tt>cayuga_xxxx</tt> projects. </p><br />
Access to the Cayuga cluster is restricted to connections from the Cornell Ithaca or Weill VPNs.<br />
<br />
=Setting up your Access=<br />
In order to login to the cayuga cluster, we presume you have received a "Welcome" e-mail that has pointed you to this website.</p><br />
The following steps will guide you to gain access to cayuga via keys. You will not use a password to login, only the key that will be created within the next steps. Be sure to complete each step <b>one time</b> for initial access.</p><br />
<br />
=Access the CAC Accounts page=<br />
* '''Open this URL in a separate browser''' window so you can follow the instructions below: <br />
[https://www.cac.cornell.edu/services/myacct.aspx CAC Accounts and Credentials] </p><br />
<br />
=Instructions start here=<br />
* In the <b>Credentials for CAC Services</b> section Click on '''Globus Sub''' <br />
:: [[Image:MyAccount.jpg|500px]]<br />
* Select ''Weill Cornell Medical College'' OR ''Cornell University'' <br />
:: [[Image:NewGlobus.png||500px]]<br />
:and Continue<br />
<br />
* Login using your CWID or NetID - (based on previous choice this takes you to organization login) <br />
::<br />
::[[Image:NewWeill.png|300px]][[Image:NewCULogin.png|266px]]<br />
::: If you have never setup globus with your account, you will get three prompts/screens below<br />
::: <b>Continue </b><br />
:::[[Image:NewGlobusWelcome.png|500px]]<br />
::: <b>Check the box and Continue </b><br />
:::[[Image:NewGlobusComplete.png|500px]]<br />
::: <b>Allow </b><br />
:::[[Image:NewCACWebsiteAllow.png|500px]]<br />
<br />
<br />
* Push the Weill ID or Cornell NetID button (based on previous choice) <br />
::[[Image:NewVerify.png|500px]]<br />
<br />
<br />
* Click on '''SSH Key''' <br />
** This will generate and register an SSH key for accessing the cayuga cluster.<br />
** <b>Warning:</b> If you have previously generated an SSH Key, it will be overwritten.<br />
::[[Image:SSHKeyClick.jpg|500px]]<br />
<br />
* Copy the SSH Private Key to your computer <br />
::[[Image:Key.png|500px]]<br />
=Connect to the Cayuga Cluster=<br />
* connect to either the Weill or Cornell Ithaca VPN ('''required''')<br />
* ssh -i <keyname> <UserID>@cayuga-login1.cac.cornell.edu<br />
<br />
=More Information on SSH setup=<br />
https://www.cac.cornell.edu/wiki/index.php?title=Getting_Started_on_Private_Clusters#Passwordless_SSH<br />
<br />
=Technical Documentation=<br />
https://github.com/CornellCAC/Cayuga/wiki</div>Jhs43https://www.cac.cornell.edu/wiki/index.php?title=Cayuga_Cluster&diff=4171Cayuga Cluster2023-11-13T20:30:15Z<p>Jhs43: /* Setting up your Access */</p>
<hr />
<div><br />
__TOC__<br />
This is a private cluster, accessible to only users in <tt>cayuga_xxxx</tt> projects. </p><br />
Access to the Cayuga cluster is restricted to connections from the Cornell Ithaca or Weill VPNs.<br />
<br />
=Setting up your Access=<br />
In order to login to the cayuga cluster, we presume you have received a Welcome e-mail that has pointed you to this website.</p><br />
The following steps will guide you to gain access to cayuga via keys. You will not use a password to login, only the key that will be created within the next steps. Be sure to complete each step <b>one time</b> for initial access.</p><br />
<br />
=Access the CAC Accounts page=<br />
* '''Open this URL in a separate browser''' window so you can follow the instructions below: <br />
[https://www.cac.cornell.edu/services/myacct.aspx CAC Accounts and Credentials] </p><br />
<br />
=Instructions start here=<br />
* In the <b>Credentials for CAC Services</b> section Click on '''Globus Sub''' <br />
:: [[Image:MyAccount.jpg|500px]]<br />
* Select ''Weill Cornell Medical College'' OR ''Cornell University'' <br />
:: [[Image:NewGlobus.png||500px]]<br />
:and Continue<br />
<br />
* Login using your CWID or NetID - (based on previous choice this takes you to organization login) <br />
::<br />
::[[Image:NewWeill.png|300px]][[Image:NewCULogin.png|266px]]<br />
::: If you have never setup globus with your account, you will get three prompts/screens below<br />
::: <b>Continue </b><br />
:::[[Image:NewGlobusWelcome.png|500px]]<br />
::: <b>Check the box and Continue </b><br />
:::[[Image:NewGlobusComplete.png|500px]]<br />
::: <b>Allow </b><br />
:::[[Image:NewCACWebsiteAllow.png|500px]]<br />
<br />
<br />
* Push the Weill ID or Cornell NetID button (based on previous choice) <br />
::[[Image:NewVerify.png|500px]]<br />
<br />
<br />
* Click on '''SSH Key''' <br />
** This will generate and register an SSH key for accessing the cayuga cluster.<br />
** <b>Warning:</b> If you have previously generated an SSH Key, it will be overwritten.<br />
::[[Image:SSHKeyClick.jpg|500px]]<br />
<br />
* Copy the SSH Private Key to your computer <br />
::[[Image:Key.png|500px]]<br />
=Connect to the Cayuga Cluster=<br />
* connect to either the Weill or Cornell Ithaca VPN ('''required''')<br />
* ssh -i <keyname> <UserID>@cayuga-login1.cac.cornell.edu<br />
<br />
=More Information on SSH setup=<br />
https://www.cac.cornell.edu/wiki/index.php?title=Getting_Started_on_Private_Clusters#Passwordless_SSH<br />
<br />
=Technical Documentation=<br />
https://github.com/CornellCAC/Cayuga/wiki</div>Jhs43https://www.cac.cornell.edu/wiki/index.php?title=Cayuga_Cluster&diff=4103Cayuga Cluster2023-05-11T18:11:41Z<p>Jhs43: /* Home Directories */</p>
<hr />
<div>'''''Work In Progress'''''<br />
<br />
This is a private cluster, accessible to only users in <tt>cayuga</tt> projects.<br />
<br />
=Hardware=<br />
:* Head node: '''cayuga.cac.cornell.edu'''.<br />
:* Access modes: ssh<br />
:* OpenHPC 2.3 with Rocky Linux 8.7<br />
:* x compute nodes<br />
:* Interconnect is InfiniBand<br />
:* Submit HELP requests via [https://{{SERVERNAME}}/help help] or by sending an email to [mailto:help@cac.cornell.edu CAC support]; please include Cayuga in the subject line.<br />
<br />
=File Systems=<br />
==Home Directories==<br />
<br />
== ddn /athena==<br />
<br />
=Scheduler/Queues=<br />
:* The cluster scheduler is Slurm. All nodes are configured to be in the "normal" partition with no time limits. See the [[ slurm | Slurm documentation page ]] for details. '''[[Slurm Quick Start | The Slurm Quick Start guide]]''' is a great place to start. See the [[ Slurm#Requesting_GPUs | Requesting GPUs ]] section for information on how to request GPUs on compute nodes for your jobs. (A minimal example batch script is sketched after the partition table below.)<br />
:* Remember, hyperthreading is enabled on the cluster, so Slurm considers each physical core to consist of two logical CPUs.<br />
:* Partitions (queues):<br />
::{| border="1" cellspacing="0" cellpadding="10"<br />
! Name<br />
! Description<br />
! Time Limit<br />
|-<br />
| normal<br />
| all nodes, each node xxx GPUs<br />
| no limit<br />
|}<br />
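A minimal batch-script sketch for this setup; the partition name comes from the table above, the <tt>--gres</tt> line assumes the standard Slurm GPU syntax described on the Slurm page, and the program name is a placeholder:<br />
 #!/bin/bash<br />
 #SBATCH -p normal                 # the cluster's single partition<br />
 #SBATCH --gres=gpu:1              # request one GPU on the node<br />
 #SBATCH -n 1 -c 2                 # with hyperthreading, 2 logical CPUs = 1 physical core<br />
 srun ./my_program                 # my_program is a placeholder<br />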
<br />
=Software=<br />
<br />
==Work with Environment Modules==<br />
<br />
=Setting up your Access=<br />
User has received Welcome e-mail.</p><br />
User will only need to do this one time. </p><br />
* Go to [https://www.cac.cornell.edu/services/myacct.aspx CAC My Account page] : <br />
* look to the <b>Credentials for CAC Services</b> section Click on '''Globus Sub''' <br />
:: [[Image:MyAccount.jpg|500px]]<br />
* Select ''Weill Cornell Medical College'' OR ''Cornell University'' <br />
:: [[Image:GlobusSelect.jpg|500px]]<br />
:and Continue<br />
<br />
* Login using your CWID or NetID - (based on previous choice this takes you to organization login) <br />
::<br />
::[[Image:Weill.jpg|250px]][[Image:CULogin.jpg|250px]]<br />
<br />
::: If you have never setup globus with your account, you will get three prompts/screens below<br />
::: <b>Continue </b><br />
:::[[Image:GlobusWelcome.jpg|500px]]<br />
::: <b>Check the box and Continue </b><br />
:::[[Image:GlobusComplete.jpg|500px]]<br />
::: <b>Allow </b><br />
:::[[Image:CACWebsiteAllow.jpg|500px]]<br />
<br />
<br />
* Push the Weill ID or Cornell NetID button (based on previous choice) <br />
::[[Image:Verify.jpg|500px]]<br />
<br />
* Click on '''SSH Key''' <br />
::[[Image:SSHKeyClick.jpg|500px]]<br />
<br />
* Copy the SSH Private Key to your computer<br />
::[[Image:SSHKey.jpg|500px]]<br />
* ssh -i <keyname> <UserID>@cayuga.cac.cornell.edu</div>Jhs43https://www.cac.cornell.edu/wiki/index.php?title=Pool_Hardware&diff=3846Pool Hardware2022-02-03T16:25:20Z<p>Jhs43: </p>
<hr />
<div><br />
Hyperthreading is ON for all nodes, with the exception of the "astra" partition nodes (c0[055,063-100]).<br />
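A quick way to confirm this from the scheduler's point of view (the node name <tt>c0001</tt> is just an example):<br />
 sinfo -N -n c0001 -o "%N %c %z"             # %z prints sockets:cores:threads per node<br />
 scontrol show node c0001 | grep -i threadspercore<br />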
<br />
[https://www.cac.cornell.edu/wiki/index.php?title=Pool_Cluster#Partitions Detailed Partition Information]<br />
<br />
<br />
:{| class="wikitable" border="1" cellpadding="5" style="width: auto"<br />
! style="background:#e9e9e9;" | Node Names<br />
! style="background:#e9e9e9;" | Memory per node<br />
! style="background:#e9e9e9;" | Model name<br />
! style="background:#e9e9e9;" | CPU count per node<br />
! style="background:#e9e9e9;" | Core(s) per socket<br />
! style="background:#e9e9e9;" | Sockets<br />
! style="background:#e9e9e9;" | Thread(s) per core<br />
! style= "background:#e9e9e9;" | /tmp size<br />
|-<br />
| c000[1-5]<br />
| align="center" | 64GB<br />
| align="left" | Silicon Mechanics R308.v6/X10DRL-i; <br />
Intel(R) Xeon(R) CPU E5-2660 v3 @ 2.60GHz<br />
| align="center" | 40<br />
| align="center" | 10<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 1.5TB<br />
|-<br />
| c00[06-08]<br />
| align="center" | 124GB<br />
| align="left" | Silicon Mechanics R308.v6/X10DRL-i;<br />
Intel(R) Xeon(R) CPU E5-2660 v3 @ 2.60GHz<br />
| align="center" | 40<br />
| align="center" | 10<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 1.5TB<br />
|-<br />
| c0009<br />
| align="center" | 64GB<br />
| align="left" | Silicon Mechanics R308.v6/X10DRL-i;<br />
Intel(R) Xeon(R) CPU E5-2667 v4 @ 3.20GHz <br />
| align="center" | 32<br />
| align="center" | 8<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 850GB<br />
|-<br />
| c00[10-16,18,26-28,48-49]<br />
| align="center" | 64GB<br />
| align="left" | Silicon Mechanics R308.v5/X10DRL-i; <br />
Intel(R) Xeon(R) CPU E5-2660 v3 @ 2.60GHz<br />
| align="center" | 40<br />
| align="center" | 10<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 1.5TB<br />
|-<br />
| c00[17,20]<br />
| align="center" | 48 GB<br />
| align="left" | Supermicro X8DTL;<br />
Intel(R) Xeon(R) CPU E5630 @ 2.53GHz<br />
| align="center" | 16<br />
| align="center" | 4<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 850GB<br />
|-<br />
| c00[19,22]<br />
| align="center" | 48 GB<br />
| align="left" | Supermicro X8DTL;<br />
Intel(R) Xeon(R) CPU E5640 @ 2.67GHz<br />
| align="center" | 16<br />
| align="center" | 4<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 850GB<br />
|-<br />
| c00[21,23,32-36, 50-53]<br />
| align="center" | 48 GB<br />
| align="left" | Supermicro X8DTL;<br />
Intel(R) Xeon(R) CPU X5650 @ 2.67GHz<br />
| align="center" | 24<br />
| align="center" | 6<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 850GB<br />
|-<br />
| c0024<br />
| align="center" | 124 GB<br />
| align="left" | Silicon Mechanics R308.v6/X10DRL-i;<br />
Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz<br />
| align="center" | 56<br />
| align="center" | 14<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 850GB<br />
|-<br />
| c0029<br />
| align="center" | 124 GB<br />
| align="left" | PowerEdge R430/03XKDV;<br />
Intel(R) Xeon(R) CPU E5-2640 v3 @ 2.60GHz<br />
| align="center" | 32<br />
| align="center" | 8<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 850GB<br />
|-<br />
| c0030<br />
| align="center" | 96 GB<br />
| align="left" | PowerEdge R430/03XKDV;<br />
Intel(R) Xeon(R) CPU E5-2660 v3 @ 2.60GHz<br />
| align="center" | 40<br />
| align="center" | 10<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 850GB<br />
|-<br />
| c0031<br />
| align="center" | 48 GB<br />
| align="left" | Dell Inc. PowerEdge R420/0CN7CM;<br />
Intel(R) Xeon(R) CPU E5-2470 0 @ 2.30GHz<br />
| align="center" | 32<br />
| align="center" | 8<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 1.5TB<br />
|-<br />
| c00[25,37-38,46-47]<br />
| align="center" | 124 GB<br />
| align="left" | Silicon Mechanics R308.v5/X10DRL-i; <br />
Intel Xeon(R) CPU E5-2660 v3 @ 2.60GHz <br />
| align="center" | 40<br />
| align="center" | 10<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 1.5TB<br />
|-<br />
| c0039<br />
| align="center" | 96 GB<br />
| align="left" | Silicon Mechanics R308.v5/X10DRL-i;<br />
Intel(R) Xeon(R) CPU E5-2660 v3 @ 2.60GHz<br />
| align="center" | 40<br />
| align="center" | 10<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 1.5TB<br />
|-<br />
| c0040<br />
| align="center" | 126 GB<br />
| align="left" | Silicon Mechanics Rackform_R353.v6/X10DGQ;<br />
Intel(R) Xeon(R) CPU E5-2680 v4 @ 2.40GHz<br />
| align="center" | 56<br />
| align="center" | 14<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 180GB<br />
|-<br />
| c00[41-43]<br />
| align="center" | 24GB<br />
| align="left" | Dell Inc. PowerEdge R410/01V648;<br />
Intel(R) Xeon(R) CPU X5667 @ 3.07GHz<br />
| align="center" | 16<br />
| align="center" | 4<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 200GB<br />
|-<br />
| c00[44-45]<br />
| align="center" | 48GB<br />
| align="left" | Dell Inc. PowerEdge R410/01V648;<br />
Intel(R) Xeon(R) CPU X5675 @ 3.07GHz<br />
| align="center" | 24<br />
| align="center" | 6<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 410GB<br />
|-<br />
| c0054<br />
| align="center" | 32GB<br />
| align="left" | Supermicro X9DRG-HF/X9DRG-HF;<br />
Intel(R) Xeon(R) CPU E5-2640 0 @ 2.50GHz<br />
| align="center" | 24<br />
| align="center" | 6<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 200GB<br />
|-<br />
| c0[055,063-100]<br />
| align="center" | 32GB<br />
| align="left" | Supermicro X9DRT/X9DRT<br />
Intel(R) Xeon(R) CPU E5-2630 0 @ 2.30GHz<br />
| align="center" | 12<br />
| align="center" | 6<br />
| align="center" | 2<br />
| align="center" | 1<br />
| align="left" | 800GB<br />
|-<br />
| c00[56-61]<br />
| align="center" | 754GB<br />
| align="left" | Intel S2600WFT/S2600WFT<br />
Intel(R) Xeon(R) Platinum 8160 CPU @ 2.10GHz<br />
| align="center" | 96<br />
| align="center" | 24<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | ~3TB<br />
|-<br />
| c0062<br />
| align="center" | 754GB<br />
| align="left" | Intel S2600WFT/S2600WFT<br />
Intel(R) Xeon(R) Platinum 8260 CPU @ 2.40GHz<br />
| align="center" | 96<br />
| align="center" | 24<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | ~3TB<br />
|-<br />
| c001[01,03]<br />
| align="center" | 727GB<br />
| align="left" | Intel S2600WFT<br />
Intel(R) Xeon(R) Gold 6240R CPU @ 2.40GHz<br />
| align="center" | 96<br />
| align="center" | 24<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | ~3TB<br />
|-<br />
| c00102<br />
| align="center" | 1.5T<br />
| align="left" | Intel S2600WFT<br />
Intel(R) Xeon(R) Gold 6240R CPU @ 2.40GHz<br />
| align="center" | 96<br />
| align="center" | 24<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | ~6.5TB<br />
|}<br />
<br />
https://www.cac.cornell.edu/wiki/index.php?title=Pool_Cluster</div>Jhs43https://www.cac.cornell.edu/wiki/index.php?title=Pool_Hardware&diff=3845Pool Hardware2022-02-03T16:23:50Z<p>Jhs43: </p>
<hr />
<div><br />
hyperthreading ON for all nodes with the exception of "astra" partition nodes (c0[055,063-100])<br />
<br />
[https://www.cac.cornell.edu/wiki/index.php?title=Pool_Cluster#Partitions Detailed Partition Information]<br />
<br />
<br />
:{| class="wikitable" border="1" cellpadding="5" style="width: auto"<br />
! style="background:#e9e9e9;" | Node Names<br />
! style="background:#e9e9e9;" | Memory per node<br />
! style="background:#e9e9e9;" | Model name<br />
! style="background:#e9e9e9;" | CPU count per node<br />
! style="background:#e9e9e9;" | Core(s) per socket<br />
! style="background:#e9e9e9;" | Sockets<br />
! style="background:#e9e9e9;" | Thread(s) per core<br />
! style= "background:#e9e9e9;" | /tmp size<br />
|-<br />
| c000[1-5]<br />
| align="center" | 64GB<br />
| align="left" | Silicon Mechanics R308.v6/X10DRL-i; <br />
Intel(R) Xeon(R) CPU E5-2660 v3 @ 2.60GHz<br />
| align="center" | 40<br />
| align="center" | 10<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 1.5TB<br />
|-<br />
| c00[06-08]<br />
| align="center" | 124GB<br />
| align="left" | Silicon Mechanics R308.v6/X10DRL-i;<br />
Intel(R) Xeon(R) CPU E5-2660 v3 @ 2.60GHz<br />
| align="center" | 40<br />
| align="center" | 10<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 1.5TB<br />
|-<br />
| c0009<br />
| align="center" | 64GB<br />
| align="left" | Silicon Mechanics R308.v6/X10DRL-i;<br />
Intel(R) Xeon(R) CPU E5-2667 v4 @ 3.20GHz <br />
| align="center" | 32<br />
| align="center" | 8<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 850GB<br />
|-<br />
| c00[10-16,18,26-28,48-49]<br />
| align="center" | 64GB<br />
| align="left" | Silicon Mechanics R308.v5/X10DRL-i; <br />
Intel(R) Xeon(R) CPU E5-2660 v3 @ 2.60GHz<br />
| align="center" | 40<br />
| align="center" | 10<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 1.5TB<br />
|-<br />
| c00[17,20]<br />
| align="center" | 48 GB<br />
| align="left" | Supermicro X8DTL;<br />
Intel(R) Xeon(R) CPU E5630 @ 2.53GHz<br />
| align="center" | 16<br />
| align="center" | 4<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 850GB<br />
|-<br />
| c00[19,22]<br />
| align="center" | 48 GB<br />
| align="left" | Supermicro X8DTL;<br />
Intel(R) Xeon(R) CPU E5640 @ 2.67GHz<br />
| align="center" | 16<br />
| align="center" | 4<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 850GB<br />
|-<br />
| c00[21,23,32-36, 50-53]<br />
| align="center" | 48 GB<br />
| align="left" | Supermicro X8DTL;<br />
Intel(R) Xeon(R) CPU X5650 @ 2.67GH<br />
| align="center" | 24<br />
| align="center" | 6<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 850GB<br />
|-<br />
| c0024<br />
| align="center" | 124 GB<br />
| align="left" | Silicon Mechanics R308.v6/X10DRL-i;<br />
Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz<br />
| align="center" | 56<br />
| align="center" | 14<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 850GB<br />
|-<br />
| c0029<br />
| align="center" | 124 GB<br />
| align="left" | PowerEdge R430/03XKDV;<br />
Intel(R) Xeon(R) CPU E5-2640 v3 @ 2.60GHz<br />
| align="center" | 32<br />
| align="center" | 8<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 850GB<br />
|-<br />
| c0030<br />
| align="center" | 96 GB<br />
| align="left" | PowerEdge R430/03XKDV;<br />
Intel(R) Xeon(R) CPU E5-2660 v3 @ 2.60GHz<br />
| align="center" | 40<br />
| align="center" | 10<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 850GB<br />
|-<br />
| c0031<br />
| align="center" | 48 GB<br />
| align="left" | Dell Inc. PowerEdge R420/0CN7CM;<br />
Intel(R) Xeon(R) CPU E5-2470 0 @ 2.30GHz<br />
| align="center" | 32<br />
| align="center" | 8<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 1.5TB<br />
|-<br />
| c00[25,37-38,46-47]<br />
| align="center" | 124 GB<br />
| align="left" | Silicon Mechanics R308.v5/X10DRL-i; <br />
Intel Xeon(R) CPU E5-2660 v3 @ 2.60GHz <br />
| align="center" | 40<br />
| align="center" | 10<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 1.5TB<br />
|-<br />
| c0039<br />
| align="center" | 96 GB<br />
| align="left" | Silicon Mechanics R308.v5/X10DRL-i;<br />
Intel(R) Xeon(R) CPU E5-2660 v3 @ 2.60GHz<br />
| align="center" | 40<br />
| align="center" | 10<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 1.5TB<br />
|-<br />
| c0040<br />
| align="center" | 126 GB<br />
| align="left" | Silicon Mechanics Rackform_R353.v6/X10DGQ;<br />
Intel(R) Xeon(R) CPU E5-2680 v4 @ 2.40GHz<br />
| align="center" | 56<br />
| align="center" | 14<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 180GB<br />
|-<br />
| c00[41-43]<br />
| align="center" | 24GB<br />
| align="left" | Dell Inc. PowerEdge R410/01V648;<br />
Intel(R) Xeon(R) CPU X5667 @ 3.07GHz<br />
| align="center" | 16<br />
| align="center" | 4<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 200GB<br />
|-<br />
| c00[44-45]<br />
| align="center" | 48GB<br />
| align="left" | Dell Inc. PowerEdge R410/01V648;<br />
Intel(R) Xeon(R) CPU X5675 @ 3.07GHz<br />
| align="center" | 24<br />
| align="center" | 6<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 410GB<br />
|-<br />
| c0054<br />
| align="center" | 32GB<br />
| align="left" | Supermicro X9DRG-HF/X9DRG-HF;<br />
Intel(R) Xeon(R) CPU E5-2640 0 @ 2.50GHz<br />
| align="center" | 24<br />
| align="center" | 6<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 200GB<br />
|-<br />
| c0[055,063-100]<br />
| align="center" | 32GB<br />
| align="left" | Supermicro X9DRT/X9DRT<br />
Intel(R) Xeon(R) CPU E5-2630 0 @ 2.30GHz<br />
| align="center" | 12<br />
| align="center" | 6<br />
| align="center" | 2<br />
| align="center" | 1<br />
| align="left" | 800GB<br />
|-<br />
| c00[56-61]<br />
| align="center" | 754GB<br />
| align="left" | Intel S2600WFT/S2600WFT<br />
Intel(R) Xeon(R) Platinum 8160 CPU @ 2.10GHz<br />
| align="center" | 96<br />
| align="center" | 24<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | ~3TB<br />
|-<br />
| c0062<br />
| align="center" | 754GB<br />
| align="left" | Intel S2600WFT/S2600WFT<br />
Intel(R) Xeon(R) Platinum 8260 CPU @ 2.40GHz<br />
| align="center" | 96<br />
| align="center" | 24<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | ~3TB<br />
|-<br />
| c001[01,03]<br />
| align="center" | 727GB<br />
| align="left" | Intel S2600WFT<br />
Intel(R) Xeon(R) Gold 6240R CPU @ 2.40GHz<br />
| align="center" | 96<br />
| align="center" | 24<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | ~3TB<br />
|-<br />
| c00102<br />
| align="center" | 1.5T<br />
| align="left" | Intel S2600WFT<br />
Intel(R) Xeon(R) Gold 6240R CPU @ 2.40GHz<br />
| align="center" | 96<br />
| align="center" | 24<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | ~6.5TB<br />
|}<br />
<br />
https://www.cac.cornell.edu/wiki/index.php?title=Pool_Cluster</div>Jhs43https://www.cac.cornell.edu/wiki/index.php?title=Pool_Hardware&diff=3844Pool Hardware2022-02-03T16:22:55Z<p>Jhs43: </p>
<hr />
<div><br />
hyperthreading ON for all nodes with the exception of "astra" partition nodes (c0[055,063-100])<br />
<br />
[https://www.cac.cornell.edu/wiki/index.php?title=Pool_Cluster#Partitions Detailed Partition Information]<br />
<br />
<br />
:{| class="wikitable" border="1" cellpadding="5" style="width: auto"<br />
! style="background:#e9e9e9;" | Node Names<br />
! style="background:#e9e9e9;" | Memory per node<br />
! style="background:#e9e9e9;" | Model name<br />
! style="background:#e9e9e9;" | CPU count per node<br />
! style="background:#e9e9e9;" | Core(s) per socket<br />
! style="background:#e9e9e9;" | Sockets<br />
! style="background:#e9e9e9;" | Thread(s) per core<br />
! style= "background:#e9e9e9;" | /tmp size<br />
|-<br />
| c000[1-5]<br />
| align="center" | 64GB<br />
| align="left" | Silicon Mechanics R308.v6/X10DRL-i; <br />
Intel(R) Xeon(R) CPU E5-2660 v3 @ 2.60GHz<br />
| align="center" | 40<br />
| align="center" | 10<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 1.5TB<br />
|-<br />
| c00[06-08]<br />
| align="center" | 124GB<br />
| align="left" | Silicon Mechanics R308.v6/X10DRL-i;<br />
Intel(R) Xeon(R) CPU E5-2660 v3 @ 2.60GHz<br />
| align="center" | 40<br />
| align="center" | 10<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 1.5TB<br />
|-<br />
| c0009<br />
| align="center" | 64GB<br />
| align="left" | Silicon Mechanics R308.v6/X10DRL-i;<br />
Intel(R) Xeon(R) CPU E5-2667 v4 @ 3.20GHz <br />
| align="center" | 32<br />
| align="center" | 8<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 850GB<br />
|-<br />
| c00[10-16,18,26-28,48-49]<br />
| align="center" | 64GB<br />
| align="left" | Silicon Mechanics R308.v5/X10DRL-i; <br />
Intel(R) Xeon(R) CPU E5-2660 v3 @ 2.60GHz<br />
| align="center" | 40<br />
| align="center" | 10<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 1.5TB<br />
|-<br />
| c00[17,20]<br />
| align="center" | 48 GB<br />
| align="left" | Supermicro X8DTL;<br />
Intel(R) Xeon(R) CPU E5630 @ 2.53GHz<br />
| align="center" | 16<br />
| align="center" | 4<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 850GB<br />
|-<br />
| c00[19,22]<br />
| align="center" | 48 GB<br />
| align="left" | Supermicro X8DTL;<br />
Intel(R) Xeon(R) CPU E5640 @ 2.67GHz<br />
| align="center" | 16<br />
| align="center" | 4<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 850GB<br />
|-<br />
| c00[21,23,32-36, 50-53]<br />
| align="center" | 48 GB<br />
| align="left" | Supermicro X8DTL;<br />
Intel(R) Xeon(R) CPU X5650 @ 2.67GH<br />
| align="center" | 24<br />
| align="center" | 6<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 850GB<br />
|-<br />
| c0024<br />
| align="center" | 124 GB<br />
| align="left" | Silicon Mechanics R308.v6/X10DRL-i;<br />
Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz<br />
| align="center" | 56<br />
| align="center" | 14<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 850GB<br />
|-<br />
| c0029<br />
| align="center" | 124 GB<br />
| align="left" | PowerEdge R430/03XKDV;<br />
Intel(R) Xeon(R) CPU E5-2640 v3 @ 2.60GHz<br />
| align="center" | 32<br />
| align="center" | 8<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 850GB<br />
|-<br />
| c0030<br />
| align="center" | 96 GB<br />
| align="left" | PowerEdge R430/03XKDV;<br />
Intel(R) Xeon(R) CPU E5-2660 v3 @ 2.60GHz<br />
| align="center" | 40<br />
| align="center" | 10<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 850GB<br />
|-<br />
| c0031<br />
| align="center" | 48 GB<br />
| align="left" | Dell Inc. PowerEdge R420/0CN7CM;<br />
Intel(R) Xeon(R) CPU E5-2470 0 @ 2.30GHz<br />
| align="center" | 32<br />
| align="center" | 8<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 1.5TB<br />
|-<br />
| c00[25,37-38,46-47]<br />
| align="center" | 124 GB<br />
| align="left" | Silicon Mechanics R308.v5/X10DRL-i; <br />
Intel Xeon(R) CPU E5-2660 v3 @ 2.60GHz <br />
| align="center" | 40<br />
| align="center" | 10<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 1.5TB<br />
|-<br />
| c0039<br />
| align="center" | 96 GB<br />
| align="left" | Silicon Mechanics R308.v5/X10DRL-i;<br />
Intel(R) Xeon(R) CPU E5-2660 v3 @ 2.60GHz<br />
| align="center" | 40<br />
| align="center" | 10<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 1.5TB<br />
|-<br />
| c0040<br />
| align="center" | 126 GB<br />
| align="left" | Silicon Mechanics Rackform_R353.v6/X10DGQ;<br />
Intel(R) Xeon(R) CPU E5-2680 v4 @ 2.40GHz<br />
| align="center" | 56<br />
| align="center" | 14<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 180GB<br />
|-<br />
| c00[41-43]<br />
| align="center" | 24GB<br />
| align="left" | Dell Inc. PowerEdge R410/01V648;<br />
Intel(R) Xeon(R) CPU X5667 @ 3.07GHz<br />
| align="center" | 16<br />
| align="center" | 4<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 200GB<br />
|-<br />
| c00[44-45]<br />
| align="center" | 48GB<br />
| align="left" | Dell Inc. PowerEdge R410/01V648;<br />
Intel(R) Xeon(R) CPU X5675 @ 3.07GHz<br />
| align="center" | 24<br />
| align="center" | 6<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 410GB<br />
|-<br />
| c0054<br />
| align="center" | 32GB<br />
| align="left" | Supermicro X9DRG-HF/X9DRG-HF;<br />
Intel(R) Xeon(R) CPU E5-2640 0 @ 2.50GHz<br />
| align="center" | 24<br />
| align="center" | 6<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 200GB<br />
|-<br />
| c0[055,063-100]<br />
| align="center" | 32GB<br />
| align="left" | Supermicro X9DRT/X9DRT<br />
Intel(R) Xeon(R) CPU E5-2630 0 @ 2.30GHz<br />
| align="center" | 12<br />
| align="center" | 6<br />
| align="center" | 2<br />
| align="center" | 1<br />
| align="left" | 800GB<br />
|-<br />
| c00[56-61]<br />
| align="center" | 754GB<br />
| align="left" | Intel S2600WFT/S2600WFT<br />
Intel(R) Xeon(R) Platinum 8160 CPU @ 2.10GHz<br />
| align="center" | 96<br />
| align="center" | 24<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | ~3TB<br />
|-<br />
| c0062<br />
| align="center" | 754GB<br />
| align="left" | Intel S2600WFT/S2600WFT<br />
Intel(R) Xeon(R) Platinum 8260 CPU @ 2.40GHz<br />
| align="center" | 96<br />
| align="center" | 24<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | ~3TB<br />
|-<br />
| c001[01,03]<br />
| align="center" | 727GB<br />
| align="left" | Intel S2600WFT<br />
Intel(R) Xeon(R) Gold 6240R CPU @ 2.40GHz<br />
| align="center" | 96<br />
| align="center" | 24<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | ~3TB<br />
|-<br />
| c00102<br />
| align="center" | 1.5T<br />
| align="left" | Intel S2600WFT<br />
Intel(R) Xeon(R) Gold 6240R CPU @ 2.40GHz<br />
| align="center" | 96<br />
| align="center" | 24<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | ~6.5TB<br />
|}<br />
<br />
! style="background:#e9e9e9;" | Node Names<br />
! style="background:#e9e9e9;" | Memory per node<br />
! style="background:#e9e9e9;" | Model name<br />
! style="background:#e9e9e9;" | CPU count per node<br />
! style="background:#e9e9e9;" | Core(s) per socket<br />
! style="background:#e9e9e9;" | Sockets<br />
! style="background:#e9e9e9;" | Thread(s) per core<br />
! style= "background:#e9e9e9;" | /tmp size<br />
<br />
https://www.cac.cornell.edu/wiki/index.php?title=Pool_Cluster</div>Jhs43https://www.cac.cornell.edu/wiki/index.php?title=CAPECRYSTAL_Cluster&diff=3585CAPECRYSTAL Cluster2021-08-31T03:33:43Z<p>Jhs43: /* Queues/Partitions */</p>
<hr />
<div>=== General Information ===<br />
<br />
:* capecrystal is a private cluster with restricted access to the following groups: jd732_0001<br />
:* Head node: '''capecrystal.cac.cornell.edu''' ([[#How To Login|access via ssh]])<br />
:** [https://openhpc.community/ OpenHPC] deployment running Centos 7.6<br />
:** Cluster scheduler: slurm 18.08.8<br />
:* 5 GPU compute nodes [[#Hardware|c000[1-5]]] <br />
:* Current Cluster Status: [http://capecrystal.cac.cornell.edu/ganglia/ Ganglia].<br />
:* data on the capecrystal cluster is <tt>'''NOT'''</tt> backed up<br />
:* Please send any questions and report problems to: [mailto:cac-help@cornell.edu cac-help@cornell.edu]<br />
<br />
=== Hardware ===<br />
:* There is an 893GB local /scratch disk on the head node only.<br />
:* capecrystal.cac.cornell.edu: PowerEdge R440; memory: 92GB, swap: 15GB <br />
:* capecrystal compute nodes c000[1-5]:<br />
:** PowerEdge C4140; memory: 187GB, swap: 5GB<br />
:** Each node contains 4 GPUs: 3D controller: NVIDIA Corporation GV100GL [Tesla V100 SXM2 32GB] (rev a1)<br />
<br />
=== Networking ===<br />
:* All nodes have a 10Gb Ethernet connection for eth0 on a private net served out from the capecrystal head node.<br />
:* All nodes include: Infiniband controller: Mellanox Technologies MT27800 Family [ConnectX-5] (ibstat will show you the link status on the compute nodes; see the example below)<br />
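A quick way to check the Infiniband link state from a compute node is to run ibstat through the scheduler; a minimal sketch, assuming the default 'cpu' partition described below:<br />
<pre><br />
## run ibstat as a one-task job on a compute node<br />
srun -p cpu ibstat<br />
</pre><br />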
<br />
=== How To Login ===<br />
<br />
:* To get started, login to the head node <tt>capecrystal.cac.cornell.edu</tt> via ssh (see the example below).<br />
:* You will be prompted for your [https://www.cac.cornell.edu/services/myacct.aspx CAC account] password<br />
:* If you are unfamiliar with Linux and ssh, we suggest reading the [[Linux Tutorial]] and looking into how to [[Connect to Linux]] before proceeding.<br />
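For example, from a terminal on your own machine (replace &lt;user&gt; with your CAC user name):<br />
<pre><br />
ssh <user>@capecrystal.cac.cornell.edu<br />
</pre><br />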
<br />
=== Running Jobs / Slurm Scheduler ===<br />
<br />
==== Queues/Partitions ====<br />
("Partition" is the term used by slurm for "Queues")<br />
<br />
:* '''hyperthreading is turned on for ALL nodes'''<br />
:* '''slurm considers each node to have the following:'''<br />
:** CPUs=56 Boards=1 SocketsPerBoard=2 CoresPerSocket=14 ThreadsPerCore=2 RealMemory=191840<br />
:** all partitions have a default time of 1 hour<br />
''Partitions on the capecrystal cluster:''<br />
<br />
:{| class="wikitable" border="1" cellpadding="4" style="width: auto"<br />
! style="background:#e9e9e9;" | Queue/Partition<br />
! style="background:#e9e9e9;" | Number of nodes<br />
! style="background:#e9e9e9;" | Node Names<br />
! style="background:#e9e9e9;" | Limits<br />
! style="background:#e9e9e9;" | Group Access<br />
|-<br />
| '''cpushort''' <br />
| align="center" | 5<br />
| align="center" | c000[1-5]<br />
| align="center" | walltime limit: 2 hours <br />
| jd732_0001<br />
|-<br />
| '''cpu''' (default)<br />
| align="center" | 5<br />
| align="center" | c000[1-5]<br />
| align="center" | walltime limit: 12 hours <br />
| jd732_0001<br />
|-<br />
| '''cpulong''' <br />
| align="center" | 5<br />
| align="center" | c000[1-5]<br />
| align="center" | walltime limit: 168 hours (i.e. 7 days)<br />
| jd732_0001<br />
|-<br />
| '''gpushort''' <br />
| align="center" | 5<br />
| align="center" | c000[1-5]<br />
| align="center" | walltime limit: 2 hours <br />
| jd732_0001<br />
|-<br />
| '''gpu''' <br />
| align="center" | 5<br />
| align="center" | c000[1-5]<br />
| align="center" | walltime limit: 12 hours <br />
| jd732_0001<br />
|-<br />
| '''gpulong''' <br />
| align="center" | 5<br />
| align="center" | c000[1-5]<br />
| align="center" | walltime limit: 168 hours (i.e. 7 days)<br />
| jd732_0001<br />
|}<br />
<br />
ALL partitions have a '''QOS (Quality of Service)''' set up so that you can submit a job at lower or higher priority on the capecrystal cluster.<br />
The available QOS levels are: '''low, normal, high and urgent (default: normal)'''.<br />
Therefore, if you urgently need your job to move up in the wait queue, submit with:<br />
sbatch --qos=urgent yourjobscript.sh<br />
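The QOS can also be requested with a directive inside the batch script itself; a minimal sketch (the job name and script body are illustrative):<br />
<pre><br />
#!/bin/bash<br />
#SBATCH -J qos_example<br />
#SBATCH -p cpu<br />
#SBATCH --time=00:10:00<br />
## request the 'urgent' QOS instead of the default 'normal'<br />
#SBATCH --qos=urgent<br />
<br />
echo "running with the urgent QOS on `hostname`"<br />
</pre><br />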
<br />
==== Slurm Scheduler HELP ====<br />
<br />
'''[[Slurm | CAC's Slurm page]]''' is the first place to go to learn about how to use Slurm on CAC clusters.<br />
Please take the time to read this page, giving special attention to the parts that pertain to the types of jobs you want to run.<br />
<br />
==== Examples & Tips ====<br />
:* A shared directory has been set up for singularity containers so that every user does not have to download the same container separately. You will still need to copy the container from the shared folder into your $HOME for singularity to run successfully (see the example below). You can put any container into: <br />
/opt/ohpc/pub/containers<br />
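For example, to copy the HOOMD-blue image used in the batch example further down this page into your home directory:<br />
<pre><br />
mkdir -p $HOME/hoomd<br />
cp /opt/ohpc/pub/containers/hoomd/software.simg $HOME/hoomd/<br />
</pre><br />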
<br />
:* All lines beginning with "#SBATCH" are directives for the scheduler to read. If you want a line ignored, you must place 2 "##" at the beginning of the line.<br />
<br />
===== Example Singularity HOOMD-blue GPU batch job =====<br />
<br />
Below, replace 'username' with your CAC user name and fetch or create the needed files on the cluster head node in your home directory.<br />
<br />
first_test.py3 sample script can be found in /opt/ohpc/pub/containers/hoomd (OR it can be created from the simple example https://hoomd-blue.readthedocs.io/en/stable/index.html)<br />
<br />
software.simg Singularity image can be copied from /opt/ohpc/pub/containers/hoomd to your $HOME (instructions for fetching an image yourself with singularity pull are at https://hoomd-blue.readthedocs.io/en/stable/installation.html)<br />
<br />
<br />
usage : sbatch singularity_hoomd_ex.run<br />
<br />
singularity_hoomd_ex.run example batch script (Remember to replace 'username' with your CAC user name):<br />
<pre><br />
#!/bin/bash<br />
#SBATCH --job-name="singularity_hoomd_ex"<br />
#SBATCH --output="singularity_hoomd_ex.%j.out"<br />
#SBATCH --error="singularity_hoomd_ex.%j.err"<br />
#SBATCH --nodes=1<br />
#SBATCH --ntasks-per-core=1<br />
#SBATCH --time=00:10:00<br />
set -x<br />
CONTAINER=/home/fs01/username/hoomd/software.simg<br />
SCRIPT=/home/fs01/username/hoomd/first_test.py3<br />
<br />
# placeholder for debugging<br />
module load singularity<br />
which singularity<br />
set +x<br />
echo "next command to run: singularity exec --nv ${CONTAINER} python3 ${SCRIPT}"<br />
singularity exec --nv ${CONTAINER} python3 ${SCRIPT}<br />
<br />
<br />
## debugging commands can be inserted above<br />
#echo hostname<br />
#hostname<br />
#echo "lspci | grep -iE ' VGA |NVI'"<br />
#lspci | grep -iE ' VGA |NVI'<br />
#echo nvclock<br />
#nvclock<br />
#echo "which nvcc"<br />
#which nvcc<br />
#echo "nvcc --version"<br />
#nvcc --version<br />
#echo PATH<br />
#echo "$PATH"<br />
#echo LD_LIBRARY_PATH<br />
#echo "$LD_LIBRARY_PATH"<br />
<br />
</pre><br />
<br />
===== Example Singularity mpi test batch job =====<br />
<br />
usage : sbatch singularity_mpi_ex.run<br />
<br />
Example singularity_mpi_ex.run script (remember to replace 'username' with your CAC user name):<br />
<pre><br />
#!/bin/bash<br />
#SBATCH --job-name="singularity_mpi_ex"<br />
#SBATCH --output="singularity_mpi_ex.%j.out"<br />
#SBATCH --error="singularity_mpi_ex.%j.err"<br />
#SBATCH --nodes=5<br />
#SBATCH --ntasks-per-core=1<br />
#SBATCH --time=00:01:00<br />
<br />
CONTAINER=/home/fs01/username/hoomd/software.simg<br />
<br />
module load singularity<br />
<br />
echo "script job head node $(hostname)"<br />
<br />
# silence mpi component warnings<br />
export MPI_MCA_mca_base_component_show_load_errors=0<br />
export PMIX_MCA_mca_base_component_show_load_errors=0<br />
<br />
echo "next command to run: mpirun --mca btl self,tcp --mca btl_tcp_if_include eth2 singularity exec ${CONTAINER} hostname"<br />
mpirun --mca btl self,tcp --mca btl_tcp_if_include eth2 singularity exec ${CONTAINER} hostname<br />
<br />
# clean up exit code , consult .out and .err files in working directory upon job completion for mpi debugging<br />
exit 0<br />
<br />
</pre><br />
<br />
===== Copy your data to /tmp to avoid heavy I/O from your nfs mounted $HOME !!! =====<br />
* We cannot stress enough how important this is to avoid delays on the file systems.<br />
<br />
<pre><br />
#!/bin/bash<br />
## -J sets the name of job<br />
#SBATCH -J TestJob<br />
<br />
## -p sets the partition (queue)<br />
#SBATCH -p cpu<br />
## time is HH:MM:SS<br />
#SBATCH --time=00:01:30<br />
#SBATCH --cpus-per-task=15<br />
<br />
## define job stdout file<br />
#SBATCH -o testnormal-%j.out<br />
<br />
## define job stderr file<br />
#SBATCH -e testnormal-%j.err<br />
<br />
echo "starting $SLURM_JOBID at `date` on `hostname`"<br />
echo "my home dir is $HOME" <br />
<br />
## copying my data to a local tmp space on the compute node to reduce I/O<br />
MYTMP=/tmp/$USER/$SLURM_JOB_ID<br />
/usr/bin/mkdir -p $MYTMP || exit $?<br />
echo "Copying my data over..."<br />
cp -rp $SLURM_SUBMIT_DIR/mydatadir $MYTMP || exit $?<br />
<br />
## run your job executables here...<br />
<br />
echo "ended at `date` on `hostname`"<br />
echo "copy your data back to your $HOME" <br />
/usr/bin/mkdir -p $SLURM_SUBMIT_DIR/newdatadir || exit $?<br />
cp -rp $MYTMP $SLURM_SUBMIT_DIR/newdatadir || exit $?<br />
## remove your data from the compute node /tmp space<br />
rm -rf $MYTMP <br />
<br />
exit 0<br />
<br />
</pre><br />
<br />
Explanation: /tmp refers to a local directory that is found on each compute node. It is faster to use /tmp because when you read and write to it, the I/O does not have to go across the network, and it does not have to compete with the other users of a shared network drive (such as the one that holds everyone's /home).<br />
<br />
To look at files in /tmp while your job is running, you can ssh to the login node, then do a further ssh to the compute node that you were assigned. Then you can cd to /tmp on that node and inspect the files in there with <code>cat</code> or <code>less</code>.<br />
<br />
Note, if your application is producing 1000's of output files that you need to save, then it is far more efficient to put them all into a single tar or zip file before copying them into $HOME as the final step.<br />
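For example, assuming your job wrote its results under the $MYTMP directory used in the script above, you could bundle everything into one archive before the final copy (the archive name is illustrative):<br />
<pre><br />
cd $MYTMP<br />
tar -czf results-$SLURM_JOB_ID.tar.gz mydatadir<br />
cp results-$SLURM_JOB_ID.tar.gz $SLURM_SUBMIT_DIR/ || exit $?<br />
</pre><br />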
<br />
==Software==<br />
=== Modules ===<br />
The 'lmod module' system is implemented for listing and loading modules that put you in the software environment you need.<br />
(For more information, type: '''module help''')<br />
<br />
To list current (default) modules loaded upon logging in:<br />
module list<br />
<br />
To list the software available under the currently loaded compiler (shown in the listing above), type:<br />
module avail <br />
(to get a more complete listing, type: module spider)<br />
Software listed with "(L)" is already loaded. <br />
<br />
EXAMPLE:<br />
To be sure you are using the environment setup for cmake3, you would type:<br />
<pre><br />
module load cmake3<br />
module list (you will see cmake3 is loaded (L))<br />
* when done, either logout and log back in or type:<br />
module unload cmake3<br />
</pre><br />
To swap to a different set of modules per compiler, you can swap out your currently loaded compiler. <br />
<br />
EXAMPLE:<br />
<pre><br />
module swap gnu8 gnu7<br />
</pre><br />
You will then see a different set of available modules upon typing: module avail<br />
<br />
You can create your own modules and place them in your $HOME. <br />
Once created, type:<br />
module use $HOME/path/to/personal/modulefiles<br />
This will prepend the path to $MODULEPATH<br />
[type echo $MODULEPATH to confirm]<br />
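A minimal sketch of that workflow, assuming you keep personal modulefiles under $HOME/modulefiles (the directory and module names are illustrative):<br />
<pre><br />
## create a directory tree for a personal module called 'myapp'<br />
mkdir -p $HOME/modulefiles/myapp<br />
## place a modulefile there (e.g. myapp/1.0.lua), then tell lmod about the tree:<br />
module use $HOME/modulefiles<br />
echo $MODULEPATH<br />
module avail myapp<br />
</pre><br />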
<br />
Reference: [http://lmod.readthedocs.io/en/latest/020_advanced.html User Created Modules]<br />
<br />
:* It is usually possible to install software in your home directory.<br />
:* List installed software via rpms: '''rpm -qa'''. Use grep to search for specific software: rpm -qa | grep sw_name [i.e. rpm -qa | grep perl ]<br />
<br />
=== Build software from source into your home directory ($HOME) ===<br />
<pre><br />
* download and extract your source<br />
* cd to your extracted source directory<br />
./configure --prefix=$HOME/appdir<br />
[You need to refer to your source documentation to get the full list of options you can provide 'configure' with.]<br />
make<br />
make install<br />
<br />
The binary would then be located in ~/appdir/bin. <br />
* Add the following to your $HOME/.bashrc: <br />
export PATH="$HOME/appdir/bin:$PATH"<br />
* Reload the .bashrc file with source ~/.bashrc. (or logout and log back in)<br />
</pre><br />
<br />
=== Jupyter Notebook Server ===<br />
==== Installation ====<br />
Follow these steps to install and run your own Jupyter Notebook server on capecrystal.<br />
<br />
# Create a jupyter-notebook python virtual environment:<br />
:# Load <code>python/3.8.3</code> module: <code>module load python/3.8.3</code><br />
:# Create a jupyter-notebook python virtual environment: <code>python -m venv jupyter-notebook</code><br />
# Install jupyter notebook in the jupyter-notebook virtual environment:<br />
:# Activate jupyter-notebook virtual environment: <code> source jupyter-notebook/bin/activate</code><br />
:# Install jupyter notebook: <code>pip install jupyter notebook</code><br />
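Run together on the capecrystal head node, the installation sequence above looks like this:<br />
<pre><br />
module load python/3.8.3<br />
python -m venv jupyter-notebook<br />
source jupyter-notebook/bin/activate<br />
pip install jupyter notebook<br />
</pre><br />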
<br />
==== Start the Jupyter Notebook Server on Compute Node ====<br />
Use these steps to start the Jupyter Notebook server on a compute node:<br />
<br />
* Obtain shell access to a compute node and identify the compute node:<br />
<pre><br />
-bash-4.2$ srun --pty /bin/bash<br />
bash-4.2$ hostname<br />
c0002<br />
</pre><br />
<br />
* Start jupyter notebook on the compute node:<br />
<pre><br />
module load python/3.8.3<br />
source jupyter-notebook/bin/activate<br />
jupyter notebook<br />
</pre><br />
<br />
==== Access the Notebook ====<br />
[[File:Jupyter notebook port forwarding.jpg|center]]<br />
<br />
* In a new terminal on your client, ssh to capecrystal head node. Establish an ssh tunnel between the head node's forwarding port (10000 in this example) and port 8888 on the compute node (c0002 in this example). ''Leave this terminal open after the ssh tunnel is established''.<br />
<pre><br />
ssh -L 10000:localhost:8888 c0002<br />
</pre><br />
<br />
* In a new terminal on your client, establish an ssh tunnel between the client's source port (20000 in this example) and the forwarding port on the capecrystal head node. ''Leave this terminal open after the ssh tunnel is established''.<br />
<pre><br />
ssh -L 20000:localhost:10000 <user>@capecrystal.cac.cornell.edu<br />
</pre><br />
<br />
* On your client, point your web browser to:<br />
<pre><br />
http://localhost:20000<br />
</pre></div>Jhs43https://www.cac.cornell.edu/wiki/index.php?title=Private_Clusters&diff=3553Private Clusters2021-06-21T13:01:35Z<p>Jhs43: /* Restricted Use - Privately owned computer resources (in alphabetical order) */</p>
<hr />
<div>===Restricted Use - Privately owned computer resources (in alphabetical order)===<br />
<br />
<br />
* [[ACLAB]] <br />
* [[AIDA Cluster]] <br />
* ASTRA Cluster - [[pool Cluster]]<br />
* [[ATLAS2 Cluster]] <br />
* [[CAPECRYSTAL Cluster]]<br />
* [[Gu Lab]]<br />
* [[HeritageWatch]]<br />
* [[MARVIN Cluster]] <br />
* [[Marx1 Cluster]]<br />
* [[pool Cluster]] - Includes VEGA and ASTRA<br />
* [[TARDIS Cluster]]<br />
** [[TARDIS3 Cluster]] - ongoing OpenHPC 2/CentOS 8 reinstallation of [[TARDIS Cluster]]<br />
* [[THECUBE Cluster]] <br />
* VEGA Cluster - moved to [[pool Cluster]]<br />
* [[vessel Cluster]]<br />
* [[WALLE Cluster]]<br />
* [[WALLER Cluster]]<br />
<br />
===General Documentation===<br />
*[[Rules for Creating Passwords]]<br />
*[[Linux Tutorial]]<br />
*[[Connect to Linux]]<br />
*[[Linux Tips and Tricks]]<br />
*[[FAQ|Troubleshooting]]<br />
*[[Slurm]]<br />
*[[Modules (Lmod)]]</div>Jhs43https://www.cac.cornell.edu/wiki/index.php?title=Pool_Hardware&diff=3359Pool Hardware2020-12-11T16:01:48Z<p>Jhs43: </p>
<hr />
<div><br />
hyperthreading ON for all nodes with the exception of "astra" partition nodes (c0[055,063-100])<br />
<br />
[https://www.cac.cornell.edu/wiki/index.php?title=Pool_Cluster#Partitions Detailed Partition Information]<br />
<br />
<br />
:{| class="wikitable" border="1" cellpadding="5" style="width: auto"<br />
! style="background:#e9e9e9;" | Node Names<br />
! style="background:#e9e9e9;" | Memory per node<br />
! style="background:#e9e9e9;" | Model name<br />
! style="background:#e9e9e9;" | CPU count per node<br />
! style="background:#e9e9e9;" | Core(s) per socket<br />
! style="background:#e9e9e9;" | Sockets<br />
! style="background:#e9e9e9;" | Thread(s) per core<br />
! style= "background:#e9e9e9;" | /tmp size<br />
|-<br />
| c000[1-5]<br />
| align="center" | 64GB<br />
| align="left" | Silicon Mechanics R308.v6/X10DRL-i; <br />
Intel(R) Xeon(R) CPU E5-2660 v3 @ 2.60GHz<br />
| align="center" | 40<br />
| align="center" | 10<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 1.5TB<br />
|-<br />
| c00[06-08]<br />
| align="center" | 124GB<br />
| align="left" | Silicon Mechanics R308.v6/X10DRL-i;<br />
Intel(R) Xeon(R) CPU E5-2660 v3 @ 2.60GHz<br />
| align="center" | 40<br />
| align="center" | 10<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 1.5TB<br />
|-<br />
| c0009<br />
| align="center" | 64GB<br />
| align="left" | Silicon Mechanics R308.v6/X10DRL-i;<br />
Intel(R) Xeon(R) CPU E5-2667 v4 @ 3.20GHz <br />
| align="center" | 32<br />
| align="center" | 8<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 850GB<br />
|-<br />
| c00[10-16,18,26-28,48-49]<br />
| align="center" | 64GB<br />
| align="left" | Silicon Mechanics R308.v5/X10DRL-i; <br />
Intel(R) Xeon(R) CPU E5-2660 v3 @ 2.60GHz<br />
| align="center" | 40<br />
| align="center" | 10<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 1.5TB<br />
|-<br />
| c00[17,20]<br />
| align="center" | 48 GB<br />
| align="left" | Supermicro X8DTL;<br />
Intel(R) Xeon(R) CPU E5630 @ 2.53GHz<br />
| align="center" | 16<br />
| align="center" | 4<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 850GB<br />
|-<br />
| c00[19,22]<br />
| align="center" | 48 GB<br />
| align="left" | Supermicro X8DTL;<br />
Intel(R) Xeon(R) CPU E5640 @ 2.67GHz<br />
| align="center" | 16<br />
| align="center" | 4<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 850GB<br />
|-<br />
| c00[21,23,32-36, 50-53]<br />
| align="center" | 48 GB<br />
| align="left" | Supermicro X8DTL;<br />
Intel(R) Xeon(R) CPU X5650 @ 2.67GHz<br />
| align="center" | 24<br />
| align="center" | 6<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 850GB<br />
|-<br />
| c0024<br />
| align="center" | 124 GB<br />
| align="left" | Silicon Mechanics R308.v6/X10DRL-i;<br />
Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz<br />
| align="center" | 56<br />
| align="center" | 14<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 850GB<br />
|-<br />
| c0029<br />
| align="center" | 124 GB<br />
| align="left" | PowerEdge R430/03XKDV;<br />
Intel(R) Xeon(R) CPU E5-2640 v3 @ 2.60GHz<br />
| align="center" | 32<br />
| align="center" | 8<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 850GB<br />
|-<br />
| c0030<br />
| align="center" | 96 GB<br />
| align="left" | PowerEdge R430/03XKDV;<br />
Intel(R) Xeon(R) CPU E5-2660 v3 @ 2.60GHz<br />
| align="center" | 40<br />
| align="center" | 10<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 850GB<br />
|-<br />
| c0031<br />
| align="center" | 48 GB<br />
| align="left" | Dell Inc. PowerEdge R420/0CN7CM;<br />
Intel(R) Xeon(R) CPU E5-2470 0 @ 2.30GHz<br />
| align="center" | 32<br />
| align="center" | 8<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 1.5TB<br />
|-<br />
| c00[25,37-38,46-47]<br />
| align="center" | 124 GB<br />
| align="left" | Silicon Mechanics R308.v5/X10DRL-i; <br />
Intel Xeon(R) CPU E5-2660 v3 @ 2.60GHz <br />
| align="center" | 40<br />
| align="center" | 10<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 1.5TB<br />
|-<br />
| c0039<br />
| align="center" | 96 GB<br />
| align="left" | Silicon Mechanics R308.v5/X10DRL-i;<br />
Intel(R) Xeon(R) CPU E5-2660 v3 @ 2.60GHz<br />
| align="center" | 40<br />
| align="center" | 10<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 1.5TB<br />
|-<br />
| c0040<br />
| align="center" | 126 GB<br />
| align="left" | Silicon Mechanics Rackform_R353.v6/X10DGQ;<br />
Intel(R) Xeon(R) CPU E5-2680 v4 @ 2.40GHz<br />
| align="center" | 56<br />
| align="center" | 14<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 180GB<br />
|-<br />
| c00[41-43]<br />
| align="center" | 24GB<br />
| align="left" | Dell Inc. PowerEdge R410/01V648;<br />
Intel(R) Xeon(R) CPU X5667 @ 3.07GHz<br />
| align="center" | 16<br />
| align="center" | 4<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 200GB<br />
|-<br />
| c00[44-45]<br />
| align="center" | 48GB<br />
| align="left" | Dell Inc. PowerEdge R410/01V648;<br />
Intel(R) Xeon(R) CPU X5675 @ 3.07GHz<br />
| align="center" | 24<br />
| align="center" | 6<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 410GB<br />
|-<br />
| c0054<br />
| align="center" | 32GB<br />
| align="left" | Supermicro X9DRG-HF/X9DRG-HF;<br />
Intel(R) Xeon(R) CPU E5-2640 0 @ 2.50GHz<br />
| align="center" | 24<br />
| align="center" | 6<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 200GB<br />
|-<br />
| c0[055,063-100]<br />
| align="center" | 32GB<br />
| align="left" | Supermicro X9DRT/X9DRT<br />
Intel(R) Xeon(R) CPU E5-2630 0 @ 2.30GHz<br />
| align="center" | 12<br />
| align="center" | 6<br />
| align="center" | 2<br />
| align="center" | 1<br />
| align="left" | 800GB<br />
|-<br />
| c00[56-61]<br />
| align="center" | 754GB<br />
| align="left" | Intel S2600WFT/S2600WFT<br />
Intel(R) Xeon(R) Platinum 8160 CPU @ 2.10GHz<br />
| align="center" | 96<br />
| align="center" | 24<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | ~3TB<br />
|-<br />
| c0062<br />
| align="center" | 754GB<br />
| align="left" | Intel S2600WFT/S2600WFT<br />
Intel(R) Xeon(R) Platinum 8260 CPU @ 2.40GHz<br />
| align="center" | 96<br />
| align="center" | 24<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | ~3TB<br />
|}<br />
<br />
https://www.cac.cornell.edu/wiki/index.php?title=Pool_Cluster</div>Jhs43https://www.cac.cornell.edu/wiki/index.php?title=Pool_Cluster&diff=3335Pool Cluster2020-12-04T20:20:30Z<p>Jhs43: /* Partitions */</p>
<hr />
<div>== General Information ==<br />
:* POOL is a joint HPC cluster between two departments: '''Chemical and Biomolecular Engineering''' and '''Chemistry'''<br />
:* PIs are Fernando Escobedo (fe13), Don Koch (dlk15), Yong Joo (ylj2), Robert DiStasio (rad332), and Nandini Ananth (na346)<br />
:* Cluster access is restricted to these CAC groups: fe13_0001, dlk15_0001, ylj2_0001, rad332_0001, na346_0001<br />
:* Head node: '''pool.cac.cornell.edu''' ([[#How To Login|access via ssh]])<br />
:** [https://github.com/openhpc/ohpc/wiki OpenHPC] deployment running Centos 7<br />
:** Scheduler: slurm 18<br />
: <br />
:* Current Cluster Status: [http://pool.cac.cornell.edu/ganglia/ Ganglia].<br />
:* Home directories are provided for each group from 3 file servers.<br />
:** icsefs01 - serves fe13_0001<br />
:** icsefs02 - serves dlk15_0001 and ylj2_0001<br />
:** chemfs01 - serves rad332_0001 and na346_0001<br />
:* Data is generally <tt>'''NOT'''</tt> backed up (check with your PI for details).<br />
:* Please send any questions and report problems to: [mailto:cac-help@cornell.edu cac-help@cornell.edu]<br />
<br />
== How To Login ==<br />
:* To get started, login to the head node <tt>pool.cac.cornell.edu</tt> via ssh.<br />
:* You will be prompted for your [https://www.cac.cornell.edu/services/myacct.aspx CAC account] password<br />
:* If you are unfamiliar with Linux and ssh, we suggest reading the [[Linux Tutorial]] and looking into how to [[Connect to Linux]] before proceeding.<br />
:* NOTE: Users should not run codes on the head node. Users who do so will be notified and have privileges revoked.<br />
<br />
== Hardware and Networking ==<br />
:* The head node has 1.8TB local /scratch disk.<br />
:* Head node and 3 file servers have 10Gb connections.<br />
:* All compute nodes currently have a 1Gb connection.<br />
:* [[Pool Hardware]] technical information.<br />
<br />
== Partitions ==<br />
"Partition" is the term used by slurm for designated groups of compute nodes<br />
<br />
:* ''hyperthreading is turned on in all nodes '''EXCEPT astra''''' <br />
:** where hyperthreading is turned on Slurm considers each core to consist of 2 logical CPUs<br />
:** for astra nodes Slurm considers each core to consist of 1 logical CPU<br />
:* all partitions have a default time of 1 hour and are set to OverSubscribe (per core scheduling vs per node)<br />
''Partitions on the pool cluster:''<br />
<br />
:{| class="wikitable" border="1" cellpadding="4" style="width: auto"<br />
! style="background:#e9e9e9;" | Queue/Partition<br />
! style="background:#e9e9e9;" | Number of nodes<br />
! style="background:#e9e9e9;" | Node Names<br />
! style="background:#e9e9e9;" | Limits<br />
! style="background:#e9e9e9;" | Group Access<br />
|-<br />
| '''common''' (default)<br />
| align="center" | 18<br />
| align="center" | c00[17,19,20,22,29-38,50-53]<br />
| align="center" | walltime limit: 168 hours (i.e. 7 days)<br />
| '''All Groups'''<br />
|-<br />
| '''plato''' <br />
| align="center" | 1<br />
| align="center" | c0009<br />
| align="center" | walltime limit: 168 hours (i.e. 7 days)<br />
| limited access per fe13<br />
|-<br />
| '''fe13''' <br />
| align="center" | 23<br />
| align="center" | c00[01-08,18,21,23,24,40-49,54]<br />
| align="center" | walltime limit: 168 hours (i.e. 7 days)<br />
| fe13_0001<br />
|-<br />
| '''dlk15''' <br />
| align="center" | 7<br />
| align="center" | c00[10-14,25,39]<br />
| align="center" | walltime limit: 168 hours (i.e. 7 days)<br />
| dlk15_0001<br />
|-<br />
| '''ylj2''' <br />
| align="center" | 5<br />
| align="center" | c00[15-16,26-28]<br />
| align="center" | walltime limit: 168 hours (i.e. 7 days)<br />
| ylj2_0001<br />
|-<br />
| '''vega''' <br />
| align="center" | 7<br />
| align="center" | c00[56-62]<br />
| align="center" | walltime limit: 168 hours (i.e. 7 days)<br />
| rad332_0001<br />
|-<br />
| '''astra''' <br />
| align="center" | 39<br />
| align="center" | c0[055,063-100]<br />
| align="center" | walltime limit: 720 hours (i.e. 30 days)<br />
| na346_0001<br />
|-<br />
|}<br />
<br />
== Running Jobs / Slurm Scheduler ==<br />
<br />
'''[[Slurm | CAC's Slurm page]]''' explains what Slurm is and how to use it to run your jobs. Please take the time to read this page, giving special attention to the parts that pertain to the types of jobs you want to run.<br />
:* NOTE: Users should not run codes on the head node. Users who do so will be notified and have privileges revoked. <br />
<pre><br />
A few slurm commands to initially get familiar with:<br />
<br />
sinfo -l<br />
scontrol show nodes<br />
scontrol show partition<br />
<br />
Submit a job: sbatch testjob.sh<br />
Interactive Job: srun -p common --pty /bin/bash<br />
<br />
scontrol show job [job id]<br />
scancel [job id]<br />
<br />
squeue -u userid<br />
</pre><br />
=== Slurm Examples & Tips ===<br />
NOTE: All lines beginning with "#SBATCH" are directives for the scheduler to read. If you want the line ignored (i.e. a comment), you must place 2 "##" at the beginning of your line.<br />
==== Example batch job to run in the partition: common ====<br />
<br />
Example sbatch script to run a job with one task (default) in the 'common' partition (i.e. queue):<br />
<br />
<pre><br />
#!/bin/bash<br />
## -J sets the name of job<br />
#SBATCH -J TestJob<br />
<br />
## -p sets the partition (queue)<br />
#SBATCH -p common<br />
<br />
## 10 min<br />
#SBATCH --time=00:10:00<br />
<br />
## sets the tasks per core (default=2 for hyperthreading: cores are oversubscribed)<br />
## set to 1 if one task by itself is enough to keep a core busy<br />
#SBATCH --ntasks-per-core=1 <br />
<br />
## request 4GB per CPU (may limit # of tasks, depending on total memory)<br />
#SBATCH --mem-per-cpu=4GB<br />
<br />
## define job stdout file<br />
#SBATCH -o testcommon-%j.out<br />
<br />
## define job stderr file<br />
#SBATCH -e testcommon-%j.err<br />
<br />
echo "starting at `date` on `hostname`"<br />
<br />
# Print the Slurm job ID<br />
echo "SLURM_JOB_ID=$SLURM_JOB_ID"<br />
<br />
echo "hello world `hostname`"<br />
<br />
echo "ended at `date` on `hostname`"<br />
exit 0<br />
<br />
</pre><br />
<br />
Submit/Run your job:<br />
<pre><br />
sbatch example.sh<br />
</pre><br />
<br />
View your job:<br />
<pre><br />
scontrol show job <job_id><br />
</pre><br />
<br />
==== Example MPI batch job to run in the partition: common ====<br />
<br />
Example sbatch script to run a job with 60 tasks in the 'common' partition (i.e. queue):<br />
<br />
<pre><br />
#!/bin/bash<br />
## -J sets the name of job<br />
#SBATCH -J TestJob<br />
<br />
## -p sets the partition (queue)<br />
#SBATCH -p common<br />
<br />
## 10 min<br />
#SBATCH --time=00:10:00<br />
<br />
## the number of slots (CPUs) to reserve<br />
#SBATCH -n 60<br />
<br />
## the number of nodes to use (min and max can be set separately)<br />
#SBATCH -N 3<br />
<br />
## typically an MPI job needs exclusive access to nodes for good load balancing<br />
#SBATCH --exclusive<br />
<br />
## don't worry about hyperthreading, Slurm should distribute tasks evenly<br />
##SBATCH --ntasks-per-core=1 <br />
<br />
## define job stdout file<br />
#SBATCH -o testcommon-%j.out<br />
<br />
## define job stderr file<br />
#SBATCH -e testcommon-%j.err<br />
<br />
echo "starting at `date` on `hostname`"<br />
<br />
# Print Slurm job properties<br />
echo "SLURM_JOB_ID = $SLURM_JOB_ID"<br />
echo "SLURM_NTASKS = $SLURM_NTASKS"<br />
echo "SLURM_JOB_NUM_NODES = $SLURM_JOB_NUM_NODES"<br />
echo "SLURM_JOB_NODELIST = $SLURM_JOB_NODELIST"<br />
echo "SLURM_JOB_CPUS_PER_NODE = $SLURM_JOB_CPUS_PER_NODE"<br />
<br />
mpiexec -n $SLURM_NTASKS ./hello_mpi<br />
<br />
echo "ended at `date` on `hostname`"<br />
exit 0<br />
<br />
</pre><br />
<br />
==== To include or exclude specific nodes in your batch script ====<br />
<br />
To run on a specific node only, add the following line to your batch script:<br />
<pre><br />
#SBATCH --nodelist=c0009<br />
</pre><br />
<br />
To include one or more nodes that you specifically want, add the following line to your batch script:<br />
<pre><br />
#SBATCH --nodelist=<node_names_you_want_to_include><br />
<br />
## e.g., to include c0006:<br />
#SBATCH --nodelist=c0006<br />
<br />
## to include c0006 and c0007 (also illustrates shorter syntax):<br />
#SBATCH -w c000[6,7]<br />
</pre><br />
<br />
To exclude one or more nodes, add the following line to your batch script:<br />
<br />
<pre><br />
#SBATCH --exclude=<node_names_you_want_to_exclude><br />
<br />
## e.g., to avoid c0006 through c0008, and c0013:<br />
#SBATCH --exclude=c00[06-08,13]<br />
<br />
## to exclude c0006 (also illustrates shorter syntax):<br />
#SBATCH -x c0006<br />
</pre><br />
<br />
==== Environment variables defined for tasks that are started with srun ====<br />
<br />
If you submit a batch job in which you run the following script with "srun -n $SLURM_NTASKS", you will see how the various environment variables are defined.<br />
<br />
<pre><br />
#!/bin/bash<br />
echo "Hello from `hostname`," \<br />
"$SLURM_CPUS_ON_NODE CPUs are allocated here," \<br />
"I am rank $SLURM_PROCID on node $SLURM_NODEID," \<br />
"my task ID on this node is $SLURM_LOCALID"<br />
</pre><br />
<br />
These variables are not defined in the same useful way in the environments of tasks that are started with mpiexec or mpirun.<br />
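For example, a batch script along these lines launches the snippet above (saved here as env_report.sh, an illustrative name; make it executable with chmod +x first) on 4 tasks:<br />
<pre><br />
#!/bin/bash<br />
#SBATCH -J EnvTest<br />
#SBATCH -p common<br />
#SBATCH -n 4<br />
#SBATCH --time=00:05:00<br />
<br />
## each task prints its Slurm rank and node information<br />
srun -n $SLURM_NTASKS ./env_report.sh<br />
</pre><br />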
<br />
==== Use $HOME within your script rather than the full path to your home directory ====<br />
In order to access files in your home directory, you should use $HOME rather than the full path. <br />
To test, you could add to your batch script:<br />
<pre><br />
echo "my home dir is $HOME"<br />
</pre><br />
Then view the output file you set in your batch script to get the result.<br />
<br />
<br />
==== Copy your data to /tmp to avoid heavy I/O from your nfs mounted $HOME !!! ====<br />
* We cannot stress enough how important this is to avoid delays on the file systems.<br />
<br />
<pre><br />
#!/bin/bash<br />
## -J sets the name of job<br />
#SBATCH -J TestJob<br />
<br />
## -p sets the partition (queue)<br />
#SBATCH -p common<br />
## time is HH:MM:SS<br />
#SBATCH --time=00:01:30<br />
#SBATCH --cpus-per-task=15<br />
<br />
## define job stdout file<br />
#SBATCH -o testcommon-%j.out<br />
<br />
## define job stderr file<br />
#SBATCH -e testcommon-%j.err<br />
<br />
echo "starting $SLURM_JOBID at `date` on `hostname`"<br />
echo "my home dir is $HOME" <br />
<br />
## copying my data to a local tmp space on the compute node to reduce I/O<br />
MYTMP=/tmp/$USER/$SLURM_JOB_ID<br />
/usr/bin/mkdir -p $MYTMP || exit $?<br />
echo "Copying my data over..."<br />
cp -rp $SLURM_SUBMIT_DIR/mydatadir $MYTMP || exit $?<br />
<br />
## run your job executables here...<br />
<br />
echo "ended at `date` on `hostname`"<br />
echo "copy your data back to your $HOME" <br />
/usr/bin/mkdir -p $SLURM_SUBMIT_DIR/newdatadir || exit $?<br />
cp -rp $MYTMP $SLURM_SUBMIT_DIR/newdatadir || exit $?<br />
## remove your data from the compute node /tmp space<br />
rm -rf $MYTMP <br />
<br />
exit 0<br />
<br />
</pre><br />
<br />
Explanation: /tmp refers to a local directory that is found on each compute node. It is faster to use /tmp because when you read and write to it, the I/O does not have to go across the network, and it does not have to compete with the other users of a shared network drive (such as the one that holds everyone's /home).<br />
<br />
To look at files in /tmp while your job is running, you can ssh to the login node, then do a further ssh to the compute node that you were assigned. Then you can cd to /tmp on that node and inspect the files in there with <code>cat</code> or <code>less</code>.<br />
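A sketch of that inspection path (the compute node name and file names are illustrative):<br />
<pre><br />
ssh <user>@pool.cac.cornell.edu<br />
ssh c0017          ## the compute node your job was assigned<br />
cd /tmp/$USER/<job_id><br />
ls -l<br />
less some_output_file<br />
</pre><br />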
<br />
Note, if your application is producing 1000's of output files that you need to save, then it is far more efficient to put them all into a single tar or zip file before copying them into $HOME as the final step.<br />
<br />
==Software==<br />
=== LMOD Module System ===<br />
The 'lmod module' system is implemented to list and load software. Loading software via the module command will put you in the software environment requested. <br />
:For more information, type: '''module help'''<br />
:To list the available software type: '''module avail''' <br />
:To get a more complete listing, type: '''module spider'''<br />
:Software listed with "(L)" indicates it is already loaded. <br />
EXAMPLE:<br />
To be sure you are using the environment setup for gromacs, you would type:<br />
<pre><br />
module load gromacs/2019.1<br />
module list (you will see gromacs (L) to show it is loaded)<br />
* when done with gromacs, either logout and log back in or type:<br />
module unload gromacs/2019.1<br />
</pre><br />
<br />
You can create your own modules and place them in your $HOME. <br />
Once created, type:<br />
module use $HOME/path_to_personal/modulefiles<br />
This will prepend the path to $MODULEPATH<br />
[type '''echo $MODULEPATH''' to confirm]<br />
<br />
Reference: [http://lmod.readthedocs.io/en/latest/020_advanced.html User Created Modules]<br />
<br />
===Intel Compilers and Tools===<br />
The following Intel compilers are installed on the pool cluster<br />
* Intel Compiler 2020 - default (2020.4.304)<br />
* Intel Compiler 2019 (2019.2.187)<br />
* Intel MPI (2019.9.304)<br />
<br />
By default, GNU 8 compilers and OpenMPI are selected, but you can use any combination of compiler and MPI:<br />
<br />
To switch from GNU8/OpenMPI environment to the Intel environment:<br />
<pre><br />
-bash-4.2$ module list<br />
<br />
Currently Loaded Modules:<br />
1) autotools 2) prun/1.3 3) gnu8/8.3.0 4) openmpi3/3.1.4 5) ohpc<br />
<br />
<br />
-bash-4.2$ module swap gnu8 intel<br />
<br />
Due to MODULEPATH changes, the following have been reloaded:<br />
1) openmpi3/3.1.4<br />
<br />
-bash-4.2$ module swap openmpi3 impi<br />
-bash-4.2$ module list<br />
<br />
Currently Loaded Modules:<br />
1) autotools 3) ohpc 5) impi/2019.9.304<br />
2) prun/1.3 4) intel/2020.4.304<br />
</pre><br />
<br />
=== Build software from source into your home directory ($HOME) ===<br />
:* It is usually possible to install software in your home directory $HOME.<br />
:* List installed software via rpms: '''rpm -qa'''. Use grep to search for specific software: rpm -qa | grep sw_name [i.e. rpm -qa | grep perl ]<br />
<pre><br />
* download and extract your source<br />
* cd to your extracted source directory<br />
./configure --prefix=$HOME/appdir<br />
[You need to refer to your source documentation to get the full list of options you can provide 'configure' with.]<br />
make<br />
make install<br />
<br />
The binary would then be located in ~/appdir/bin. <br />
* Add the following to your $HOME/.bashrc: <br />
export PATH="$HOME/appdir/bin:$PATH"<br />
* Reload the .bashrc file with source ~/.bashrc. (or logout and log back in)<br />
</pre></div>Jhs43https://www.cac.cornell.edu/wiki/index.php?title=Pool_Cluster&diff=3334Pool Cluster2020-12-03T21:34:40Z<p>Jhs43: /* Partitions */</p>
<hr />
<div>== General Information ==<br />
:* POOL is a joint HPC cluster between two departments: '''Chemical and Biomolecular Engineering''' and '''Chemistry'''<br />
:* PIs are Fernando Escobedo (fe13), Don Koch (dlk15), Yong Joo (ylj2), Robert DiStasio (rad332), and Nandini Ananth (na346)<br />
:* Cluster access is restricted to these CAC groups: fe13_0001, dlk15_0001, ylj2_0001, rad332_0001, na346_0001<br />
:* Head node: '''pool.cac.cornell.edu''' ([[#How To Login|access via ssh]])<br />
:** [https://github.com/openhpc/ohpc/wiki OpenHPC] deployment running Centos 7<br />
:** Scheduler: slurm 18<br />
: <br />
:* Current Cluster Status: [http://pool.cac.cornell.edu/ganglia/ Ganglia].<br />
:* Home directories are provided for each group from 3 file servers.<br />
:** icsefs01 - serves fe13_0001<br />
:** icsefs02 - serves dlk15_0001 and ylj2_0001<br />
:** chemfs01 - serves rad332_0001 and na346_0001<br />
:* Data is generally <tt>'''NOT'''</tt> backed up (check with your PI for details).<br />
:* Please send any questions and report problems to: [mailto:cac-help@cornell.edu cac-help@cornell.edu]<br />
<br />
== How To Login ==<br />
:* To get started, login to the head node <tt>pool.cac.cornell.edu</tt> via ssh.<br />
:* You will be prompted for your [https://www.cac.cornell.edu/services/myacct.aspx CAC account] password<br />
:* If you are unfamiliar with Linux and ssh, we suggest reading the [[Linux Tutorial]] and looking into how to [[Connect to Linux]] before proceeding.<br />
:* NOTE: Users should not run codes on the head node. Users who do so will be notified and have privileges revoked.<br />
<br />
== Hardware and Networking ==<br />
:* The head node has 1.8TB local /scratch disk.<br />
:* Head node and 3 file servers have 10Gb connections.<br />
:* All compute nodes currently have a 1Gb connection.<br />
:* [[Pool Hardware]] technical information.<br />
<br />
== Partitions ==<br />
"Partition" is the term used by slurm for designated groups of compute nodes<br />
<br />
:* ''hyperthreading is turned on in all nodes '''EXCEPT astra''''' <br />
:** where hyperthreading is turned on Slurm considers each core to consist of 2 logical CPUs<br />
:** for astra nodes Slurm considers each core to consist of 1 logical CPU<br />
:* all partitions have a default time of 1 hour and are set to OverSubscribe (per core scheduling vs per node)<br />
''Partitions on the pool cluster:''<br />
<br />
:{| class="wikitable" border="1" cellpadding="4" style="width: auto"<br />
! style="background:#e9e9e9;" | Queue/Partition<br />
! style="background:#e9e9e9;" | Number of nodes<br />
! style="background:#e9e9e9;" | Node Names<br />
! style="background:#e9e9e9;" | Limits<br />
! style="background:#e9e9e9;" | Group Access<br />
|-<br />
| '''common''' (default)<br />
| align="center" | 18<br />
| align="center" | c00[17,19,20,22,29-38,50-53]<br />
| align="center" | walltime limit: 168 hours (i.e. 7 days)<br />
| '''All Groups'''<br />
|-<br />
| '''plato''' <br />
| align="center" | 1<br />
| align="center" | c0009<br />
| align="center" | walltime limit: 168 hours (i.e. 7 days)<br />
| limited access per fe13<br />
|-<br />
| '''fe13''' <br />
| align="center" | 23<br />
| align="center" | c00[01-08,18,21,23,24,40-49,54]<br />
| align="center" | walltime limit: 168 hours (i.e. 7 days)<br />
| fe13_0001<br />
|-<br />
| '''dlk15''' <br />
| align="center" | 7<br />
| align="center" | c00[10-14,25,39]<br />
| align="center" | walltime limit: 168 hours (i.e. 7 days)<br />
| dlk15_0001<br />
|-<br />
| '''ylj2''' <br />
| align="center" | 5<br />
| align="center" | c00[15-16,26-28]<br />
| align="center" | walltime limit: 168 hours (i.e. 7 days)<br />
| ylj2_0001<br />
|-<br />
| '''vega''' <br />
| align="center" | 7<br />
| align="center" | c00[56-62]<br />
| align="center" | walltime limit: 168 hours (i.e. 7 days)<br />
| rad332_0001<br />
|-<br />
| '''astra''' <br />
| align="center" | 39<br />
| align="center" | c0[055,063-100]<br />
| align="center" | walltime limit: 168 hours (i.e. 7 days)<br />
| na346_0001<br />
|-<br />
|}<br />
<br />
== Running Jobs / Slurm Scheduler ==<br />
<br />
'''[[Slurm | CAC's Slurm page]]''' explains what Slurm is and how to use it to run your jobs. Please take the time to read this page, giving special attention to the parts that pertain to the types of jobs you want to run.<br />
:* NOTE: Users should not run codes on the head node. Users who do so will be notified and have privileges revoked. <br />
<pre><br />
A few slurm commands to initially get familiar with:<br />
<br />
sinfo -l<br />
scontrol show nodes<br />
scontrol show partition<br />
<br />
Submit a job: sbatch testjob.sh<br />
Interactive Job: srun -p common --pty /bin/bash<br />
<br />
scontrol show job [job id]<br />
scancel [job id]<br />
<br />
squeue -u userid<br />
</pre><br />
=== Slurm Examples & Tips ===<br />
NOTE: All lines beginning with "#SBATCH" are directives for the scheduler to read. If you want the line ignored (i.e. a comment), you must place 2 "##" at the beginning of your line.<br />
==== Example batch job to run in the partition: common ====<br />
<br />
Example sbatch script to run a job with one task (default) in the 'common' partition (i.e. queue):<br />
<br />
<pre><br />
#!/bin/bash<br />
## -J sets the name of job<br />
#SBATCH -J TestJob<br />
<br />
## -p sets the partition (queue)<br />
#SBATCH -p common<br />
<br />
## 10 min<br />
#SBATCH --time=00:10:00<br />
<br />
## sets the tasks per core (default=2 for hyperthreading: cores are oversubscribed)<br />
## set to 1 if one task by itself is enough to keep a core busy<br />
#SBATCH --ntasks-per-core=1 <br />
<br />
## request 4GB per CPU (may limit # of tasks, depending on total memory)<br />
#SBATCH --mem-per-cpu=4GB<br />
<br />
## define job stdout file<br />
#SBATCH -o testcommon-%j.out<br />
<br />
## define job stderr file<br />
#SBATCH -e testcommon-%j.err<br />
<br />
echo "starting at `date` on `hostname`"<br />
<br />
# Print the Slurm job ID<br />
echo "SLURM_JOB_ID=$SLURM_JOB_ID"<br />
<br />
echo "hello world `hostname`"<br />
<br />
echo "ended at `date` on `hostname`"<br />
exit 0<br />
<br />
</pre><br />
<br />
Submit/Run your job:<br />
<pre><br />
sbatch example.sh<br />
</pre><br />
<br />
View your job:<br />
<pre><br />
scontrol show job <job_id><br />
</pre><br />
<br />
==== Example MPI batch job to run in the partition: common ====<br />
<br />
Example sbatch script to run a job with 60 tasks in the 'common' partition (i.e. queue):<br />
<br />
<pre><br />
#!/bin/bash<br />
## -J sets the name of job<br />
#SBATCH -J TestJob<br />
<br />
## -p sets the partition (queue)<br />
#SBATCH -p common<br />
<br />
## 10 min<br />
#SBATCH --time=00:10:00<br />
<br />
## the number of slots (CPUs) to reserve<br />
#SBATCH -n 60<br />
<br />
## the number of nodes to use (min and max can be set separately)<br />
#SBATCH -N 3<br />
<br />
## typically an MPI job needs exclusive access to nodes for good load balancing<br />
#SBATCH --exclusive<br />
<br />
## don't worry about hyperthreading, Slurm should distribute tasks evenly<br />
##SBATCH --ntasks-per-core=1 <br />
<br />
## define job stdout file<br />
#SBATCH -o testcommon-%j.out<br />
<br />
## define job stderr file<br />
#SBATCH -e testcommon-%j.err<br />
<br />
echo "starting at `date` on `hostname`"<br />
<br />
# Print Slurm job properties<br />
echo "SLURM_JOB_ID = $SLURM_JOB_ID"<br />
echo "SLURM_NTASKS = $SLURM_NTASKS"<br />
echo "SLURM_JOB_NUM_NODES = $SLURM_JOB_NUM_NODES"<br />
echo "SLURM_JOB_NODELIST = $SLURM_JOB_NODELIST"<br />
echo "SLURM_JOB_CPUS_PER_NODE = $SLURM_JOB_CPUS_PER_NODE"<br />
<br />
mpiexec -n $SLURM_NTASKS ./hello_mpi<br />
<br />
echo "ended at `date` on `hostname`"<br />
exit 0<br />
<br />
</pre><br />
<br />
==== To include or exclude specific nodes in your batch script ====<br />
<br />
To run on a specific node only, add the following line to your batch script:<br />
<pre><br />
#SBATCH --nodelist=c0009<br />
</pre><br />
<br />
To include one or more nodes that you specifically want, add the following line to your batch script:<br />
<pre><br />
#SBATCH --nodelist=<node_names_you_want_to_include><br />
<br />
## e.g., to include c0006:<br />
#SBATCH --nodelist=c0006<br />
<br />
## to include c0006 and c0007 (also illustrates shorter syntax):<br />
#SBATCH -w c000[6,7]<br />
</pre><br />
<br />
To exclude one or more nodes, add the following line to your batch script:<br />
<br />
<pre><br />
#SBATCH --exclude=<node_names_you_want_to_exclude><br />
<br />
## e.g., to avoid c0006 through c0008, and c0013:<br />
#SBATCH --exclude=c00[06-08,13]<br />
<br />
## to exclude c0006 (also illustrates shorter syntax):<br />
#SBATCH -x c0006<br />
</pre><br />
<br />
==== Environment variables defined for tasks that are started with srun ====<br />
<br />
If you submit a batch job in which you run the following script with "srun -n $SLURM_NTASKS", you will see how the various environment variables are defined.<br />
<br />
<pre><br />
#!/bin/bash<br />
echo "Hello from `hostname`," \<br />
"$SLURM_CPUS_ON_NODE CPUs are allocated here," \<br />
"I am rank $SLURM_PROCID on node $SLURM_NODEID," \<br />
"my task ID on this node is $SLURM_LOCALID"<br />
</pre><br />
<br />
These variables are not defined in the same useful way in the environments of tasks that are started with mpiexec or mpirun.<br />
<br />
==== Use $HOME within your script rather than the full path to your home directory ====<br />
In order to access files in your home directory, you should use $HOME rather than the full path. <br />
To test, you could add to your batch script:<br />
<pre><br />
echo "my home dir is $HOME"<br />
</pre><br />
Then view the output file you set in your batch script to get the result.<br />
<br />
<br />
==== Copy your data to /tmp to avoid heavy I/O from your nfs mounted $HOME !!! ====<br />
* We cannot stress enough how important this is to avoid delays on the file systems.<br />
<br />
<pre><br />
#!/bin/bash<br />
## -J sets the name of job<br />
#SBATCH -J TestJob<br />
<br />
## -p sets the partition (queue)<br />
#SBATCH -p common<br />
## time is HH:MM:SS<br />
#SBATCH --time=00:01:30<br />
#SBATCH --cpus-per-task=15<br />
<br />
## define job stdout file<br />
#SBATCH -o testcommon-%j.out<br />
<br />
## define job stderr file<br />
#SBATCH -e testcommon-%j.err<br />
<br />
echo "starting $SLURM_JOBID at `date` on `hostname`"<br />
echo "my home dir is $HOME" <br />
<br />
## copying my data to a local tmp space on the compute node to reduce I/O<br />
MYTMP=/tmp/$USER/$SLURM_JOB_ID<br />
/usr/bin/mkdir -p $MYTMP || exit $?<br />
echo "Copying my data over..."<br />
cp -rp $SLURM_SUBMIT_DIR/mydatadir $MYTMP || exit $?<br />
<br />
## run your job executables here...<br />
<br />
echo "ended at `date` on `hostname`"<br />
echo "copy your data back to your $HOME" <br />
/usr/bin/mkdir -p $SLURM_SUBMIT_DIR/newdatadir || exit $?<br />
cp -rp $MYTMP $SLURM_SUBMIT_DIR/newdatadir || exit $?<br />
## remove your data from the compute node /tmp space<br />
rm -rf $MYTMP <br />
<br />
exit 0<br />
<br />
</pre><br />
<br />
Explanation: /tmp refers to a local directory that is found on each compute node. It is faster to use /tmp because when you read and write to it, the I/O does not have to go across the network, and it does not have to compete with the other users of a shared network drive (such as the one that holds everyone's /home).<br />
<br />
To look at files in /tmp while your job is running, you can ssh to the login node, then do a further ssh to the compute node that you were assigned. Then you can cd to /tmp on that node and inspect the files in there with <code>cat</code> or <code>less</code>.<br />
<br />
Note, if your application is producing 1000's of output files that you need to save, then it is far more efficient to put them all into a single tar or zip file before copying them into $HOME as the final step.<br />
<br />
==Software==<br />
=== LMOD Module System ===<br />
The 'lmod module' system is implemented to list and load software. Loading software via the module command will put you in the software environment requested. <br />
:For more information, type: '''module help'''<br />
:To list the available software type: '''module avail''' <br />
:To get a more complete listing, type: '''module spider'''<br />
:Software listed with "(L)" indicates it is already loaded. <br />
EXAMPLE:<br />
To be sure you are using the environment setup for gromacs, you would type:<br />
<pre><br />
module load gromacs/2019.1<br />
module list (you will see gromacs (L) to show it is loaded)<br />
* when done with gromacs, either logout and log back in or type:<br />
module unload gromacs/2019.1<br />
</pre><br />
<br />
You can create your own modules and place them in your $HOME. <br />
Once created, type:<br />
module use $HOME/path_to_personal/modulefiles<br />
This will prepend the path to $MODULEPATH<br />
[type '''echo $MODULEPATH''' to confirm]<br />
<br />
Reference: [http://lmod.readthedocs.io/en/latest/020_advanced.html User Created Modules]<br />
<br />
===Intel Compilers and Tools===<br />
The following Intel compilers are installed on the pool cluster<br />
* Intel Compiler 2020 - default (2020.4.304)<br />
* Intel Compiler 2019 (2019.2.187)<br />
* Intel MPI (2019.9.304)<br />
<br />
By default, GNU 8 compilers and OpenMPI are selected, but you can use any combination of compiler and MPI:<br />
<br />
To switch from GNU8/OpenMPI environment to the Intel environment:<br />
<pre><br />
-bash-4.2$ module list<br />
<br />
Currently Loaded Modules:<br />
1) autotools 2) prun/1.3 3) gnu8/8.3.0 4) openmpi3/3.1.4 5) ohpc<br />
<br />
<br />
-bash-4.2$ module swap gnu8 intel<br />
<br />
Due to MODULEPATH changes, the following have been reloaded:<br />
1) openmpi3/3.1.4<br />
<br />
-bash-4.2$ module swap openmpi3 impi<br />
-bash-4.2$ module list<br />
<br />
Currently Loaded Modules:<br />
1) autotools 3) ohpc 5) impi/2019.9.304<br />
2) prun/1.3 4) intel/2020.4.304<br />
</pre><br />
<br />
=== Build software from source into your home directory ($HOME) ===<br />
:* It is usually possible to install software in your home directory $HOME.<br />
:* List installed software via rpms: '''rpm -qa'''. Use grep to search for specific software: rpm -qa | grep sw_name [i.e. rpm -qa | grep perl ]<br />
<pre><br />
* download and extract your source<br />
* cd to your extracted source directory<br />
./configure --prefix=$HOME/appdir<br />
[You need to refer to your source documentation to get the full list of options you can provide 'configure' with.]<br />
make<br />
make install<br />
<br />
The binary would then be located in ~/appdir/bin. <br />
* Add the following to your $HOME/.bashrc: <br />
export PATH="$HOME/appdir/bin:$PATH"<br />
* Reload the .bashrc file with source ~/.bashrc. (or logout and log back in)<br />
</pre></div>Jhs43https://www.cac.cornell.edu/wiki/index.php?title=Pool_Cluster&diff=3333Pool Cluster2020-12-03T21:33:38Z<p>Jhs43: /* Partitions */</p>
<hr />
<div>== General Information ==<br />
:* POOL is a joint HPC cluster between two departments: '''Chemical and Biomolecular Engineering''' and '''Chemistry'''<br />
:* PIs are Fernando Escobedo (fe13), Don Koch (dlk15), Yong Joo (ylj2), Robert DiStasio (rad332), and Nandini Ananth (na346)<br />
:* Cluster access is restricted to these CAC groups: fe13_0001, dlk15_0001, ylj2_0001, rad332_0001, na346_0001<br />
:* Head node: '''pool.cac.cornell.edu''' ([[#How To Login|access via ssh]])<br />
:** [https://github.com/openhpc/ohpc/wiki OpenHPC] deployment running Centos 7<br />
:** Scheduler: slurm 18<br />
: <br />
:* Current Cluster Status: [http://pool.cac.cornell.edu/ganglia/ Ganglia].<br />
:* Home directories are provided for each group from 3 file servers.<br />
:** icsefs01 - serves fe13_0001<br />
:** icsefs02 - serves dlk15_0001 and ylj2_0001<br />
:** chemfs01 - serves rad332_0001 and na346_0001<br />
:* Data is generally <tt>'''NOT'''</tt> backed up (check with your PI for details).<br />
:* Please send any questions and report problems to: [mailto:cac-help@cornell.edu cac-help@cornell.edu]<br />
<br />
== How To Login ==<br />
:* To get started, login to the head node <tt>pool.cac.cornell.edu</tt> via ssh.<br />
:* You will be prompted for your [https://www.cac.cornell.edu/services/myacct.aspx CAC account] password<br />
:* If you are unfamiliar with Linux and ssh, we suggest reading the [[Linux Tutorial]] and looking into how to [[Connect to Linux]] before proceeding.<br />
:* NOTE: Users should not run codes on the head node. Users who do so will be notified and have privileges revoked.<br />
<br />
== Hardware and Networking ==<br />
:* The head node has 1.8TB local /scratch disk.<br />
:* Head node and 3 file servers have 10Gb connections.<br />
:* All compute nodes currently have a 1Gb connection.<br />
:* [[Pool Hardware]] technical information.<br />
<br />
== Partitions ==<br />
"Partition" is the term used by slurm for designated groups of compute nodes<br />
<br />
:* ''hyperthreading is turned on in all nodes '''EXCEPT astra''''' <br />
:** where hyperthreading is turned on Slurm considers each core to consist of 2 logical CPUs<br />
:** for astra nodes Slurm considers each core to consist of 1 logical CPU<br />
:* all partitions have a default time of 1 hour and are set to OverSubscribe (per core scheduling vs per node)<br />
''Partitions on the pool cluster:''<br />
<br />
:{| class="wikitable" border="1" cellpadding="4" style="width: auto"<br />
! style="background:#e9e9e9;" | Queue/Partition<br />
! style="background:#e9e9e9;" | Number of nodes<br />
! style="background:#e9e9e9;" | Node Names<br />
! style="background:#e9e9e9;" | Limits<br />
! style="background:#e9e9e9;" | Group Access<br />
|-<br />
| '''common''' (default)<br />
| align="center" | 18<br />
| align="center" | c00[17,19,20,22,29-38,50-53]<br />
| align="center" | walltime limit: 168 hours (i.e. 7 days)<br />
| '''All Groups'''<br />
|-<br />
| '''plato''' <br />
| align="center" | 1<br />
| align="center" | c0009<br />
| align="center" | walltime limit: 168 hours (i.e. 7 days)<br />
| limited access per fe13<br />
|-<br />
| '''fe13''' <br />
| align="center" | 23<br />
| align="center" | c00[01-08,18,21,23,24,40-49,54]<br />
| align="center" | walltime limit: 168 hours (i.e. 7 days)<br />
| fe13_0001<br />
|-<br />
| '''dlk15''' <br />
| align="center" | 7<br />
| align="center" | c00[10-14,25,39]<br />
| align="center" | walltime limit: 168 hours (i.e. 7 days)<br />
| dlk15_0001<br />
|-<br />
| '''ylj2''' <br />
| align="center" | 5<br />
| align="center" | c00[15-16,26-28]<br />
| align="center" | walltime limit: 168 hours (i.e. 7 days)<br />
| ylj2_0001<br />
|-<br />
| '''vega''' <br />
| align="center" | 7<br />
| align="center" | c00[56-62]<br />
| align="center" | walltime limit: 168 hours (i.e. 7 days)<br />
| rad332_0001<br />
|-<br />
| '''astra''' <br />
| align="center" | 40<br />
| align="center" | c0[055,063-100]<br />
| align="center" | walltime limit: 168 hours (i.e. 7 days)<br />
| na346_0001<br />
|-<br />
|}<br />
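To see, from the head node, which of these partitions you can submit to, along with their time limits and node lists, one option (a sketch using sinfo's built-in format specifiers) is:<br />
<pre><br />
# %P = partition, %l = time limit, %D = node count, %N = node list<br />
sinfo -o "%P %l %D %N"<br />
</pre><br />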
<br />
== Running Jobs / Slurm Scheduler ==<br />
<br />
'''[[Slurm | CAC's Slurm page]]''' explains what Slurm is and how to use it to run your jobs. Please take the time to read this page, giving special attention to the parts that pertain to the types of jobs you want to run.<br />
:* NOTE: Users should not run codes on the head node. Users who do so will be notified and have privileges revoked. <br />
<pre><br />
A few slurm commands to initially get familiar with:<br />
<br />
sinfo -l<br />
scontrol show nodes<br />
scontrol show partition<br />
<br />
Submit a job: sbatch testjob.sh<br />
Interactive Job: srun -p common --pty /bin/bash<br />
<br />
scontrol show job [job id]<br />
scancel [job id]<br />
<br />
squeue -u userid<br />
</pre><br />
=== Slurm Examples & Tips ===<br />
NOTE: All lines beginning with "#SBATCH" are directives for the scheduler to read. If you want such a line ignored (i.e. treated as a comment), place two "#" characters ("##") at the beginning of the line, as shown in the fragment below.<br />
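For instance, in this fragment the first line is an active directive, while the second is ignored by the scheduler and treated as a plain comment:<br />
<pre><br />
#SBATCH --time=00:10:00<br />
##SBATCH --time=02:00:00<br />
</pre><br />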
==== Example batch job to run in the partition: common ====<br />
<br />
Example sbatch script to run a job with one task (default) in the 'common' partition (i.e. queue):<br />
<br />
<pre><br />
#!/bin/bash<br />
## -J sets the name of job<br />
#SBATCH -J TestJob<br />
<br />
## -p sets the partition (queue)<br />
#SBATCH -p common<br />
<br />
## 10 min<br />
#SBATCH --time=00:10:00<br />
<br />
## sets the tasks per core (default=2 for hyperthreading: cores are oversubscribed)<br />
## set to 1 if one task by itself is enough to keep a core busy<br />
#SBATCH --ntasks-per-core=1 <br />
<br />
## request 4GB per CPU (may limit # of tasks, depending on total memory)<br />
#SBATCH --mem-per-cpu=4GB<br />
<br />
## define job stdout file<br />
#SBATCH -o testcommon-%j.out<br />
<br />
## define job stderr file<br />
#SBATCH -e testcommon-%j.err<br />
<br />
echo "starting at `date` on `hostname`"<br />
<br />
# Print the Slurm job ID<br />
echo "SLURM_JOB_ID=$SLURM_JOB_ID"<br />
<br />
echo "hello world `hostname`"<br />
<br />
echo "ended at `date` on `hostname`"<br />
exit 0<br />
<br />
</pre><br />
<br />
Submit/Run your job:<br />
<pre><br />
sbatch example.sh<br />
</pre><br />
<br />
View your job:<br />
<pre><br />
scontrol show job <job_id><br />
</pre><br />
<br />
==== Example MPI batch job to run in the partition: common ====<br />
<br />
Example sbatch script to run a job with 60 tasks in the 'common' partition (i.e. queue):<br />
<br />
<pre><br />
#!/bin/bash<br />
## -J sets the name of job<br />
#SBATCH -J TestJob<br />
<br />
## -p sets the partition (queue)<br />
#SBATCH -p common<br />
<br />
## 10 min<br />
#SBATCH --time=00:10:00<br />
<br />
## the number of slots (CPUs) to reserve<br />
#SBATCH -n 60<br />
<br />
## the number of nodes to use (min and max can be set separately)<br />
#SBATCH -N 3<br />
<br />
## typically an MPI job needs exclusive access to nodes for good load balancing<br />
#SBATCH --exclusive<br />
<br />
## don't worry about hyperthreading, Slurm should distribute tasks evenly<br />
##SBATCH --ntasks-per-core=1 <br />
<br />
## define job stdout file<br />
#SBATCH -o testcommon-%j.out<br />
<br />
## define job stderr file<br />
#SBATCH -e testcommon-%j.err<br />
<br />
echo "starting at `date` on `hostname`"<br />
<br />
# Print Slurm job properties<br />
echo "SLURM_JOB_ID = $SLURM_JOB_ID"<br />
echo "SLURM_NTASKS = $SLURM_NTASKS"<br />
echo "SLURM_JOB_NUM_NODES = $SLURM_JOB_NUM_NODES"<br />
echo "SLURM_JOB_NODELIST = $SLURM_JOB_NODELIST"<br />
echo "SLURM_JOB_CPUS_PER_NODE = $SLURM_JOB_CPUS_PER_NODE"<br />
<br />
mpiexec -n $SLURM_NTASKS ./hello_mpi<br />
<br />
echo "ended at `date` on `hostname`"<br />
exit 0<br />
<br />
</pre><br />
<br />
==== To include or exclude specific nodes in your batch script ====<br />
<br />
To run on a specific node only, add the following line to your batch script:<br />
<pre><br />
#SBATCH --nodelist=c0009<br />
</pre><br />
<br />
To include one or more nodes that you specifically want, add the following line to your batch script:<br />
<pre><br />
#SBATCH --nodelist=<node_names_you_want_to_include><br />
<br />
## e.g., to include c0006:<br />
#SBATCH --nodelist=c0006<br />
<br />
## to include c0006 and c0007 (also illustrates shorter syntax):<br />
#SBATCH -w c000[6,7]<br />
</pre><br />
<br />
To exclude one or more nodes, add the following line to your batch script:<br />
<br />
<pre><br />
#SBATCH -exclude=<node_names_you_want_to_exclude><br />
<br />
## e.g., to avoid c0006 through c0008, and c0013:<br />
#SBATCH -exclude=c00[06-08,13]<br />
<br />
## to exclude c0006 (also illustrates shorter syntax):<br />
#SBATCH -x c0006<br />
</pre><br />
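Note that the same node selection options can also be given on the sbatch command line instead of inside the script, for example:<br />
<pre><br />
sbatch -w c0006 testjob.sh<br />
sbatch -x c00[06-08,13] testjob.sh<br />
</pre><br />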
<br />
==== Environment variables defined for tasks that are started with srun ====<br />
<br />
If you submit a batch job in which you run the following script with "srun -n $SLURM_NTASKS", you will see how the various environment variables are defined.<br />
<br />
<pre><br />
#!/bin/bash<br />
echo "Hello from `hostname`," \<br />
"$SLURM_CPUS_ON_NODE CPUs are allocated here," \<br />
"I am rank $SLURM_PROCID on node $SLURM_NODEID," \<br />
"my task ID on this node is $SLURM_LOCALID"<br />
</pre><br />
<br />
These variables are not defined in the same useful way in the environments of tasks that are started with mpiexec or mpirun.<br />
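As a minimal sketch of how such a script could be launched with srun inside a batch job (assuming the echo script above has been saved as env_report.sh and made executable; the file name is only an illustration):<br />
<pre><br />
#!/bin/bash<br />
#SBATCH -J EnvTest<br />
#SBATCH -p common<br />
#SBATCH -n 4<br />
#SBATCH --time=00:05:00<br />
#SBATCH -o envtest-%j.out<br />
<br />
srun -n $SLURM_NTASKS ./env_report.sh<br />
</pre><br />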
<br />
==== Use $HOME within your script rather than the full path to your home directory ====<br />
To access files in your home directory, use $HOME rather than the full path.<br />
To test, you could add to your batch script:<br />
<pre><br />
echo "my home dir is $HOME"<br />
</pre><br />
Then view the output file you set in your batch script to get the result.<br />
<br />
<br />
==== Copy your data to /tmp to avoid heavy I/O from your nfs mounted $HOME !!! ====<br />
* We cannot stress enough how important this is to avoid delays on the file systems.<br />
<br />
<pre><br />
#!/bin/bash<br />
## -J sets the name of job<br />
#SBATCH -J TestJob<br />
<br />
## -p sets the partition (queue)<br />
#SBATCH -p common<br />
## time is HH:MM:SS<br />
#SBATCH --time=00:01:30<br />
#SBATCH --cpus-per-task=15<br />
<br />
## define job stdout file<br />
#SBATCH -o testcommon-%j.out<br />
<br />
## define job stderr file<br />
#SBATCH -e testcommon-%j.err<br />
<br />
echo "starting $SLURM_JOBID at `date` on `hostname`"<br />
echo "my home dir is $HOME" <br />
<br />
## copying my data to a local tmp space on the compute node to reduce I/O<br />
MYTMP=/tmp/$USER/$SLURM_JOB_ID<br />
/usr/bin/mkdir -p $MYTMP || exit $?<br />
echo "Copying my data over..."<br />
cp -rp $SLURM_SUBMIT_DIR/mydatadir $MYTMP || exit $?<br />
<br />
## run your job executables here...<br />
<br />
echo "ended at `date` on `hostname`"<br />
echo "copy your data back to your $HOME" <br />
/usr/bin/mkdir -p $SLURM_SUBMIT_DIR/newdatadir || exit $?<br />
cp -rp $MYTMP $SLURM_SUBMIT_DIR/newdatadir || exit $?<br />
## remove your data from the compute node /tmp space<br />
rm -rf $MYTMP <br />
<br />
exit 0<br />
<br />
</pre><br />
<br />
Explanation: /tmp refers to a local directory that is found on each compute node. It is faster to use /tmp because when you read and write to it, the I/O does not have to go across the network, and it does not have to compete with the other users of a shared network drive (such as the one that holds everyone's /home).<br />
<br />
To look at files in /tmp while your job is running, you can ssh to the login node, then do a further ssh to the compute node that you were assigned. Then you can cd to /tmp on that node and inspect the files in there with <code>cat</code> or <code>less</code>.<br />
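For example (the node name and job ID below are placeholders; use the ones reported for your own job):<br />
<pre><br />
# find out which node(s) your job is running on<br />
squeue -u $USER -o "%i %P %N"<br />
<br />
# from the login node, hop onto one of those nodes and look around<br />
ssh c0031<br />
cd /tmp/$USER/<job_id><br />
ls -l<br />
</pre><br />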
<br />
Note: if your application produces thousands of output files that you need to save, it is far more efficient to put them all into a single tar or zip file before copying them into $HOME as the final step.<br />
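One way to do that from inside the batch script, as a sketch building on the /tmp example above (the archive name is arbitrary):<br />
<pre><br />
## bundle everything in $MYTMP into one compressed archive back in the submit directory<br />
cd $MYTMP<br />
tar -czf $SLURM_SUBMIT_DIR/results-$SLURM_JOB_ID.tar.gz .<br />
</pre><br />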
<br />
==Software==<br />
=== LMOD Module System ===<br />
The Lmod module system is used to list and load software. Loading software via the module command puts you in the requested software environment. <br />
:For more information, type: '''module help'''<br />
:To list the available software type: '''module avail''' <br />
:To get a more complete listing, type: '''module spider'''<br />
:Software listed with "(L)" is already loaded. <br />
EXAMPLE:<br />
To be sure you are using the environment setup for gromacs, you would type:<br />
<pre><br />
module load gromacs/2019.1<br />
module list (you will see gromacs (L) to show it is loaded)<br />
* when done with gromacs, either logout and log back in or type:<br />
module unload gromacs/2019.1<br />
</pre><br />
<br />
You can create your own modules and place them in your $HOME. <br />
Once created, type:<br />
 module use $HOME/path_to_personal/modulefiles<br />
This will prepend the path to $MODULEPATH <br />
[type '''echo $MODULEPATH''' to confirm]<br />
<br />
Reference: [http://lmod.readthedocs.io/en/latest/020_advanced.html User Created Modules]<br />
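Putting those steps together, a minimal sketch of setting up a personal modulefile tree might look like the following (the directory layout and the name "myapp" are made up for illustration):<br />
<pre><br />
# one-time setup of a personal modulefile tree<br />
mkdir -p $HOME/modulefiles/myapp<br />
# ...place a modulefile for your application under that directory...<br />
module use $HOME/modulefiles<br />
echo $MODULEPATH        # the personal directory should now appear first<br />
module avail myapp<br />
</pre><br />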
<br />
===Intel Compilers and Tools===<br />
The following Intel compilers and tools are installed on the pool cluster:<br />
* Intel Compiler 2020 - default (2020.4.304)<br />
* Intel Compiler 2019 (2019.2.187)<br />
* Intel MPI (2019.9.304)<br />
<br />
By default, the GNU 8 compilers and OpenMPI are selected, but you can use any combination of compiler and MPI:<br />
<br />
To switch from the GNU8/OpenMPI environment to the Intel environment:<br />
<pre><br />
-bash-4.2$ module list<br />
<br />
Currently Loaded Modules:<br />
1) autotools 2) prun/1.3 3) gnu8/8.3.0 4) openmpi3/3.1.4 5) ohpc<br />
<br />
<br />
-bash-4.2$ module swap gnu8 intel<br />
<br />
Due to MODULEPATH changes, the following have been reloaded:<br />
1) openmpi3/3.1.4<br />
<br />
-bash-4.2$ module swap openmpi3 impi<br />
-bash-4.2$ module list<br />
<br />
Currently Loaded Modules:<br />
1) autotools 3) ohpc 5) impi/2019.9.304<br />
2) prun/1.3 4) intel/2020.4.304<br />
</pre><br />
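With the matching toolchain loaded, the MPI compiler wrappers can be used to build something like the hello_mpi executable referenced in the MPI example above (a sketch; the source file name is an assumption):<br />
<pre><br />
# GNU 8 + OpenMPI (the default environment)<br />
mpicc -O2 -o hello_mpi hello_mpi.c<br />
<br />
# Intel + Intel MPI (after the module swaps shown above)<br />
mpiicc -O2 -o hello_mpi hello_mpi.c<br />
</pre><br />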
<br />
=== Build software from source into your home directory ($HOME) ===<br />
:* It is usually possible to install software in your home directory $HOME.<br />
:* List installed software via rpms: '''rpm -qa'''. Use grep to search for specific software: rpm -qa | grep sw_name [e.g. rpm -qa | grep perl]<br />
<pre><br />
* download and extract your source<br />
* cd to your extracted source directory<br />
./configure --prefix=$HOME/appdir<br />
[Refer to your source documentation for the full list of options you can pass to 'configure'.]<br />
make<br />
make install<br />
<br />
The binary would then be located in ~/appdir/bin. <br />
* Add the following to your $HOME/.bashrc: <br />
export PATH="$HOME/appdir/bin:$PATH"<br />
* Reload the .bashrc file with source ~/.bashrc. (or logout and log back in)<br />
</pre></div>Jhs43https://www.cac.cornell.edu/wiki/index.php?title=Pool_Hardware&diff=3331Pool Hardware2020-12-03T18:38:16Z<p>Jhs43: </p>
<hr />
<div><br />
Hyperthreading is ON for all nodes with the exception of the "astra" partition nodes (c0[055,063-100]).<br />
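One quick way to confirm how Slurm sees a given node's core and thread layout from the head node (the node names here are just examples from the table below):<br />
<pre><br />
scontrol show node c0001 | grep -i threads<br />
scontrol show node c0055 | grep -i threads<br />
</pre><br />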
<br />
<br />
:{| class="wikitable" border="1" cellpadding="5" style="width: auto"<br />
! style="background:#e9e9e9;" | Node Names<br />
! style="background:#e9e9e9;" | Memory per node<br />
! style="background:#e9e9e9;" | Model name<br />
! style="background:#e9e9e9;" | CPU count per node<br />
! style="background:#e9e9e9;" | Core(s) per socket<br />
! style="background:#e9e9e9;" | Sockets<br />
! style="background:#e9e9e9;" | Thread(s) per core<br />
! style= "background:#e9e9e9;" | /tmp size<br />
|-<br />
| c000[1-5]<br />
| align="center" | 64GB<br />
| align="left" | Silicon Mechanics R308.v6/X10DRL-i; <br />
Intel(R) Xeon(R) CPU E5-2660 v3 @ 2.60GHz<br />
| align="center" | 40<br />
| align="center" | 10<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 1.5TB<br />
|-<br />
| c00[06-08]<br />
| align="center" | 124GB<br />
| align="left" | Silicon Mechanics R308.v6/X10DRL-i;<br />
Intel(R) Xeon(R) CPU E5-2660 v3 @ 2.60GHz<br />
| align="center" | 40<br />
| align="center" | 10<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 1.5TB<br />
|-<br />
| c0009<br />
| align="center" | 64GB<br />
| align="left" | Silicon Mechanics R308.v6/X10DRL-i;<br />
Intel(R) Xeon(R) CPU E5-2667 v4 @ 3.20GHz <br />
| align="center" | 32<br />
| align="center" | 8<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 850GB<br />
|-<br />
| c00[10-16,18,26-28,48-49]<br />
| align="center" | 64GB<br />
| align="left" | Silicon Mechanics R308.v5/X10DRL-i; <br />
Intel(R) Xeon(R) CPU E5-2660 v3 @ 2.60GHz<br />
| align="center" | 40<br />
| align="center" | 10<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 1.5TB<br />
|-<br />
| c00[17,20]<br />
| align="center" | 48 GB<br />
| align="left" | Supermicro X8DTL;<br />
Intel(R) Xeon(R) CPU E5630 @ 2.53GHz<br />
| align="center" | 16<br />
| align="center" | 4<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 850GB<br />
|-<br />
| c00[19,22]<br />
| align="center" | 48 GB<br />
| align="left" | Supermicro X8DTL;<br />
Intel(R) Xeon(R) CPU E5640 @ 2.67GHz<br />
| align="center" | 16<br />
| align="center" | 4<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 850GB<br />
|-<br />
| c00[21,23,32-36, 50-53]<br />
| align="center" | 48 GB<br />
| align="left" | Supermicro X8DTL;<br />
Intel(R) Xeon(R) CPU X5650 @ 2.67GHz<br />
| align="center" | 24<br />
| align="center" | 6<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 850GB<br />
|-<br />
| c0024<br />
| align="center" | 124 GB<br />
| align="left" | Silicon Mechanics R308.v6/X10DRL-i;<br />
Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz<br />
| align="center" | 56<br />
| align="center" | 14<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 850GB<br />
|-<br />
| c0029<br />
| align="center" | 124 GB<br />
| align="left" | PowerEdge R430/03XKDV;<br />
Intel(R) Xeon(R) CPU E5-2640 v3 @ 2.60GHz<br />
| align="center" | 32<br />
| align="center" | 8<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 850GB<br />
|-<br />
| c0030<br />
| align="center" | 96 GB<br />
| align="left" | PowerEdge R430/03XKDV;<br />
Intel(R) Xeon(R) CPU E5-2660 v3 @ 2.60GHz<br />
| align="center" | 40<br />
| align="center" | 10<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 850GB<br />
|-<br />
| c0031<br />
| align="center" | 48 GB<br />
| align="left" | Dell Inc. PowerEdge R420/0CN7CM;<br />
Intel(R) Xeon(R) CPU E5-2470 0 @ 2.30GHz<br />
| align="center" | 32<br />
| align="center" | 8<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 1.5TB<br />
|-<br />
| c00[25,37-38,46-47]<br />
| align="center" | 124 GB<br />
| align="left" | Silicon Mechanics R308.v5/X10DRL-i; <br />
Intel Xeon(R) CPU E5-2660 v3 @ 2.60GHz <br />
| align="center" | 40<br />
| align="center" | 10<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 1.5TB<br />
|-<br />
| c0039<br />
| align="center" | 96 GB<br />
| align="left" | Silicon Mechanics R308.v5/X10DRL-i;<br />
Intel(R) Xeon(R) CPU E5-2660 v3 @ 2.60GHz<br />
| align="center" | 40<br />
| align="center" | 10<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 1.5TB<br />
|-<br />
| c0040<br />
| align="center" | 126 GB<br />
| align="left" | Silicon Mechanics Rackform_R353.v6/X10DGQ;<br />
Intel(R) Xeon(R) CPU E5-2680 v4 @ 2.40GHz<br />
| align="center" | 56<br />
| align="center" | 14<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 180GB<br />
|-<br />
| c00[41-43]<br />
| align="center" | 24GB<br />
| align="left" | Dell Inc. PowerEdge R410/01V648;<br />
Intel(R) Xeon(R) CPU X5667 @ 3.07GHz<br />
| align="center" | 16<br />
| align="center" | 4<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 200GB<br />
|-<br />
| c00[44-45]<br />
| align="center" | 48GB<br />
| align="left" | Dell Inc. PowerEdge R410/01V648;<br />
Intel(R) Xeon(R) CPU X5675 @ 3.07GHz<br />
| align="center" | 24<br />
| align="center" | 6<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 410GB<br />
|-<br />
| c0054<br />
| align="center" | 32GB<br />
| align="left" | Supermicro X9DRG-HF/X9DRG-HF;<br />
Intel(R) Xeon(R) CPU E5-2640 0 @ 2.50GHz<br />
| align="center" | 24<br />
| align="center" | 6<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 200GB<br />
|-<br />
| c0[055,063-100]<br />
| align="center" | 32GB<br />
| align="left" | Supermicro X9DRT/X9DRT<br />
Intel(R) Xeon(R) CPU E5-2630 0 @ 2.30GHz<br />
| align="center" | 12<br />
| align="center" | 6<br />
| align="center" | 2<br />
| align="center" | 1<br />
| align="left" | 800GB<br />
|-<br />
| c00[56-61]<br />
| align="center" | 754GB<br />
| align="left" | Intel S2600WFT/S2600WFT<br />
Intel(R) Xeon(R) Platinum 8160 CPU @ 2.10GHz<br />
| align="center" | 96<br />
| align="center" | 24<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | ~3TB<br />
|-<br />
| c0062<br />
| align="center" | 754GB<br />
| align="left" | Intel S2600WFT/S2600WFT<br />
Intel(R) Xeon(R) Platinum 8260 CPU @ 2.40GHz<br />
| align="center" | 96<br />
| align="center" | 24<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | ~3TB<br />
|}<br />
<br />
https://www.cac.cornell.edu/wiki/index.php?title=Pool_Cluster</div>Jhs43
Intel(R) Xeon(R) CPU X5650 @ 2.67GHz<br />
| align="center" | 24<br />
| align="center" | 6<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 850GB<br />
|-<br />
| c0024<br />
| align="center" | 124 GB<br />
| align="left" | Silicon Mechanics R308.v6/X10DRL-i;<br />
Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz<br />
| align="center" | 56<br />
| align="center" | 14<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 850GB<br />
|-<br />
| c0029<br />
| align="center" | 124 GB<br />
| align="left" | PowerEdge R430/03XKDV;<br />
Intel(R) Xeon(R) CPU E5-2640 v3 @ 2.60GHz<br />
| align="center" | 32<br />
| align="center" | 8<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 850GB<br />
|-<br />
| c0030<br />
| align="center" | 96 GB<br />
| align="left" | PowerEdge R430/03XKDV;<br />
Intel(R) Xeon(R) CPU E5-2660 v3 @ 2.60GHz<br />
| align="center" | 40<br />
| align="center" | 10<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 850GB<br />
|-<br />
| c0031<br />
| align="center" | 48 GB<br />
| align="left" | Dell Inc. PowerEdge R420/0CN7CM;<br />
Intel(R) Xeon(R) CPU E5-2470 0 @ 2.30GHz<br />
| align="center" | 32<br />
| align="center" | 8<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 1.5TB<br />
|-<br />
| c00[25,37-38,46-47]<br />
| align="center" | 124 GB<br />
| align="left" | Silicon Mechanics R308.v5/X10DRL-i; <br />
Intel Xeon(R) CPU E5-2660 v3 @ 2.60GHz <br />
| align="center" | 40<br />
| align="center" | 10<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 1.5TB<br />
|-<br />
| c0039<br />
| align="center" | 96 GB<br />
| align="left" | Silicon Mechanics R308.v5/X10DRL-i;<br />
Intel(R) Xeon(R) CPU E5-2660 v3 @ 2.60GHz<br />
| align="center" | 40<br />
| align="center" | 10<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 1.5TB<br />
|-<br />
| c0040<br />
| align="center" | 126 GB<br />
| align="left" | Silicon Mechanics Rackform_R353.v6/X10DGQ;<br />
Intel(R) Xeon(R) CPU E5-2680 v4 @ 2.40GHz<br />
| align="center" | 56<br />
| align="center" | 14<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 180GB<br />
|-<br />
| c00[41-43]<br />
| align="center" | 24GB<br />
| align="left" | Dell Inc. PowerEdge R410/01V648;<br />
Intel(R) Xeon(R) CPU X5667 @ 3.07GHz<br />
| align="center" | 16<br />
| align="center" | 4<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 200GB<br />
|-<br />
| c00[44-45]<br />
| align="center" | 48GB<br />
| align="left" | Dell Inc. PowerEdge R410/01V648;<br />
Intel(R) Xeon(R) CPU X5675 @ 3.07GHz<br />
| align="center" | 24<br />
| align="center" | 6<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 410GB<br />
|-<br />
| c0054<br />
| align="center" | 32GB<br />
| align="left" | Supermicro X9DRG-HF/X9DRG-HF;<br />
Intel(R) Xeon(R) CPU E5-2640 0 @ 2.50GHz<br />
| align="center" | 24<br />
| align="center" | 6<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 200GB<br />
|-<br />
| c0055-coming soon<br />
| align="center" | GB<br />
| align="left" | <br />
| align="center" | <br />
| align="center" | <br />
| align="center" | <br />
| align="center" | <br />
| align="left" | <br />
|-<br />
| c00[56-62]<br />
| align="center" | 754GB<br />
| align="left" | Intel S2600WFT/S2600WFT<br />
Intel(R) Xeon(R) Platinum 8160 CPU @ 2.10GHz<br />
| align="center" | 48<br />
| align="center" | 12<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | ~3TB<br />
|-<br />
|}<br />
<br />
https://www.cac.cornell.edu/wiki/index.php?title=Pool_Cluster</div>Jhs43https://www.cac.cornell.edu/wiki/index.php?title=Pool_Hardware&diff=3249Pool Hardware2020-09-25T18:54:55Z<p>Jhs43: </p>
<hr />
<div><br />
hyperthreading ON<br />
<br />
<br />
:{| class="wikitable" border="1" cellpadding="5" style="width: auto"<br />
! style="background:#e9e9e9;" | Node Names<br />
! style="background:#e9e9e9;" | Memory per node<br />
! style="background:#e9e9e9;" | Model name<br />
! style="background:#e9e9e9;" | CPU count per node<br />
! style="background:#e9e9e9;" | Core(s) per socket<br />
! style="background:#e9e9e9;" | Sockets<br />
! style="background:#e9e9e9;" | Thread(s) per core<br />
! style= "background:#e9e9e9;" | /tmp size<br />
|-<br />
| c000[1-5]<br />
| align="center" | 64GB<br />
| align="left" | Silicon Mechanics R308.v6/X10DRL-i; <br />
Intel(R) Xeon(R) CPU E5-2660 v3 @ 2.60GHz<br />
| align="center" | 40<br />
| align="center" | 10<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 1.5TB<br />
|-<br />
| c00[06-08]<br />
| align="center" | 124GB<br />
| align="left" | Silicon Mechanics R308.v6/X10DRL-i;<br />
Intel(R) Xeon(R) CPU E5-2660 v3 @ 2.60GHz<br />
| align="center" | 40<br />
| align="center" | 10<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 1.5TB<br />
|-<br />
| c0009<br />
| align="center" | 64GB<br />
| align="left" | Silicon Mechanics R308.v6/X10DRL-i;<br />
Intel(R) Xeon(R) CPU E5-2667 v4 @ 3.20GHz <br />
| align="center" | 32<br />
| align="center" | 8<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 850GB<br />
|-<br />
| c00[10-16,18,26-28,48-49]<br />
| align="center" | 64GB<br />
| align="left" | Silicon Mechanics R308.v5/X10DRL-i; <br />
Intel(R) Xeon(R) CPU E5-2660 v3 @ 2.60GHz<br />
| align="center" | 40<br />
| align="center" | 10<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 1.5TB<br />
|-<br />
| c00[17,20]<br />
| align="center" | 48 GB<br />
| align="left" | Supermicro X8DTL;<br />
Intel(R) Xeon(R) CPU E5630 @ 2.53GHz<br />
| align="center" | 16<br />
| align="center" | 4<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 850GB<br />
|-<br />
| c00[19,22]<br />
| align="center" | 48 GB<br />
| align="left" | Supermicro X8DTL;<br />
Intel(R) Xeon(R) CPU E5640 @ 2.67GHz<br />
| align="center" | 16<br />
| align="center" | 4<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 850GB<br />
|-<br />
| c00[21,23,32-36, 50-53]<br />
| align="center" | 48 GB<br />
| align="left" | Supermicro X8DTL;<br />
Intel(R) Xeon(R) CPU X5650 @ 2.67GHz<br />
| align="center" | 24<br />
| align="center" | 6<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 850GB<br />
|-<br />
| c0024<br />
| align="center" | 124 GB<br />
| align="left" | Silicon Mechanics R308.v6/X10DRL-i;<br />
Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz<br />
| align="center" | 56<br />
| align="center" | 14<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 850GB<br />
|-<br />
| c0029<br />
| align="center" | 124 GB<br />
| align="left" | PowerEdge R430/03XKDV;<br />
Intel(R) Xeon(R) CPU E5-2640 v3 @ 2.60GHz<br />
| align="center" | 32<br />
| align="center" | 8<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 850GB<br />
|-<br />
| c0030<br />
| align="center" | 96 GB<br />
| align="left" | PowerEdge R430/03XKDV;<br />
Intel(R) Xeon(R) CPU E5-2660 v3 @ 2.60GHz<br />
| align="center" | 40<br />
| align="center" | 10<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 850GB<br />
|-<br />
| c0031<br />
| align="center" | 48 GB<br />
| align="left" | Dell Inc. PowerEdge R420/0CN7CM;<br />
Intel(R) Xeon(R) CPU E5-2470 0 @ 2.30GHz<br />
| align="center" | 32<br />
| align="center" | 8<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 1.5TB<br />
|-<br />
| c00[25,37-38,46-47]<br />
| align="center" | 124 GB<br />
| align="left" | Silicon Mechanics R308.v5/X10DRL-i; <br />
Intel Xeon(R) CPU E5-2660 v3 @ 2.60GHz <br />
| align="center" | 40<br />
| align="center" | 10<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 1.5TB<br />
|-<br />
| c0039<br />
| align="center" | 96 GB<br />
| align="left" | Silicon Mechanics R308.v5/X10DRL-i;<br />
Intel(R) Xeon(R) CPU E5-2660 v3 @ 2.60GHz<br />
| align="center" | 40<br />
| align="center" | 10<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 1.5TB<br />
|-<br />
| c0040<br />
| align="center" | 126 GB<br />
| align="left" | Silicon Mechanics Rackform_R353.v6/X10DGQ;<br />
Intel(R) Xeon(R) CPU E5-2680 v4 @ 2.40GHz<br />
| align="center" | 56<br />
| align="center" | 14<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 180GB<br />
|-<br />
| c00[41-43]<br />
| align="center" | 24GB<br />
| align="left" | Dell Inc. PowerEdge R410/01V648;<br />
Intel(R) Xeon(R) CPU X5667 @ 3.07GHz<br />
| align="center" | 16<br />
| align="center" | 4<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 200GB<br />
|-<br />
| c00[44-45]<br />
| align="center" | 48GB<br />
| align="left" | Dell Inc. PowerEdge R410/01V648;<br />
Intel(R) Xeon(R) CPU X5675 @ 3.07GHz<br />
| align="center" | 24<br />
| align="center" | 6<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 410GB<br />
|-<br />
| c0054<br />
| align="center" | 32GB<br />
| align="left" | Supermicro X9DRG-HF/X9DRG-HF;<br />
Intel(R) Xeon(R) CPU E5-2640 0 @ 2.50GHz<br />
| align="center" | 24<br />
| align="center" | 6<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 200GB<br />
|-<br />
| c0055-coming soon<br />
| align="center" | GB<br />
| align="left" | <br />
| align="center" | <br />
| align="center" | <br />
| align="center" | <br />
| align="center" | <br />
| align="left" | <br />
|-<br />
| c00[56-62]<br />
| align="center" | 754GB<br />
| align="left" | Intel S2600WFT/S2600WFT<br />
| align="center" | 48<br />
| align="center" | 12<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | ~3TB<br />
|-<br />
|}<br />
<br />
https://www.cac.cornell.edu/wiki/index.php?title=Pool_Cluster</div>Jhs43https://www.cac.cornell.edu/wiki/index.php?title=Pool_Hardware&diff=3248Pool Hardware2020-09-25T18:46:04Z<p>Jhs43: </p>
<hr />
<div><br />
hyperthreading ON<br />
<br />
<br />
:{| class="wikitable" border="1" cellpadding="5" style="width: auto"<br />
! style="background:#e9e9e9;" | Node Names<br />
! style="background:#e9e9e9;" | Memory per node<br />
! style="background:#e9e9e9;" | Model name<br />
! style="background:#e9e9e9;" | CPU count per node<br />
! style="background:#e9e9e9;" | Core(s) per socket<br />
! style="background:#e9e9e9;" | Sockets<br />
! style="background:#e9e9e9;" | Thread(s) per core<br />
! style= "background:#e9e9e9;" | /tmp size<br />
|-<br />
| c000[1-5]<br />
| align="center" | 64GB<br />
| align="left" | Silicon Mechanics R308.v6/X10DRL-i; <br />
Intel(R) Xeon(R) CPU E5-2660 v3 @ 2.60GHz<br />
| align="center" | 40<br />
| align="center" | 10<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 1.5TB<br />
|-<br />
| c00[06-08]<br />
| align="center" | 124GB<br />
| align="left" | Silicon Mechanics R308.v6/X10DRL-i;<br />
Intel(R) Xeon(R) CPU E5-2660 v3 @ 2.60GHz<br />
| align="center" | 40<br />
| align="center" | 10<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 1.5TB<br />
|-<br />
| c0009<br />
| align="center" | 64GB<br />
| align="left" | Silicon Mechanics R308.v6/X10DRL-i;<br />
Intel(R) Xeon(R) CPU E5-2667 v4 @ 3.20GHz <br />
| align="center" | 32<br />
| align="center" | 8<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 850GB<br />
|-<br />
| c00[10-16,18,26-28,48-49]<br />
| align="center" | 64GB<br />
| align="left" | Silicon Mechanics R308.v5/X10DRL-i; <br />
Intel(R) Xeon(R) CPU E5-2660 v3 @ 2.60GHz<br />
| align="center" | 40<br />
| align="center" | 10<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 1.5TB<br />
|-<br />
| c00[17,20]<br />
| align="center" | 48 GB<br />
| align="left" | Supermicro X8DTL;<br />
Intel(R) Xeon(R) CPU E5630 @ 2.53GHz<br />
| align="center" | 16<br />
| align="center" | 4<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 850GB<br />
|-<br />
| c00[19,22]<br />
| align="center" | 48 GB<br />
| align="left" | Supermicro X8DTL;<br />
Intel(R) Xeon(R) CPU E5640 @ 2.67GHz<br />
| align="center" | 16<br />
| align="center" | 4<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 850GB<br />
|-<br />
| c00[21,23,32-36, 50-53]<br />
| align="center" | 48 GB<br />
| align="left" | Supermicro X8DTL;<br />
Intel(R) Xeon(R) CPU X5650 @ 2.67GHz<br />
| align="center" | 24<br />
| align="center" | 6<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 850GB<br />
|-<br />
| c0024<br />
| align="center" | 124 GB<br />
| align="left" | Silicon Mechanics R308.v6/X10DRL-i;<br />
Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz<br />
| align="center" | 56<br />
| align="center" | 14<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 850GB<br />
|-<br />
| c0029<br />
| align="center" | 124 GB<br />
| align="left" | PowerEdge R430/03XKDV;<br />
Intel(R) Xeon(R) CPU E5-2640 v3 @ 2.60GHz<br />
| align="center" | 32<br />
| align="center" | 8<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 850GB<br />
|-<br />
| c0030<br />
| align="center" | 96 GB<br />
| align="left" | PowerEdge R430/03XKDV;<br />
Intel(R) Xeon(R) CPU E5-2660 v3 @ 2.60GHz<br />
| align="center" | 40<br />
| align="center" | 10<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 850GB<br />
|-<br />
| c0031<br />
| align="center" | 48 GB<br />
| align="left" | Dell Inc. PowerEdge R420/0CN7CM;<br />
Intel(R) Xeon(R) CPU E5-2470 0 @ 2.30GHz<br />
| align="center" | 32<br />
| align="center" | 8<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 1.5TB<br />
|-<br />
| c00[25,37-38,46-47]<br />
| align="center" | 124 GB<br />
| align="left" | Silicon Mechanics R308.v5/X10DRL-i; <br />
Intel Xeon(R) CPU E5-2660 v3 @ 2.60GHz <br />
| align="center" | 40<br />
| align="center" | 10<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 1.5TB<br />
|-<br />
| c0039<br />
| align="center" | 96 GB<br />
| align="left" | Silicon Mechanics R308.v5/X10DRL-i;<br />
Intel(R) Xeon(R) CPU E5-2660 v3 @ 2.60GHz<br />
| align="center" | 40<br />
| align="center" | 10<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 1.5TB<br />
|-<br />
| c0040<br />
| align="center" | 126 GB<br />
| align="left" | Silicon Mechanics Rackform_R353.v6/X10DGQ;<br />
Intel(R) Xeon(R) CPU E5-2680 v4 @ 2.40GHz<br />
| align="center" | 56<br />
| align="center" | 14<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 180GB<br />
|-<br />
| c00[41-43]<br />
| align="center" | 24GB<br />
| align="left" | Dell Inc. PowerEdge R410/01V648;<br />
Intel(R) Xeon(R) CPU X5667 @ 3.07GHz<br />
| align="center" | 16<br />
| align="center" | 4<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 200GB<br />
|-<br />
| c00[44-45]<br />
| align="center" | 48GB<br />
| align="left" | Dell Inc. PowerEdge R410/01V648;<br />
Intel(R) Xeon(R) CPU X5675 @ 3.07GHz<br />
| align="center" | 24<br />
| align="center" | 6<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 410GB<br />
|-<br />
| c0054<br />
| align="center" | 32GB<br />
| align="left" | Supermicro X9DRG-HF/X9DRG-HF;<br />
Intel(R) Xeon(R) CPU E5-2640 0 @ 2.50GHz<br />
| align="center" | 24<br />
| align="center" | 6<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 200GB<br />
|-<br />
| c0055-coming soon<br />
| align="center" | GB<br />
| align="left" | <br />
| align="center" | <br />
| align="center" | <br />
| align="center" | <br />
| align="center" | <br />
| align="left" | <br />
|-<br />
| c00[56-62]<br />
| align="center" | 754GB<br />
| align="left" | Intel S2600WFT/S2600WFT<br />
| align="center" | <br />
| align="center" | 24<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | <br />
|-<br />
|}<br />
<br />
https://www.cac.cornell.edu/wiki/index.php?title=Pool_Cluster</div>Jhs43https://www.cac.cornell.edu/wiki/index.php?title=Pool_Hardware&diff=3247Pool Hardware2020-09-25T18:44:57Z<p>Jhs43: </p>
<hr />
<div><br />
hyperthreading ON<br />
<br />
<br />
:{| class="wikitable" border="1" cellpadding="5" style="width: auto"<br />
! style="background:#e9e9e9;" | Node Names<br />
! style="background:#e9e9e9;" | Memory per node<br />
! style="background:#e9e9e9;" | Model name<br />
! style="background:#e9e9e9;" | CPU count per node<br />
! style="background:#e9e9e9;" | Core(s) per socket<br />
! style="background:#e9e9e9;" | Sockets<br />
! style="background:#e9e9e9;" | Thread(s) per core<br />
! style= "background:#e9e9e9;" | /tmp size<br />
|-<br />
| c000[1-5]<br />
| align="center" | 64GB<br />
| align="left" | Silicon Mechanics R308.v6/X10DRL-i; <br />
Intel(R) Xeon(R) CPU E5-2660 v3 @ 2.60GHz<br />
| align="center" | 40<br />
| align="center" | 10<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 1.5TB<br />
|-<br />
| c00[06-08]<br />
| align="center" | 124GB<br />
| align="left" | Silicon Mechanics R308.v6/X10DRL-i;<br />
Intel(R) Xeon(R) CPU E5-2660 v3 @ 2.60GHz<br />
| align="center" | 40<br />
| align="center" | 10<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 1.5TB<br />
|-<br />
| c0009<br />
| align="center" | 64GB<br />
| align="left" | Silicon Mechanics R308.v6/X10DRL-i;<br />
Intel(R) Xeon(R) CPU E5-2667 v4 @ 3.20GHz <br />
| align="center" | 32<br />
| align="center" | 8<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 850GB<br />
|-<br />
| c00[10-16,18,26-28,48-49]<br />
| align="center" | 64GB<br />
| align="left" | Silicon Mechanics R308.v5/X10DRL-i; <br />
Intel(R) Xeon(R) CPU E5-2660 v3 @ 2.60GHz<br />
| align="center" | 40<br />
| align="center" | 10<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 1.5TB<br />
|-<br />
| c00[17,20]<br />
| align="center" | 48 GB<br />
| align="left" | Supermicro X8DTL;<br />
Intel(R) Xeon(R) CPU E5630 @ 2.53GHz<br />
| align="center" | 16<br />
| align="center" | 4<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 850GB<br />
|-<br />
| c00[19,22]<br />
| align="center" | 48 GB<br />
| align="left" | Supermicro X8DTL;<br />
Intel(R) Xeon(R) CPU E5640 @ 2.67GHz<br />
| align="center" | 16<br />
| align="center" | 4<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 850GB<br />
|-<br />
| c00[21,23,32-36, 50-53]<br />
| align="center" | 48 GB<br />
| align="left" | Supermicro X8DTL;<br />
Intel(R) Xeon(R) CPU X5650 @ 2.67GHz<br />
| align="center" | 24<br />
| align="center" | 6<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 850GB<br />
|-<br />
| c0024<br />
| align="center" | 124 GB<br />
| align="left" | Silicon Mechanics R308.v6/X10DRL-i;<br />
Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz<br />
| align="center" | 56<br />
| align="center" | 14<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 850GB<br />
|-<br />
| c0029<br />
| align="center" | 124 GB<br />
| align="left" | PowerEdge R430/03XKDV;<br />
Intel(R) Xeon(R) CPU E5-2640 v3 @ 2.60GHz<br />
| align="center" | 32<br />
| align="center" | 8<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 850GB<br />
|-<br />
| c0030<br />
| align="center" | 96 GB<br />
| align="left" | PowerEdge R430/03XKDV;<br />
Intel(R) Xeon(R) CPU E5-2660 v3 @ 2.60GHz<br />
| align="center" | 40<br />
| align="center" | 10<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 850GB<br />
|-<br />
| c0031<br />
| align="center" | 48 GB<br />
| align="left" | Dell Inc. PowerEdge R420/0CN7CM;<br />
Intel(R) Xeon(R) CPU E5-2470 0 @ 2.30GHz<br />
| align="center" | 32<br />
| align="center" | 8<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 1.5TB<br />
|-<br />
| c00[25,37-38,46-47]<br />
| align="center" | 124 GB<br />
| align="left" | Silicon Mechanics R308.v5/X10DRL-i; <br />
Intel Xeon(R) CPU E5-2660 v3 @ 2.60GHz <br />
| align="center" | 40<br />
| align="center" | 10<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 1.5TB<br />
|-<br />
| c0039<br />
| align="center" | 96 GB<br />
| align="left" | Silicon Mechanics R308.v5/X10DRL-i;<br />
Intel(R) Xeon(R) CPU E5-2660 v3 @ 2.60GHz<br />
| align="center" | 40<br />
| align="center" | 10<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 1.5TB<br />
|-<br />
| c0040<br />
| align="center" | 126 GB<br />
| align="left" | Silicon Mechanics Rackform_R353.v6/X10DGQ;<br />
Intel(R) Xeon(R) CPU E5-2680 v4 @ 2.40GHz<br />
| align="center" | 56<br />
| align="center" | 14<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 180GB<br />
|-<br />
| c00[41-43]<br />
| align="center" | 24GB<br />
| align="left" | Dell Inc. PowerEdge R410/01V648;<br />
Intel(R) Xeon(R) CPU X5667 @ 3.07GHz<br />
| align="center" | 16<br />
| align="center" | 4<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 200GB<br />
|-<br />
| c00[44-45]<br />
| align="center" | 48GB<br />
| align="left" | Dell Inc. PowerEdge R410/01V648;<br />
Intel(R) Xeon(R) CPU X5675 @ 3.07GHz<br />
| align="center" | 24<br />
| align="center" | 6<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 410GB<br />
|-<br />
| c0054<br />
| align="center" | 32GB<br />
| align="left" | Supermicro X9DRG-HF/X9DRG-HF;<br />
Intel(R) Xeon(R) CPU E5-2640 0 @ 2.50GHz<br />
| align="center" | 24<br />
| align="center" | 6<br />
| align="center" | 2<br />
| align="center" | 2<br />
| align="left" | 200GB<br />
|-<br />
| c0055-coming soon<br />
| align="center" | GB<br />
| align="left" | <br />
| align="center" | <br />
| align="center" | <br />
| align="center" | <br />
| align="center" | <br />
| align="left" | <br />
|-<br />
| c00[56-62]<br />
| align="center" | 754GB<br />
| align="left" | Intel S2600WFT/S2600WFT<br />
| align="center" | <br />
| align="center" | <br />
| align="center" | 2<br />
| align="center" | 24<br />
| align="left" | 2<br />
|-<br />
|}<br />
<br />
https://www.cac.cornell.edu/wiki/index.php?title=Pool_Cluster</div>Jhs43https://www.cac.cornell.edu/wiki/index.php?title=CAPECRYSTAL_Cluster&diff=3121CAPECRYSTAL Cluster2020-07-29T21:02:56Z<p>Jhs43: /* Queues/Partitions */</p>
<hr />
<div>=== General Information ===<br />
<br />
:* capecrystal is a private cluster with restricted access to the following groups: jd732_0001<br />
:* Head node: '''capecrystal.cac.cornell.edu''' ([[#How To Login|access via ssh]])<br />
:** [https://openhpc.community/ OpenHPC] deployment running CentOS 7.6<br />
:** Cluster scheduler: slurm 18.08.8<br />
:* 5 GPU compute nodes [[#Hardware|c000[1-5]]] <br />
:* Current Cluster Status: [http://capecrystal.cac.cornell.edu/ganglia/ Ganglia].<br />
:* data on the capecrystal cluster is <tt>'''NOT'''</tt> backed up<br />
:* Please send any questions and report problems to: [mailto:cac-help@cornell.edu cac-help@cornell.edu]<br />
<br />
=== Hardware ===<br />
:* There is an 893GB local /scratch disk on the head node only.<br />
:* capecrystal.cac.cornell.edu: PowerEdge R440; memory: 92GB, swap: 15GB <br />
:* capecrystal compute nodes c000[1-5]:<br />
:** PowerEdge C4140; memory: 187GB, swap: 5GB<br />
:** Each node contains 4 GPUs (3D controller: NVIDIA Corporation GV100GL [Tesla V100 SXM2 32GB] (rev a1))<br />
<br />
=== Networking ===<br />
:* All nodes have a 10 Gb Ethernet connection for eth0 on a private network served from the capecrystal head node.<br />
:* All nodes include an Infiniband controller: Mellanox Technologies MT27800 Family [ConnectX-5] (run ibstat on a compute node to check its status)<br />
<br />
=== How To Login ===<br />
<br />
:* To get started, log in to the head node <tt>capecrystal.cac.cornell.edu</tt> via ssh (see the example below this list).<br />
:* You will be prompted for your [https://www.cac.cornell.edu/services/myacct.aspx CAC account] password<br />
:* If you are unfamiliar with Linux and ssh, we suggest reading the [[Linux Tutorial]] and looking into how to [[Connect to Linux]] before proceeding.<br />
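For example, from a terminal on your computer (a minimal sketch; replace &lt;user&gt; with your CAC account name):<br />
<pre><br />
ssh <user>@capecrystal.cac.cornell.edu<br />
</pre><br />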
<br />
=== Running Jobs / Slurm Scheduler ===<br />
<br />
==== Queues/Partitions ====<br />
("Partition" is the term used by slurm for "Queues")<br />
<br />
:* '''hyperthreading is turned on for ALL nodes'''<br />
:* '''slurm considers each node to have the following:'''<br />
:** CPUs=56 Boards=1 SocketsPerBoard=2 CoresPerSocket=14 ThreadsPerCore=2 RealMemory=191840<br />
:** all partitions have a default walltime of 1 hour <br />
'''Partitions on the capecrystal cluster:'''<br />
<br />
:{| class="wikitable" border="1" cellpadding="4" style="width: auto"<br />
! style="background:#e9e9e9;" | Queue/Partition<br />
! style="background:#e9e9e9;" | Number of nodes<br />
! style="background:#e9e9e9;" | Node Names<br />
! style="background:#e9e9e9;" | Limits<br />
! style="background:#e9e9e9;" | Group Access<br />
|-<br />
| '''normal''' (default)<br />
| align="center" | 5<br />
| align="center" | c000[1-5]<br />
| align="center" | walltime limit: 168 hours (i.e. 7 days)<br />
| jd732_0001<br />
|-<br />
| '''cpu''' <br />
| align="center" | 5<br />
| align="center" | c000[1-5]<br />
| align="center" | walltime limit: 168 hours (i.e. 7 days); oversubscribe<br />
| jd732_0001<br />
|-<br />
| '''gpu''' <br />
| align="center" | 5<br />
| align="center" | c000[1-5]<br />
| align="center" | walltime limit: 168 hours (i.e. 7 days); oversubscribe<br />
| jd732_0001<br />
|}<br />
<br />
All partitions have a '''QOS (Quality of Service)''' set up that allows you to submit jobs at a lower or higher priority on the capecrystal cluster.<br />
The available QOS levels are: '''low, normal, high, and urgent''' (default: '''normal''').<br />
If you urgently need your job to move up in the wait queue, submit with:<br />
 sbatch --qos=urgent job_script<br />
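The QOS can also be requested as a directive inside the batch script itself; a minimal sketch (the job name and time value are placeholders):<br />
<pre><br />
#!/bin/bash<br />
#SBATCH --job-name="qos_example"<br />
#SBATCH --qos=high<br />
#SBATCH --time=01:00:00<br />
srun hostname<br />
</pre><br />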
<br />
==== Slurm Scheduler HELP ====<br />
<br />
'''[[Slurm | CAC's Slurm page]]''' is the first place to go to learn about how to use Slurm on CAC clusters.<br />
Please take the time to read this page, giving special attention to the parts that pertain to the types of jobs you want to run.<br />
<br />
==== Examples & Tips ====<br />
:* A shared directory has been set up where users can place Singularity containers, so that every user does not have to download the same container. You will still need to copy the container from the shared folder into your $HOME for Singularity to run successfully. You can put any container into: <br />
/opt/ohpc/pub/containers<br />
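For example, to reuse the shared HOOMD-blue container referenced later on this page (a sketch; adjust the paths to the container you actually need):<br />
<pre><br />
mkdir -p $HOME/hoomd<br />
cp -p /opt/ohpc/pub/containers/hoomd/software.simg $HOME/hoomd/<br />
</pre><br />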
<br />
:* All lines beginning with "#SBATCH" are directives for the scheduler to read. If you want a directive line to be ignored, place a second "#" at the beginning of the line (i.e. "##SBATCH").<br />
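For example (a minimal illustration; the time value is arbitrary):<br />
<pre><br />
#SBATCH --time=00:10:00     # read by the scheduler as a directive<br />
##SBATCH --time=00:10:00    # ignored by the scheduler<br />
</pre><br />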
<br />
===== Example Singularity HOOMD-blue GPU batch job =====<br />
<br />
Below, replace 'username' with your CAC user name and fetch or create the needed files on the cluster head node in your home directory.<br />
<br />
first_test.py3 sample script can be found in /opt/ohpc/pub/containers/hoomd (OR it can be created from the simple example https://hoomd-blue.readthedocs.io/en/stable/index.html)<br />
<br />
The software.simg Singularity image can be copied from /opt/ohpc/pub/containers/hoomd to your $HOME (it can also be fetched with singularity pull; see https://hoomd-blue.readthedocs.io/en/stable/installation.html)<br />
<br />
<br />
usage : sbatch singularity_hoomd_ex.run<br />
<br />
singularity_hoomd_ex.run example batch script (Remember to replace 'username' with your CAC user name):<br />
<pre><br />
#!/bin/bash<br />
#SBATCH --job-name="singularity_hoomd_ex"<br />
#SBATCH --output="singularity_hoomd_ex.%j.out"<br />
#SBATCH --error="singularity_hoomd_ex.%j.err"<br />
#SBATCH --nodes=1<br />
#SBATCH --ntasks-per-core=1<br />
#SBATCH --time=00:10:00<br />
set -x<br />
CONTAINER=/home/fs01/username/hoomd/software.simg<br />
SCRIPT=/home/fs01/username/hoomd/first_test.py3<br />
<br />
# placeholder for debugging<br />
module load singularity<br />
which singularity<br />
set +x<br />
echo "next command to run: singularity exec --nv ${CONTAINER} python3 ${SCRIPT}"<br />
singularity exec --nv ${CONTAINER} python3 ${SCRIPT}<br />
<br />
<br />
## debugging commands can be inserted above<br />
#echo hostname<br />
#hostname<br />
#echo "lspci | grep -iE ' VGA |NVI'"<br />
#lspci | grep -iE ' VGA |NVI'<br />
#echo nvclock<br />
#nvclock<br />
#echo "which nvcc"<br />
#which nvcc<br />
#echo "nvcc --version"<br />
#nvcc --version<br />
#echo PATH<br />
#echo "$PATH"<br />
#echo LD_LIBRARY_PATH<br />
#echo "$LD_LIBRARY_PATH"<br />
<br />
</pre><br />
<br />
===== Example Singularity mpi test batch job =====<br />
<br />
usage : sbatch singularity_mpi_ex.run<br />
<br />
Example singularity_mpi_ex.run script (remember to replace 'username' with your CAC user name):<br />
<pre><br />
#!/bin/bash<br />
#SBATCH --job-name="singularity_mpi_ex"<br />
#SBATCH --output="singularity_mpi_ex.%j.out"<br />
#SBATCH --error="singularity_mpi_ex.%j.err"<br />
#SBATCH --nodes=5<br />
#SBATCH --ntasks-per-core=1<br />
#SBATCH --time=00:01:00<br />
<br />
CONTAINER=/home/fs01/username/hoomd/software.simg<br />
<br />
module load singularity<br />
<br />
echo "script job head node $(hostname)"<br />
<br />
# silence mpi component warnings<br />
export MPI_MCA_mca_base_component_show_load_errors=0<br />
export PMIX_MCA_mca_base_component_show_load_errors=0<br />
<br />
echo "next command to run: mpirun --mca btl self,tcp --mca btl_tcp_if_include eth2 singularity exec ${CONTAINER} hostname"<br />
mpirun --mca btl self,tcp --mca btl_tcp_if_include eth2 singularity exec ${CONTAINER} hostname<br />
<br />
# clean up exit code , consult .out and .err files in working directory upon job completion for mpi debugging<br />
exit 0<br />
<br />
</pre><br />
<br />
===== Copy your data to /tmp to avoid heavy I/O from your NFS-mounted $HOME !!! =====<br />
* We cannot stress enough how important this is to avoid delays on the file systems.<br />
<br />
<pre><br />
#!/bin/bash<br />
## -J sets the name of job<br />
#SBATCH -J TestJob<br />
<br />
## -p sets the partition (queue)<br />
#SBATCH -p normal<br />
## time is HH:MM:SS<br />
#SBATCH --time=00:01:30<br />
#SBATCH --cpus-per-task=15<br />
<br />
## define job stdout file<br />
#SBATCH -o testnormal-%j.out<br />
<br />
## define job stderr file<br />
#SBATCH -e testnormal-%j.err<br />
<br />
echo "starting $SLURM_JOBID at `date` on `hostname`"<br />
echo "my home dir is $HOME" <br />
<br />
## copying my data to a local tmp space on the compute node to reduce I/O<br />
MYTMP=/tmp/$USER/$SLURM_JOB_ID<br />
/usr/bin/mkdir -p $MYTMP || exit $?<br />
echo "Copying my data over..."<br />
cp -rp $SLURM_SUBMIT_DIR/mydatadir $MYTMP || exit $?<br />
<br />
## run your job executables here...<br />
<br />
echo "ended at `date` on `hostname`"<br />
echo "copy your data back to your $HOME" <br />
/usr/bin/mkdir -p $SLURM_SUBMIT_DIR/newdatadir || exit $?<br />
cp -rp $MYTMP $SLURM_SUBMIT_DIR/newdatadir || exit $?<br />
## remove your data from the compute node /tmp space<br />
rm -rf $MYTMP <br />
<br />
exit 0<br />
<br />
</pre><br />
<br />
Explanation: /tmp refers to a local directory that is found on each compute node. It is faster to use /tmp because when you read and write to it, the I/O does not have to go across the network, and it does not have to compete with the other users of a shared network drive (such as the one that holds everyone's /home).<br />
<br />
To look at files in /tmp while your job is running, you can ssh to the login node, then do a further ssh to the compute node that you were assigned. Then you can cd to /tmp on that node and inspect the files in there with <code>cat</code> or <code>less</code>.<br />
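A minimal sketch (the node name, job ID, and file name below are placeholders; use the compute node and job ID your job was actually assigned):<br />
<pre><br />
ssh capecrystal.cac.cornell.edu        # head/login node<br />
ssh c0002                              # compute node assigned to your job<br />
cd /tmp/$USER/1234567                  # job-specific /tmp directory created by the script above<br />
ls -l                                  # list the files written so far<br />
less some_output_file.txt              # inspect one of them<br />
</pre><br />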
<br />
Note: if your application produces thousands of output files that you need to save, it is far more efficient to put them all into a single tar or zip file before copying it into $HOME as the final step.<br />
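A sketch of that final step, reusing the variables from the example script above (the archive name is a placeholder):<br />
<pre><br />
cd $MYTMP<br />
tar -czf results.tar.gz mydatadir      # bundle many small files into one archive<br />
cp -p results.tar.gz $SLURM_SUBMIT_DIR/ || exit $?<br />
</pre><br />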
<br />
==Software==<br />
=== Modules ===<br />
The Lmod module system is available for listing and loading modules that put you into the software environment you need.<br />
(For more information, type: '''module help''')<br />
<br />
To list current (default) modules loaded upon logging in:<br />
module list<br />
<br />
To list the available software and software environments (these depend on which compiler is currently loaded), type:<br />
 module avail <br />
(for a more complete listing, type: module spider)<br />
Software listed with "(L)" is currently loaded. <br />
<br />
EXAMPLE:<br />
To be sure you are using the environment set up for cmake3, you would type:<br />
<pre><br />
module load cmake3<br />
module list (you will see cmake3 is loaded (L))<br />
* when done, either logout and log back in or type:<br />
module unload cmake3<br />
</pre><br />
To swap to a different set of modules per compiler, you can swap out your currently loaded compiler. <br />
<br />
EXAMPLE:<br />
<pre><br />
module swap gnu8 gnu7<br />
</pre><br />
You will then see a different set of available modules upon typing: module avail<br />
<br />
You can create your own modules and place them in your $HOME. <br />
Once created, type:<br />
module use $HOME/path/to/personal/modulefiles<br />
This will prepend the path to $MODULEPATH<br />
[type echo $MODULEPATH to confirm]<br />
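A minimal sketch, assuming a hypothetical application installed under $HOME/appdir (all names here are placeholders):<br />
<pre><br />
mkdir -p $HOME/modulefiles/myapp<br />
# write an Lmod modulefile for it (e.g. myapp/1.0.lua) in that directory, then:<br />
module use $HOME/modulefiles<br />
module load myapp/1.0<br />
module list          # myapp/1.0 should now be listed with an (L)<br />
</pre><br />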
<br />
Reference: [http://lmod.readthedocs.io/en/latest/020_advanced.html User Created Modules]<br />
<br />
:* It is usually possible to install software in your home directory.<br />
:* List installed software via rpms: '''rpm -qa'''. Use grep to search for specific software: rpm -qa | grep sw_name [i.e. rpm -qa | grep perl ]<br />
<br />
=== Build software from source into your home directory ($HOME) ===<br />
<pre><br />
* download and extract your source<br />
* cd to your extracted source directory<br />
./configure --prefix=$HOME/appdir<br />
[You need to refer to your source documentation to get the full list of options you can provide 'configure' with.]<br />
make<br />
make install<br />
<br />
The binary would then be located in ~/appdir/bin. <br />
* Add the following to your $HOME/.bashrc: <br />
export PATH="$HOME/appdir/bin:$PATH"<br />
* Reload the .bashrc file with source ~/.bashrc. (or logout and log back in)<br />
</pre><br />
<br />
=== Jupyter Notebook Server ===<br />
==== Installation ====<br />
Follow these steps to install and run your own Jupyter Notebook server on capecrystal.<br />
<br />
# Create a jupyter-notebook python virtual environment:<br />
:# Load <code>python/3.8.3</code> module: <code>module load python/3.8.3</code><br />
:# Create a jupyter-notebook python virtual environment: <code>python -m venv jupyter-notebook</code><br />
# Install jupyter notebook in the jupyter-notebook virtual environment:<br />
:# Activate jupyter-notebook virtual environment: <code> source jupyter-notebook/bin/activate</code><br />
:# Install jupyter notebook: <code>pip install jupyter notebook</code><br />
<br />
==== Start the Jupyter Notebook Server on Compute Node ====<br />
Use these steps to start the Jupyter Notebook server on a compute node:<br />
<br />
* Obtain shell access to a compute node and identify the compute node:<br />
<pre><br />
-bash-4.2$ srun --pty /bin/bash<br />
bash-4.2$ hostname<br />
c0002<br />
</pre><br />
<br />
* Start jupyter notebook on the compute node:<br />
<pre><br />
module load python/3.8.3<br />
source jupyter-notebook/bin/activate<br />
jupyter notebook<br />
</pre><br />
<br />
==== Access the Notebook ====<br />
[[File:Jupyter notebook port forwarding.jpg|center]]<br />
<br />
* In a new terminal on your client, ssh to capecrystal head node. Establish an ssh tunnel between the head node's forwarding port (10000 in this example) and port 8888 on the compute node (c0002 in this example). ''Leave this terminal open after the ssh tunnel is established''.<br />
<pre><br />
ssh -L 10000:localhost:8888 c0002<br />
</pre><br />
<br />
* In a new terminal on your client, establish an ssh tunnel between the client's source port (20000 in this example) and the forwarding port on the capecrystal head node. ''Leave this terminal open after the ssh tunnel is established''.<br />
<pre><br />
ssh -L 20000:localhost:10000 <user>@capecrystal.cac.cornell.edu<br />
</pre><br />
<br />
* On your client, point your web browser to:<br />
<pre><br />
http://localhost:20000<br />
</pre></div>Jhs43https://www.cac.cornell.edu/wiki/index.php?title=CAPECRYSTAL_Cluster&diff=3120CAPECRYSTAL Cluster2020-07-28T20:27:16Z<p>Jhs43: /* Queues/Partitions */</p>
<hr />
<div>=== General Information ===<br />
<br />
:* capecrystal is a private cluster with restricted access to the following groups: jd732_0001<br />
:* Head node: '''capecrystal.cac.cornell.edu''' ([[#How To Login|access via ssh]])<br />
:** [https://openhpc.community/ OpenHPC] deployment running Centos 7.6<br />
:** Cluster scheduler: slurm 18.08.8<br />
:* 5 GPU compute nodes [[#Hardware|c000[1-5]]] <br />
:* Current Cluster Status: [http://capecrystal.cac.cornell.edu/ganglia/ Ganglia].<br />
:* data on the capecrystal cluster is <tt>'''NOT'''</tt> backed up<br />
:* Please send any questions and report problems to: [mailto:cac-help@cornell.edu cac-help@cornell.edu]<br />
<br />
=== Hardware ===<br />
:* There is a 893GB local /scratch disk on the head node only.<br />
:* capecrystal.cac.cornell.edu: PowerEdge R440; memory: 92GB, swap: 15GB <br />
:* capecrystal compute nodes c000[1-5]:<br />
:** PowerEdge C4140; memory: 187GB, swap: 5GB<br />
:** Each node contains Qty 4 GPUS: 3D controller: NVIDIA Corporation GV100GL [Tesla V100 SXM2 32GB] (rev a1)<br />
<br />
=== Networking ===<br />
:* All nodes have a 10GB ethernet connection for eth0 on a private net served out from the capecrystal head node.<br />
:* All nodes include: Infiniband controller: Mellanox Technologies MT27800 Family [ConnectX-5] (ibstat will show you the status on the compute)<br />
<br />
=== How To Login ===<br />
<br />
:* To get started, login to the head node <tt>capecrystal.cac.cornell.edu</tt> via ssh.<br />
:* You will be prompted for your [https://www.cac.cornell.edu/services/myacct.aspx CAC account] password<br />
:* If you are unfamiliar with Linux and ssh, we suggest reading the [[Linux Tutorial]] and looking into how to [[Connect to Linux]] before proceeding.<br />
<br />
=== Running Jobs / Slurm Scheduler ===<br />
<br />
==== Queues/Partitions ====<br />
("Partition" is the term used by slurm for "Queues")<br />
<br />
:* '''hyperthreading is turned on for ALL nodes'''<br />
:* '''slurm considers each node to have the following:'''<br />
:** CPUs=56 Boards=1 SocketsPerBoard=2 CoresPerSocket=14 ThreadsPerCore=2 RealMemory=191840<br />
:** all partitions have a default walltime of 1 hour <br />
'''Partitions on the capecrystal cluster:'''<br />
<br />
:{| class="wikitable" border="1" cellpadding="4" style="width: auto"<br />
! style="background:#e9e9e9;" | Queue/Partition<br />
! style="background:#e9e9e9;" | Number of nodes<br />
! style="background:#e9e9e9;" | Node Names<br />
! style="background:#e9e9e9;" | Limits<br />
! style="background:#e9e9e9;" | Group Access<br />
|-<br />
| '''normal''' (default)<br />
| align="center" | 5<br />
| align="center" | c000[1-5]<br />
| align="center" | walltime limit: 168 hours (i.e. 7 days)<br />
| Domain Users<br />
|-<br />
| '''cpu''' <br />
| align="center" | 5<br />
| align="center" | c000[1-5]<br />
| align="center" | walltime limit: 168 hours (i.e. 7 days); oversubscribe<br />
| Domain Users<br />
|-<br />
| '''gpu''' <br />
| align="center" | 5<br />
| align="center" | c000[1-5]<br />
| align="center" | walltime limit: 168 hours (i.e. 7 days); oversubscribe<br />
| Domain Users<br />
|}<br />
<br />
==== Slurm Scheduler HELP ====<br />
<br />
'''[[Slurm | CAC's Slurm page]]''' is the first place to go to learn about how to use Slurm on CAC clusters.<br />
Please take the time to read this page, giving special attention to the parts that pertain to the types of jobs you want to run.<br />
<br />
==== Examples & Tips ====<br />
:* A shared directory has been setup for users to copy singularity containers into in effort to alleviate all users from downloading the same container. You will still need to copy from the shared folder into your $HOME for singularity to run successfully. You can put any container into: <br />
/opt/ohpc/pub/containers<br />
<br />
:* All lines beginning with "#SBATCH" are directives for the scheduler to read. If you want a directive line to be ignored, place a second "#" at the beginning of the line (i.e. "##SBATCH").<br />
<br />
===== Example Singularity HOOMD-blue GPU batch job =====<br />
<br />
Below, replace 'username' with your CAC user name and fetch or create the needed files on the cluster head node in your home directory.<br />
<br />
first_test.py3 sample script can be found in /opt/ohpc/pub/containers/hoomd (OR it can be created from the simple example https://hoomd-blue.readthedocs.io/en/stable/index.html)<br />
<br />
software.simg Singularity image can be copied from /opt/ohpc/pub/containers/hoomd to your $HOME (It can also be fetched with singularity pull https://hoomd-blue.readthedocs.io/en/stable/installation.html)<br />
<br />
<br />
usage : sbatch singularity_hoomd_ex.run<br />
<br />
singularity_hoomd_ex.run example batch script (Remember to replace 'username' with your CAC user name):<br />
<pre><br />
#!/bin/bash<br />
#SBATCH --job-name="singularity_hoomd_ex"<br />
#SBATCH --output="singularity_hoomd_ex.%j.out"<br />
#SBATCH --error="singularity_hoomd_ex.%j.err"<br />
#SBATCH --nodes=1<br />
#SBATCH --ntasks-per-core=1<br />
#SBATCH --time=00:10:00<br />
set -x<br />
CONTAINER=/home/fs01/username/hoomd/software.simg<br />
SCRIPT=/home/fs01/username/hoomd/first_test.py3<br />
<br />
# placeholder for debugging<br />
module load singularity<br />
which singularity<br />
set +x<br />
echo "next command to run: singularity exec --nv ${CONTAINER} python3 ${SCRIPT}"<br />
singularity exec --nv ${CONTAINER} python3 ${SCRIPT}<br />
<br />
<br />
## debugging commands can be inserted above<br />
#echo hostname<br />
#hostname<br />
#echo "lspci | grep -iE ' VGA |NVI'"<br />
#lspci | grep -iE ' VGA |NVI'<br />
#echo nvclock<br />
#nvclock<br />
#echo "which nvcc"<br />
#which nvcc<br />
#echo "nvcc --version"<br />
#nvcc --version<br />
#echo PATH<br />
#echo "$PATH"<br />
#echo LD_LIBRARY_PATH<br />
#echo "$LD_LIBRARY_PATH"<br />
<br />
</pre><br />
<br />
===== Example Singularity mpi test batch job =====<br />
<br />
usage : sbatch singularity_mpi_ex.run<br />
<br />
Example singularity_mpi_ex.run script (remember to replace 'username' with your CAC user name):<br />
<pre><br />
#!/bin/bash<br />
#SBATCH --job-name="singularity_mpi_ex"<br />
#SBATCH --output="singularity_mpi_ex.%j.out"<br />
#SBATCH --error="singularity_mpi_ex.%j.err"<br />
#SBATCH --nodes=5<br />
#SBATCH --ntasks-per-core=1<br />
#SBATCH --time=00:01:00<br />
<br />
CONTAINER=/home/fs01/username/hoomd/software.simg<br />
<br />
module load singularity<br />
<br />
echo "script job head node $(hostname)"<br />
<br />
# silence mpi component warnings<br />
export MPI_MCA_mca_base_component_show_load_errors=0<br />
export PMIX_MCA_mca_base_component_show_load_errors=0<br />
<br />
echo "next command to run: mpirun --mca btl self,tcp --mca btl_tcp_if_include eth2 singularity exec ${CONTAINER} hostname"<br />
mpirun --mca btl self,tcp --mca btl_tcp_if_include eth2 singularity exec ${CONTAINER} hostname<br />
<br />
# clean up exit code , consult .out and .err files in working directory upon job completion for mpi debugging<br />
exit 0<br />
<br />
</pre><br />
<br />
===== Copy your data to /tmp to avoid heavy I/O from your nfs mounted $HOME !!! =====<br />
* We cannot stress enough how important this is to avoid delays on the file systems.<br />
<br />
<pre><br />
#!/bin/bash<br />
## -J sets the name of job<br />
#SBATCH -J TestJob<br />
<br />
## -p sets the partition (queue)<br />
#SBATCH -p normal<br />
## time is HH:MM:SS<br />
#SBATCH --time=00:01:30<br />
#SBATCH --cpus-per-task=15<br />
<br />
## define job stdout file<br />
#SBATCH -o testnormal-%j.out<br />
<br />
## define job stderr file<br />
#SBATCH -e testnormal-%j.err<br />
<br />
echo "starting $SLURM_JOBID at `date` on `hostname`"<br />
echo "my home dir is $HOME" <br />
<br />
## copying my data to a local tmp space on the compute node to reduce I/O<br />
MYTMP=/tmp/$USER/$SLURM_JOB_ID<br />
/usr/bin/mkdir -p $MYTMP || exit $?<br />
echo "Copying my data over..."<br />
cp -rp $SLURM_SUBMIT_DIR/mydatadir $MYTMP || exit $?<br />
<br />
## run your job executables here...<br />
<br />
echo "ended at `date` on `hostname`"<br />
echo "copy your data back to your $HOME" <br />
/usr/bin/mkdir -p $SLURM_SUBMIT_DIR/newdatadir || exit $?<br />
cp -rp $MYTMP $SLURM_SUBMIT_DIR/newdatadir || exit $?<br />
## remove your data from the compute node /tmp space<br />
rm -rf $MYTMP <br />
<br />
exit 0<br />
<br />
</pre><br />
<br />
Explanation: /tmp refers to a local directory that is found on each compute node. It is faster to use /tmp because when you read and write to it, the I/O does not have to go across the network, and it does not have to compete with the other users of a shared network drive (such as the one that holds everyone's /home).<br />
<br />
To look at files in /tmp while your job is running, you can ssh to the login node, then do a further ssh to the compute node that you were assigned. Then you can cd to /tmp on that node and inspect the files in there with <code>cat</code> or <code>less</code>.<br />
<br />
Note, if your application is producing 1000's of output files that you need to save, then it is far more efficient to put them all into a single tar or zip file before copying them into $HOME as the final step.<br />
<br />
==Software==<br />
=== Modules ===<br />
The 'lmod module' system is implemented for your use with listing and loading modules that will put you in the software environment needed.<br />
(For more information, type: '''module help''')<br />
<br />
To list current (default) modules loaded upon logging in:<br />
module list<br />
<br />
To list the available software and the software environment you can put yourself in according to what compiler is loaded in the above listing, type:<br />
module avail <br />
(to get a more complete listing, type: module spider)<br />
The software that is listed with "(L)" references what you have loaded. <br />
<br />
EXAMPLE:<br />
To be sure you are using the environment setup for cmake3, you would type:<br />
<pre><br />
module load cmake3<br />
module list (you will see cmake3 is loaded (L))<br />
* when done, either logout and log back in or type:<br />
module unload cmake3<br />
</pre><br />
To swap to a different set of modules per compiler, you can swap out your currently loaded compiler. <br />
<br />
EXAMPLE:<br />
<pre><br />
module swap gnu8 gnu7<br />
</pre><br />
You will then see a different set of available modules upon typing: module avail<br />
<br />
You can create your own modules and place them in your $HOME. <br />
Once created, type:<br />
module use $HOME/path/to/personal/modulefiles<br />
This will prepend the path to $MODULEPATH<br />
[type echo $MODULEPATH to confirm]<br />
<br />
Reference: [http://lmod.readthedocs.io/en/latest/020_advanced.html User Created Modules]<br />
<br />
:* It is usually possible to install software in your home directory.<br />
:* List installed software via rpms: '''rpm -qa'''. Use grep to search for specific software: rpm -qa | grep sw_name [i.e. rpm -qa | grep perl ]<br />
<br />
=== Build software from source into your home directory ($HOME) ===<br />
<pre><br />
* download and extract your source<br />
* cd to your extracted source directory<br />
./configure --prefix=$HOME/appdir<br />
[You need to refer to your source documentation to get the full list of options you can provide 'configure' with.]<br />
make<br />
make install<br />
<br />
The binary would then be located in ~/appdir/bin. <br />
* Add the following to your $HOME/.bashrc: <br />
export PATH="$HOME/appdir/bin:$PATH"<br />
* Reload the .bashrc file with source ~/.bashrc. (or logout and log back in)<br />
</pre><br />
<br />
=== Jupyter Notebook Server ===<br />
==== Installation ====<br />
Follow these steps to install and run your own Jupyter Notebook server on capecrystal.<br />
<br />
# Create a jupyter-notebook python virtual environment:<br />
:# Load <code>python/3.8.3</code> module: <code>module load python/3.8.3</code><br />
:# Create a jupyter-notebook python virtual environment: <code>python -m venv jupyter-notebook</code><br />
# Install jupyter notebook in the jupyter-notebook virtual environment:<br />
:# Activate jupyter-notebook virtual environment: <code> source jupyter-notebook/bin/activate</code><br />
:# Install jupyter notebook: <code>pip install jupyter notebook</code><br />
<br />
==== Start the Jupyter Notebook Server on Compute Node ====<br />
Use these steps to start the Jupyter Notebook server on a compute node:<br />
<br />
* Obtain shell access to a compute node and identify the compute node:<br />
<pre><br />
-bash-4.2$ srun --pty /bin/bash<br />
bash-4.2$ hostname<br />
c0002<br />
</pre><br />
<br />
* Start jupyter notebook on the compute node:<br />
<pre><br />
module load python/3.8.3<br />
source jupyter-notebook/bin/activate<br />
jupyter notebook<br />
</pre><br />
<br />
==== Access the Notebook ====<br />
[[File:Jupyter notebook port forwarding.jpg|center]]<br />
<br />
* In a new terminal on your client, ssh to capecrystal head node. Establish an ssh tunnel between the head node's forwarding port (10000 in this example) and port 8888 on the compute node (c0002 in this example). ''Leave this terminal open after the ssh tunnel is established''.<br />
<pre><br />
ssh -L 10000:localhost:8888 c0002<br />
</pre><br />
<br />
* In a new terminal on your client, establish an ssh tunnel between the client's source port (20000 in this example) and the forwarding port on the capecrystal head node. ''Leave this terminal open after the ssh tunnel is established''.<br />
<pre><br />
ssh -L 20000:localhost:10000 <user>@capecrystal.cac.cornell.edu<br />
</pre><br />
<br />
* On your client, point your web browser to:<br />
<pre><br />
http://localhost:20000<br />
</pre></div>Jhs43