Cluster 8

Latest revision as of 19:06, 19 February 2026

We are still learning about FAC/coreHPC.

You will need to get separate (?) FAC and coreHPC accounts.

  • FAC is for storage.
  • coreHPC is for compute.
  • It is more complicated than that, but this is a reasonable simplification.
  • We collectively refer to FAC/coreHPC as "Cluster 8".
  • We are attempting two strategies for Cluster 8:
    • Upload ZINC on the fly, dock it, and then throw it away.
    • NFS-mount our cluster on coreHPC, so data gets pulled as needed.
  • ZINC thus lives on our cluster (currently hosted on Cluster 2, cross-mounted read-only to Cluster 7).
  • A copy of 50% of ZINC lives on Wynton/Cluster 3.
  • The pipe to FAC is 100 Gbps. Assuming we get 50% of that, we can transfer 1 TB in about 3 minutes and 3 PB in about 6 days.
  • Thus: ftp/dock/delete.
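The ftp/dock/delete strategy can be sketched as a small shell loop. This is only a sketch: the tranche list (`tranches.txt`), the `DOCK_CMD` placeholder, and the scratch path default are assumptions, not the lab's actual tooling.

```shell
#!/usr/bin/env bash
# Sketch of the fetch/dock/delete loop for ZINC tranches.
# tranches.txt and DOCK_CMD are placeholders; substitute the real
# tranche list and docking binary.
set -euo pipefail

WORKDIR="${WORKDIR:-/mnt/scratch/user/${USER:-$(id -un)}/zinc}"
DOCK_CMD="${DOCK_CMD:-echo docking}"   # stand-in for the real docking command

# Fetch one tranche, dock it, then throw it away.
process_tranche() {
    local url="$1"
    local f="$WORKDIR/$(basename "$url")"
    mkdir -p "$WORKDIR"
    curl -sSfL "$url" -o "$f"   # pull the tranche on the fly
    $DOCK_CMD "$f"              # dock it
    rm -f "$f"                  # delete it; nothing accumulates on FAC
}

# Usage: loop over a tranche list, e.g.
#   while read -r url; do process_tranche "$url"; done < tranches.txt
```

Keeping only one tranche on disk at a time is the point: the input never needs to live on Cluster 8 storage.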
  • To get to coreHPC:
ssh <user>@chpc-ucsf-bastion-vm1.corehpc.ucsf.edu
  • Then, from the bastion, to get to the Slurm head node:
ssh chpc-ucsf-login-vm1
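The two-hop login can be collapsed into a single command with an SSH ProxyJump entry. The `Host` aliases below are our own shorthand, not official names; only the two hostnames come from the instructions above.

```
# ~/.ssh/config — jump through the bastion to the Slurm head node
Host chpc-bastion
    HostName chpc-ucsf-bastion-vm1.corehpc.ucsf.edu
    User <user>

Host chpc-login
    HostName chpc-ucsf-login-vm1
    User <user>
    ProxyJump chpc-bastion
```

With this in place, `ssh chpc-login` reaches the head node directly, and `scp`/`rsync` to it work the same way.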
  • Disks are:
/mnt/scratch/user/<user>
/mnt/home/<user>
/home/remote/<user>  ; you land here when you log in
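A minimal Slurm batch skeleton that stages work in scratch rather than home. The `#SBATCH` options and the off-cluster fallback are guesses, not confirmed coreHPC settings; see the submission-examples link under "Learn more" for the real options.

```shell
#!/usr/bin/env bash
#SBATCH --job-name=dock-test
#SBATCH --output=%x-%j.out
#SBATCH --time=00:10:00

# Hypothetical job skeleton: do I/O in scratch, not in home.
USER_NAME="${USER:-$(id -un)}"
SCRATCH="/mnt/scratch/user/$USER_NAME"
# Fallback so the sketch also runs off-cluster:
[ -d /mnt/scratch ] || SCRATCH="${TMPDIR:-/tmp}/scratch-$USER_NAME"

JOBDIR="$SCRATCH/job-${SLURM_JOB_ID:-local}"
mkdir -p "$JOBDIR"
cd "$JOBDIR"
echo "running on $(hostname) in $PWD"
```

Submit with `sbatch script.sh` from the head node; outside Slurm the job directory falls back to `job-local`.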

Learn more

https://wiki.library.ucsf.edu/spaces/CHPC/pages/720396955/CoreHPC+Access+Primer
https://it.ucsf.edu/service/corehpc
https://wiki.library.ucsf.edu/spaces/CHPC/pages/736476672/CoreHPC+Submission+Examples
https://wiki.library.ucsf.edu/spaces/CHPC/pages/751928275/CoreHPC+Technical+Overview

Anyone who uses coreHPC, please take notes, update this and other pages, and teach the rest of the lab how to use it. Thank you.