Cluster 7

From DISI

== Introduction ==
A cluster built on the Rocky 9 Linux distribution, which falls under the RHEL umbrella.

== How to Request Access ==
Contact a system administrator.

== How to Login ==
#Remote
#: <source>ssh <user>@epsilon.compbio.ucsf.edu</source>
#On-premise
#: <source>ssh <user>@login02.compbio.ucsf.edu</source>
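Both entry points take the same <code>ssh</code> form, so a small shell helper can pick the host. A sketch only — the <code>c7login</code> name and <code>CLUSTER_USER</code> variable are hypothetical; the hostnames are the two listed above:

```shell
# Hypothetical helper that prints the right ssh command for each entry point.
# CLUSTER_USER is an assumed variable; it falls back to your local $USER.
c7login() {
  host="epsilon.compbio.ucsf.edu"          # default: remote entry point
  if [ "$1" = "--on-prem" ]; then
    host="login02.compbio.ucsf.edu"        # on-premise entry point
  fi
  echo ssh "${CLUSTER_USER:-$USER}@$host"  # print rather than exec, for inspection
}

c7login            # remote login command
c7login --on-prem  # on-premise login command
```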


== SLURM Nodes ==
*To list partitions
*:<source>
sinfo
</source>
 
*To list all nodes and their information
*:<source>
sinfo -lNe
</source>
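Because <code>sinfo</code> output is plain text, it pipes cleanly into standard tools. A sketch filtering idle nodes from <code>sinfo</code>-style node/state columns — the sample lines below are invented for illustration, not real node data:

```shell
# Filter node names in the "idle" state from sinfo-style "%N %t" columns.
# `sample` imitates `sinfo -N -h -o "%N %t"` output; the states are made up.
sample='cpu03 idle
cpu04 alloc
cpu05 idle
gpu01 mix'
printf '%s\n' "$sample" | awk '$2 == "idle" { print $1 }'
```

On the cluster itself you would pipe <code>sinfo</code> directly into the same <code>awk</code> filter instead of using a captured sample.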
==== CPU Servers ====
*cpu02 (128 Cores, 256 Threads, 1TB RAM)
*cpu[03-17] (48 Cores, 96 Threads, 256GB RAM)


==== GPU Servers ====
*gpu01 (48 Cores, 96 Threads, 758GB RAM, 8 x RTX 2080 Ti)
*gpu02 (48 Cores, 96 Threads, 758GB RAM, 4 x RTX 3090)


==== NFS Servers ====
*home01
*hdd02


== Global Modules/Software ==
*To check the list of available modules
*:<source>
# Long version
module available


# OR


# Short version
ml av
</source>
* To load one or more modules
*: <source>
# Single module
module load dock


# OR


ml dock


==================================


# Multi-module
module load dock python/3.12 schrodinger


# OR


ml dock python/3.12 schrodinger
</source>
*To list all loaded modules
*: <source>
module list
# OR
ml
</source>
*To unload a module
*:<source>
module unload <module>
# OR
ml unload <module>
</source>
*To unload all modules
*:<source>
module purge
# OR
ml purge
</source>
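Module loads usually go inside the batch script itself, so every job runs with a reproducible environment. A minimal sbatch sketch, assuming the modules from the examples above — the job name, partition name, CPU count, and time limit are illustrative, not cluster policy (check <code>sinfo</code> for real partition names):

```shell
#!/bin/bash
#SBATCH --job-name=dock_demo    # illustrative job name
#SBATCH --partition=cpu         # assumed partition; check `sinfo` for real names
#SBATCH --cpus-per-task=4
#SBATCH --time=01:00:00

module purge                    # start clean so stale loads don't leak in
module load dock python/3.12    # modules from the examples above

srun python --version           # replace with the real workload
```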
== Guidelines and Recommendations ==
* /nfs/home is for code (backed up to GitHub or GitLab).
* /nfs/hdd01/work/<yourid> and /nfs/hdd02/work/<yourid> are good places to keep docking jobs. Write to your sysadmin to set up a folder for you.
* Jobs under SLURM should write to node-local /scratch while running, then move results back to your work area and clean up when the job ends.
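The scratch guideline can be sketched as a stage-in/compute/stage-out script. This version substitutes temporary directories for the real paths so the pattern runs anywhere — on the cluster, WORK would be your directory under /nfs/hdd01/work or /nfs/hdd02/work, and SCRATCH a job directory under local /scratch:

```shell
#!/bin/sh
# Sketch of the recommended job I/O pattern: stage input into node-local
# scratch, compute there, copy results back, then clean up the scratch area.
# The temp-dir fallbacks are stand-ins so the sketch runs outside the cluster.
WORK=${WORK:-$(mktemp -d)}        # stand-in for your NFS work area
SCRATCH=${SCRATCH:-$(mktemp -d)}  # stand-in for node-local /scratch
JOB="$SCRATCH/demo_job"

mkdir -p "$JOB"
echo "input data" > "$WORK/input.txt"             # pretend input in the work area
cp "$WORK/input.txt" "$JOB/"                      # 1. stage in to scratch
tr a-z A-Z < "$JOB/input.txt" > "$JOB/output.txt" # 2. compute on scratch
cp "$JOB/output.txt" "$WORK/"                     # 3. stage results back
rm -rf "$JOB"                                     # 4. clean up scratch
```

In a real batch script, steps 2–4 would bracket the actual docking run, and the cleanup step keeps /scratch free for the next job on the node.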
[[Category:C7]]

Latest revision as of 18:43, 20 March 2026
