Cluster 7
Introduction
A cluster built on the Rocky Linux 9 distribution, which is part of the RHEL family.
How to Request Access
Contact a system administrator.
How to Log In
- Remote
ssh <user>@epsilon.compbio.ucsf.edu
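- Optionally, add a host alias to your local ~/.ssh/config so you can connect without typing the full hostname (a minimal sketch; the alias name is arbitrary and <user> is your cluster username)
Host epsilon
    HostName epsilon.compbio.ucsf.edu
    User <user>
- Then connect with
ssh epsilon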
SLURM Nodes
- To list partitions
sinfo
- To list all nodes and their information
sinfo -lNe
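- To start an interactive shell on a compute node (a sketch; the partition name and resource sizes here are illustrative assumptions, check sinfo for the real partition names)
srun --partition=cpu --cpus-per-task=4 --mem=8G --pty bash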
CPU Servers
- cpu02 (128 Cores, 256 Threads, 1TB RAM)
- cpu[03-17] (48 Cores, 96 Threads, 256GB RAM)
GPU Servers
- gpu01 (48 Cores, 96 Threads, 758GB RAM, 8 x RTX 2080 Ti)
- gpu02 (48 Cores, 96 Threads, 758GB RAM, 4 x RTX 3090)
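- To request a GPU in a job, use the gres option (a sketch; the partition name and GPU count are assumptions for this cluster)
srun --partition=gpu --gres=gpu:1 --pty bash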
NFS Servers
- home01
- hdd02
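- To check free space on the NFS mounts (a sketch; these mount points are taken from the guidelines below and may differ on your node)
df -h /nfs/home /nfs/hdd01 /nfs/hdd02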
Global Modules/Software
- To check the list of available modules
# Long form
module avail
# Short form
ml av
- To load one or more modules
# Single module
module load dock
ml dock
# Multiple modules
module load dock python/3.12 schrodinger
ml dock python/3.12 schrodinger
- To list all loaded modules
# Long form
module list
# Short form (with no arguments, ml lists the loaded modules)
ml
- To unload a module
module unload <module>
ml unload <module>
- To unload all modules
module purge
ml purge
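- To load modules automatically at login, append the load command to your shell startup file (a sketch, assuming bash and the dock module from the examples above)
echo 'module load dock' >> ~/.bashrc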
Guidelines and Recommendations
- /nfs/home is for code (back it up to GitHub or GitLab)
- /nfs/hdd01/work/<yourid> and /nfs/hdd02/work/<yourid> are good places to keep docking jobs. Email your sysadmin to set up a folder for you.
- Jobs under Slurm should write to the node-local /scratch while running, then copy results back to your work area and clean up /scratch when the job ends (see the sketch below)
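- A minimal batch-script sketch of this pattern (the job name, partition, module, and work path are illustrative assumptions; adapt them to your own job)
#!/bin/bash
#SBATCH --job-name=dock_example
#SBATCH --partition=cpu
#SBATCH --cpus-per-task=4
#SBATCH --mem=8G

module load dock

# Work in node-local scratch while the job runs
SCRATCH_DIR=/scratch/$USER/$SLURM_JOB_ID
mkdir -p "$SCRATCH_DIR"
cd "$SCRATCH_DIR"

# ... run the actual job here ...

# Copy results back to the work area, then clean up scratch
cp -r "$SCRATCH_DIR" /nfs/hdd02/work/<yourid>/
rm -rf "$SCRATCH_DIR"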