Cluster 7
Introduction
Cluster 7 is built on Rocky Linux 9, a distribution in the Red Hat Enterprise Linux (RHEL) family.
How to Request Access
Contact a system administrator.
How to Log In
- Remote
ssh <user>@epsilon.compbio.ucsf.edu
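If you connect often, an SSH client alias saves typing. A minimal sketch of a ~/.ssh/config entry; the alias name "epsilon" is illustrative, and <user> is your account name as above:
Host epsilon
    HostName epsilon.compbio.ucsf.edu
    User <user>
With this entry, ssh epsilon is equivalent to the full command above.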
SLURM Nodes
- To list partitions
sinfo
- To list all nodes and their information
sinfo -lNe
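Jobs are submitted to these partitions with sbatch. A minimal sketch of a batch script, with <partition> standing in for one of the names sinfo reports; the resource values are illustrative:
#!/bin/bash
#SBATCH --job-name=hello
#SBATCH --partition=<partition>   # pick a partition from sinfo
#SBATCH --cpus-per-task=4
#SBATCH --mem=8G
#SBATCH --time=01:00:00

hostname
Submit it with sbatch hello.sh; by default, output is written to slurm-<jobid>.out in the submission directory.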
CPU Servers
- cpu02 (128 Cores, 256 Threads, 1TB RAM)
GPU Servers
- gpu01 (48 Cores, 96 Threads, 758 GB RAM, 8 x RTX 2080 Ti)
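GPUs are typically requested through SLURM's generic resources (GRES). A sketch of an interactive session on gpu01, assuming the cluster exposes its GPUs as gres type "gpu"; check the sinfo -lNe output above for the actual configuration:
srun --nodelist=gpu01 --gres=gpu:1 --pty bash
nvidia-smi   # confirm the allocated GPU is visible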
NFS Servers
- home01
- hdd02
Global Modules/Software
- To check the list of available modules
module avail   # long version
ml av          # short version
- To load one or more modules
module load dock                            # single module
ml dock
module load dock python/3.12 schrodinger    # multiple modules
ml dock python/3.12 schrodinger
- To list all loaded modules
module list
ml
- To unload a module
module unload <module>
ml unload <module>
- To unload all modules
module purge
ml purge
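Module loads apply only to the current shell, so batch jobs must load their own modules. A sketch combining the SLURM and module commands above, using module names listed on this page; <partition> is a placeholder as before:
#!/bin/bash
#SBATCH --job-name=dock_run
#SBATCH --partition=<partition>   # pick a partition from sinfo

module purge                      # start from a clean environment
module load dock python/3.12
python --version                  # verify the loaded interpreter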