Cluster 7
Latest revision as of 20:54, 3 February 2026

Introduction

A cluster built on the Rocky Linux 9 distribution, which falls under the RHEL umbrella.

How to Request Access

Contact a system administrator.

How to Login

  1. Remote
    ssh <user>@epsilon.compbio.ucsf.edu
  2. On-premise
    ssh <user>@login02.compbio.ucsf.edu
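
For convenience, the two login commands above can be wrapped in an SSH client configuration so that a short alias works in either case. The host aliases and the key path below are illustrative assumptions, not cluster policy; adjust them to your own account:

    # ~/.ssh/config (illustrative; set User and IdentityFile for your account)
    Host epsilon
        HostName epsilon.compbio.ucsf.edu
        User <user>
        IdentityFile ~/.ssh/id_ed25519

    Host login02
        HostName login02.compbio.ucsf.edu
        User <user>
        IdentityFile ~/.ssh/id_ed25519

With this in place, `ssh epsilon` replaces the full remote login command.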

SLURM Nodes

  • To list partitions
    sinfo
  • To list all nodes and their information
    sinfo -lNe
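
Once partitions and nodes are known, work is submitted through SLURM rather than run on the login node. A minimal batch script might look like the sketch below; the partition name and resource requests are placeholders and should be matched against the `sinfo` output above:

```shell
#!/bin/bash
#SBATCH --job-name=example        # job name shown in squeue
#SBATCH --partition=<partition>   # pick a partition from `sinfo`
#SBATCH --cpus-per-task=4         # CPU cores for this task
#SBATCH --mem=8G                  # memory for the job
#SBATCH --time=01:00:00           # wall-clock limit (HH:MM:SS)

# The commands below run on the allocated node
hostname
```

Submit it with `sbatch job.sh` and monitor it with `squeue -u $USER`.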

CPU Servers

  • cpu02 (128 Cores, 256 Threads, 1TB RAM)

GPU Servers

  • gpu01 (48 Cores, 96 Threads, 768GB RAM, 8 x RTX 2080 Ti)

NFS Servers

  • home01
  • hdd02

Global Modules/Software

  • To check the list of available modules
    # Long version
    module avail
    
    # OR
    
    # Short version
    ml av
  • To load one or more modules
    # Single module
    module load dock
    
    # OR
    
    ml dock
    
    # Multiple modules
    module load dock python/3.12 schrodinger
    
    # OR
    
    ml dock python/3.12 schrodinger
  • To list all loaded modules
    module list
    
    # OR
    
    ml
  • To unload a module
    module unload <module>
    # OR
    ml unload <module>
  • To unload all modules
    module purge
    # OR
    ml purge
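
In practice, module loads usually go inside the batch script itself, so that a job's environment is reproducible no matter which node it lands on. A sketch reusing the module names above (the partition name is a placeholder):

```shell
#!/bin/bash
#SBATCH --job-name=docking
#SBATCH --partition=<partition>

module purge                      # start from a clean environment
module load dock python/3.12      # load only what the job needs

python --version                  # confirm the loaded interpreter
```

Starting with `module purge` avoids surprises from modules left loaded in the shell that submitted the job.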