Cluster 2

This is the default lab cluster.

Priorities and Policies

Special machines

Normally, you will just ssh to sgehead (aka gimel) from portal.ucsf.bkslab.org, where you can do almost anything, including job management (see the login sketch after the list below). A few things require licensing and must be done on special machines:

  • psi for the PGI (Portland Group) Fortran compiler
  • ppilot (Pipeline Pilot) is at http://zeta:9944/ - you must be on the Cluster 2 private network to use it
  • there are no other special machines
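
A minimal login sketch, assuming you are coming from outside the cluster; the username jdoe is a placeholder, so substitute your own account:

  # from outside, land on the portal first
  ssh jdoe@portal.ucsf.bkslab.org
  # then hop to sgehead (aka gimel) for job management and day-to-day work
  ssh sgehead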

Notes

  • to check out from SVN, use the svn+ssh protocol (see the sketch below)
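
For example, a checkout might look like the following; the repository host and path here are placeholders, not the actual repository location:

  # check out a working copy over SSH (host and path are hypothetical)
  svn checkout svn+ssh://jdoe@portal.ucsf.bkslab.org/svn/myrepo my-working-copy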

Hardware and physical location

  • 1232 cpu-cores for queued jobs
  • 128 cpu-cores for infrastructure, databases, management and ad hoc jobs.
  • 128 TB of high quality NFS-available disk
  • 32 TB of other disk
  • We expect this to grow to over 1500 cpu-cores and 200 TB in late 2016 once Cluster 0 is merged with Cluster 2
  • Our policy is to have 4 GB RAM per cpu-core unless otherwise specified (see the submission sketch after this list).
  • Machines older than 3 years may have 2 GB/core, and machines older than 6 years may have 1 GB/core.
  • Cluster 2 is currently housed entirely in Rack 0, which is in Row 0, Position 4 of BH101 at 1700 4th St (Byers Hall).
  • More racks will be added (from Cluster 0) in summer 2016.
  • Central services are on aleph (an HP DL160G5) and bet (an HP xxxx).
  • CPU
    • 3 Silicon Mechanics Rackform nServ A4412.v4 units, each comprising 4 computers with 32 cpu-cores, for a total of 384 cpu-cores.
    • 1 Dell C6145 with 128 cores.
    • An HP DL165G7 (24-way) serves as sgehead.
    • More computers will come from Cluster 0 once Cluster 2 is fully ready.
  • DISK
    • HP disks - 40 TB RAID6 SAS (new in 2014)
    • Silicon Mechanics NAS - 77 TB RAID6 SAS (new in 2014)
    • An HP DL160G5 and an MSA60 with 12 TB SAS (disks new in 2014)
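
Given the 4 GB per core policy above, queued jobs should request memory explicitly. A hedged sketch, assuming the scheduler behind sgehead is Grid Engine and that the site exposes the stock mem_free resource; actual resource names depend on local configuration:

  # ask the scheduler for one core's worth of memory (4 GB);
  # mem_free is a standard Grid Engine resource, but site-specific names may differ
  qsub -l mem_free=4G myjob.sh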

Naming convention

  • The Hebrew alphabet is used for physical machines.
  • Greek letters are used for VMs.
  • Functions (e.g. sgehead) are aliases (CNAMEs).
  • Both the compbio.ucsf.edu and ucsf.bkslab.org domains are supported (see the lookup example below).
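
You can check the mapping for any name with an ordinary DNS lookup; the answer shape below is illustrative and depends on the current records:

  # a function name is a CNAME pointing at a physical (Hebrew-letter) machine
  host sgehead.ucsf.bkslab.org
  # expected shape: "sgehead.ucsf.bkslab.org is an alias for gimel.ucsf.bkslab.org."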

Disk organization

  • shin aka nas1, mounted as /nfs/db = 72 TB SAS RAID6
  • bet aka happy, internal: /nfs/store and psql (temp) as 10 TB SATA RAID10
  • elated on happy: /nfs/work only, as 36 TB SAS RAID6
  • het (43), aka the former vmware2, with an MSA60: exports /nfs/home and /nfs/soft (see the mount check below)
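
To confirm what is mounted where on a given node, the standard tools are enough; the exact list varies by machine:

  # show capacity for the NFS mounts (paths as listed above)
  df -h /nfs/db /nfs/store /nfs/work /nfs/home /nfs/soft
  # or show which server exports each mount
  mount | grep nfs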

Special purpose machines - all .ucsf.bkslab.org

  • sgehead aka gimel.cluster - nearly the only machine you'll need.
  • psi.cluster - PGI Fortran compiler (a machine with only a .cluster address has no public address)
  • portal aka epsilon - secure access
  • zeta.cluster - Pipeline Pilot
  • shin, bet, and dalet are the three NFS servers. You should not need to log in to them.
  • mysql1.cluster - general-purpose MySQL server (like the former scratch); see the client sketch after this list
  • pg1.cluster - general-purpose PostgreSQL server
  • fprint.cluster - fingerprinting server
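
Connecting to the database servers from inside the cluster uses the stock clients; the database and account names below are placeholders:

  # MySQL on mysql1.cluster (jdoe and mydb are placeholders)
  mysql -h mysql1.cluster -u jdoe -p mydb
  # PostgreSQL on pg1.cluster, same caveat
  psql -h pg1.cluster -U jdoe -d mydb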

About our cluster