Cluster 2

= Priorities and Policies =
* [[Lab Security Policy]]
* [[Disk space policy]]
* Get a [[Cluster 2 account]] and get started


= Special purpose machines - all .ucsf.bkslab.org =
* '''sgehead''' - access to the cluster from within the lab. We recommend you use this - in addition to your desktop - for most purposes, including launching jobs on the cluster (submitting to the queue).
* '''portal''' - secure access to the cluster from off campus
* pgf - fortran compiler
* ppilot - pipeline pilot (our pipeline pilot license will be transferred here)
* shin, bet, and dalet are the three NFS servers. You should not need to log in.
* mysql1 - general purpose mysql server (like former scratch)
* pg1 - general purpose postgres server
* www - static webserver VM
* dock - dock licensing VM
* drupal -
* wordpress -
* public - runs public services ZINC, DOCK Blaster, SEA, DUDE
* happy - postgres production server
* ark - intern psql, like raiders in yyz
* nfs1 - disk server 1
* nfs2 - disk server 2
* nfs3 - disk server 3
* fprint - fingerprinting server
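The sgehead entry above implies a Grid Engine batch queue. As a rough sketch only (the script name, job name, and log paths below are hypothetical, not taken from this page), a minimal submit script might look like:

```shell
# dock_job.sh - minimal Grid Engine submit script (hypothetical example).
#$ -S /bin/bash        # interpret the job script with bash
#$ -cwd                # run from the submission directory
#$ -N dock_test        # job name shown in qstat output
#$ -o dock_test.out    # stdout log file
#$ -e dock_test.err    # stderr log file

echo "running on $(hostname)"
```

From sgehead, <code>qsub dock_job.sh</code> would queue the job and <code>qstat -u $USER</code> would show its state; both are standard Grid Engine commands.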
= Services =
* aleph - runs core administrative functions (as VMs under a libvirt hypervisor)
* bet -
* gimel -
* dalet -
* he -
* vav -
* zayin -
== SEA server ==
* fawlty
* mysql server is on msqlserver aka inception
* fingerprint server is on fingerprint aka darkcrystal
= By rack =
== Rack 0 - 10.20.0.* ==
Location: BH101, column 7 row 5
* aleph
* bet
* happy
== Rack 1 - 10.20.10.* ==
Location: BH101, column 1 row 0
*
*
== Rack 2 - 10.20.30.* ==
Location: BH
= How to administer DHCP / DNS in BH101 =
https://www.cgl.ucsf.edu/dns_dhcp/



''Revision as of 14:25, 23 April 2014''

Our new cluster at UCSF is described on this page. The physical equipment in Cluster 0 will be subsumed into this cluster when it replicates all the functions of the original. We expect this to happen later in 2014.


= Equipment, names, roles =
* The Hebrew alphabet is used for physical machines, Greek for VMs. Functions (e.g. sgehead) are aliases (CNAMEs).
* Cluster 2 is currently stored entirely in Rack 0, which is in Row 0, Position 4 of BH101 at 1700 4th St (Byers Hall). More racks will be added by July.
* Core services run on aleph, an HP DL160G5, under a libvirt hypervisor.
* There are 3 Silicon Mechanics Rackform nServ A4412.v4 machines, each comprising 4 computers of 32 cpu-cores, for a total of 384 cpu-cores.
* An HP DL165G7 (24-way) is sgehead.
* HP disks - new in 2014 - 40 TB RAID6 SAS.
* Silicon Mechanics NAS - new in 2014 - 76 TB RAID6 SAS.
* An HP DL160G5 and an MSA60 with 12 TB SAS - new in 2014.
* A Dell C6145 with 128 cores.
* Current total: 512 cores for queued jobs and 128 cores for infrastructure, databases, management, and ad hoc jobs.

= Disk organization =
* shin aka nas1 - mounted as /nfs/db/ = 72 TB SAS RAID6
* bet aka happy, internal: /nfs/store and psql (temp) = 10 TB SATA RAID10
* elated on happy: /nfs/work only = 36 TB SAS RAID6
* het (43) aka former vmware2 MSA60 - exports /nfs/home and /nfs/soft
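Given the exports listed above, a quick way to check which of these NFS paths are actually mounted on a given machine is a sketch like the following (the path list comes from this page; <code>/proc/mounts</code> is Linux-specific):

```shell
# Report which of the cluster's NFS paths are mounted on this host.
# The paths are the exports named above.
for p in /nfs/home /nfs/soft /nfs/db /nfs/store /nfs/work; do
    if grep -qs " $p " /proc/mounts; then
        echo "mounted: $p"
    else
        echo "missing: $p"
    fi
done
```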


About our cluster