Cluster 2

This is the default lab cluster.
 
{{TOCright}}


= Priorities and Policies =
* [[Lab Security Policy]]
* [[Disk space policy]]
* [[Backups]] policy.
* [[Portal system]] for off-site ssh cluster access.
* Get a [[Cluster 2 account]] and get started


= Special machines =
Normally, you will just ssh to sgehead (aka gimel) from portal.ucsf.bkslab.org; from there you can do almost anything, including job management.  A few things require licensing and must be done on special machines.
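For example, a typical session from off campus might look like the sketch below (the username and job script are placeholders, and the exact SGE options depend on your job):

<pre>
# Hop through the portal, then onto sgehead (replace 'yourname' with your cluster account)
ssh yourname@portal.ucsf.bkslab.org
ssh gimel

# On sgehead, submit a job to the queue and check its status (myjob.sh is a placeholder script)
qsub -cwd -N myjob myjob.sh
qstat -u yourname
</pre>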
   


hypervisor 'he' hosts:
* alpha - critical; runs Foreman, DNS, DHCP, and other important services
* beta - runs LDAP authentication
* epsilon - portal.ucsf.bkslab.org - cluster gateway from the public internet
* gamma - Sun Grid Engine qmaster
* phi - mysqld/excipients
* psi - for using the PG Fortran compiler
* ppilot is at http://zeta:9944/ - you must be on the Cluster 2 private network to use it
* tau - web server for ZINC
* no other special machines
* zeta - PSICQUIC / Pipeline Pilot
* sigma - can definitely go off and stay off; it was planned as a fingerprinting server but never set up


hypervisor 'aleph2' hosts:
* alpha7 - intended as the future architecture VM of the cluster (DNS/DHCP/Puppet/Foreman/Ansible); CentOS 7
* kappa - licensing; ask me ("I have no clue what this licenses. Turned off." - ben)
* rho - hosts this wiki and also bkslab.org
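The hypervisors themselves can be inspected with the standard libvirt tools, assuming the guests above are managed through libvirt (a sketch; the VM names must match what virsh reports):

<pre>
# On 'he' or 'aleph2', list all defined guests and their state
virsh list --all

# Start or gracefully shut down a guest, e.g. the wiki VM rho
virsh start rho
virsh shutdown rho
</pre>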


= Notes =  
* To check out from SVN, use svn over svn+ssh (see the sketch below)
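A minimal checkout sketch; the host and repository path below are placeholders, not the lab's actual repository URL:

<pre>
# Check out a working copy over svn+ssh (hypothetical host and path)
svn checkout svn+ssh://yourname@svnhost.ucsf.bkslab.org/svn/repo repo
</pre>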


= Hardware and physical location =
* 1856 cpu-cores for queued jobs
* 128 cpu-cores for infrastructure, databases, management, and ad hoc jobs
* 788 TB of high-quality NFS-available disk
* Our policy is to have 4 GB RAM per cpu-core unless otherwise specified.
* Machines older than 3 years may have 2 GB/core, and machines older than 6 years may have 1 GB/core.
* Cluster 2 is currently housed entirely in Rack 0, which is in Row 0, Position 4 of BH101 at 1700 4th St (Byers Hall).
* Central services are on he, aleph2, and bet.
* CPU
** 3 Silicon Mechanics Rackform nServ A4412.v4 servers, each comprising 4 computers of 32 cpu-cores, for a total of 384 cpu-cores
** 1 Dell C6145 with 128 cores
** An HP DL165G7 (24-way) is sgehead
** More computers to come from Cluster 0, when Cluster 2 is fully ready
* DISK
** HP disks - 40 TB RAID6 SAS (new in 2014)
** Silicon Mechanics NAS - 77 TB RAID6 SAS (new in 2014)
** An HP DL160G5 and an MSA60 with 12 TB SAS (disks new in 2014)


= Naming convention =
* The Hebrew alphabet is used for physical machines
* Greek letters for VMs
* Functions (e.g. sgehead) are aliases (CNAMEs)
* compbio.ucsf.edu and ucsf.bkslab.org domains both supported.
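To see how a function name maps onto a physical or virtual machine, query DNS for the alias; a sketch (the records returned are whatever is currently configured, the names below are just the documented example):

<pre>
# sgehead is a CNAME; these queries show which host it currently points at
host sgehead.ucsf.bkslab.org
dig +short sgehead.compbio.ucsf.edu CNAME
</pre>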


= Disk organization =
* shin aka nas1, mounted as /nfs/db/ = 72 TB SAS RAID6. NOTE: ON BAND: $ sudo /usr/local/RAID\ Web\ Console\ 2/startupui.sh to interact with the RAID controller; username: raid, pw: c2 pass
* bet aka happy, internal: /nfs/store and psql (temp) as 10 TB SATA RAID10
* elated on happy: /nfs/work only as 36 TB SAS RAID6
* dalet exports /nfs/home & /nfs/home2
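A quick way to confirm which of these exports are mounted on a given node and how full they are (a sketch using the mount points listed above):

<pre>
# Show the NFS-backed filesystems and their free space
df -h /nfs/db /nfs/store /nfs/work /nfs/home /nfs/home2
</pre>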


= Special purpose machines - all .ucsf.bkslab.org =
* sgehead aka gimel.cluster - nearly the only machine you'll need
* psi.cluster - PG Fortran compiler (a machine with only a .cluster address has no public address)
 
* portal aka epsilon - secure access
 
* zeta.cluster - Pipeline Pilot
* shin, bet, and dalet are the three NFS servers. You should not need to log in to them.
 
* RAID console for shin: on the teague desktop, run /usr/local/RAID Web Console 2/startupui.sh, connect to shin on the public network, and log in as raid / C2


* mysql1.cluster - general purpose mysql server (like former scratch)
* pg1.cluster - general purpose postgres server
* fprint.cluster - fingerprinting server
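To reach the general-purpose database servers from inside the cluster, the standard clients should work; a sketch (database and account names are placeholders):

<pre>
# MySQL on mysql1.cluster and PostgreSQL on pg1.cluster (placeholder user/database names)
mysql -h mysql1.cluster -u yourname -p yourdb
psql -h pg1.cluster -U yourname yourdb
</pre>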


[[About our cluster]]
