Cluster 2


Our new cluster at UCSF is described on this page. STATUS: We have begun accepting users. We are actively consolidating software, data, and users from Clusters 0 and 1 onto this cluster, and we expect 90% of what Clusters 0 and 1 support to be available on Cluster 2 by July 1, 2014. When that happens, hardware will be migrated from Cluster 0 to Cluster 2 until, by around September 1, 2014, Cluster 2 has 1000 cpu-cores and 200 TB of high-quality disk and Cluster 0 retains only 50 cpu-cores and 20 TB of disk. Cluster 0 will continue to function in this "legacy support" mode until 2015, at which point, we hope, it can be turned off completely because it is no longer needed.

Priorities and Policies

Equipment

  • 512 cpu-cores for queued jobs and 128 cpu-cores for infrastructure, databases, management, and ad hoc jobs; 128 TB of high-quality disk and 32 TB of other disk.
  • We expect this to grow to over 1200 cpu-cores and 200 TB in 2014 once Cluster 0 is merged with Cluster 2.
  • Our policy is to provide 4 GB of RAM per cpu-core unless otherwise specified; machines older than 3 years may have 2 GB/core, and machines older than 6 years 1 GB/core.
  • The Hebrew alphabet is used for physical machines and the Greek alphabet for VMs. Functional names (e.g. sgehead) are aliases (CNAMEs); see the lookup sketch after this list.
  • Cluster 2 is currently housed entirely in Rack 0, which is in Row 0, Position 4 of BH101 at 1700 4th St (Byers Hall). More racks will be added by July.
  • Central services run on aleph, an HP DL160G5, and bet, an HP xxxx.
  • CPU
    • 3 Silicon Mechanics Rackform nServ A4412.v4 units, each comprising 4 computers with 32 cpu-cores apiece, for a total of 384 cpu-cores.
    • 1 Dell C6145 with 128 cores.
    • An HP DL165G7 (24-way) serves as sgehead.
  • DISK
    • HP disks - new in 2014 - 40 TB RAID6 SAS
    • Silicon Mechanics NAS - new in 2014 - 76 TB RAID6 SAS
    • An HP DL160G5 and an MSA60 with 12 TB SAS - new in 2014.
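
To see which physical machine currently backs a functional name, the short Python sketch below resolves an alias and prints its canonical host. This is a minimal illustration, assuming it is run on a machine that can resolve the cluster's DNS (e.g. sgehead); the host names are the aliases mentioned on this page, and everything else is the Python standard library.

  import socket

  # Functional names such as "sgehead" are CNAMEs pointing at a physical
  # machine named after a Hebrew letter (VMs use Greek letters).
  # gethostbyname_ex() returns (canonical_name, aliases, ip_addresses).
  for name in ["sgehead.ucsf.bkslab.org", "portal.ucsf.bkslab.org"]:
      try:
          canonical, aliases, addrs = socket.gethostbyname_ex(name)
          print(f"{name} -> {canonical} ({', '.join(addrs)})")
      except socket.gaierror as err:
          print(f"{name}: lookup failed ({err})")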

Disk organization

  • shin (aka nas1), mounted as /nfs/db/: 72 TB SAS RAID6
  • bet (aka happy), internal: /nfs/store and psql (temporary): 10 TB SATA RAID10
  • elated on happy: /nfs/work only: 36 TB SAS RAID6
  • het (43), aka the former vmware2, with an MSA60: exports /nfs/home and /nfs/soft
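
To check how these exports look from a compute node, the sketch below reports total and free space for each mount point named above. This is a minimal illustration, assuming the NFS filesystems are mounted at those paths on the machine where you run it (for example sgehead); it uses only the Python standard library.

  import shutil

  # Mount points listed in this section.
  MOUNTS = ["/nfs/db", "/nfs/store", "/nfs/work", "/nfs/home", "/nfs/soft"]

  for path in MOUNTS:
      try:
          usage = shutil.disk_usage(path)
          print(f"{path:12s} {usage.total / 1e12:6.2f} TB total, "
                f"{usage.free / 1e12:6.2f} TB free")
      except FileNotFoundError:
          print(f"{path:12s} not mounted on this machine")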

Special purpose machines - all .ucsf.bkslab.org

  • sgehead - We recommend you use sgehead, in addition to your desktop, for most purposes, including launching jobs on the cluster.
  • pgf - fortran compiler
  • portal - secure access from
  • ppilot - pipeline pilot
  • shin, bet, and dalet are the three NFS servers; you should not need to log in to them.
  • mysql1 - general purpose mysql server (like former scratch)
  • pg1 - general purpose postgres server
  • fprint - fingerprinting server
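
As an example of how the database servers are intended to be used, the sketch below opens a connection to the general-purpose Postgres server pg1 and prints its version string. This is a hypothetical illustration: it assumes the psycopg2 driver is installed, and the database name and username are placeholders to be replaced with the ones you were actually given; mysql1 is used analogously with a MySQL client library.

  import psycopg2  # assumes the psycopg2 driver is available

  # Connect to the general-purpose Postgres server. The database name and
  # username below are placeholders, not real accounts.
  conn = psycopg2.connect(
      host="pg1.ucsf.bkslab.org",
      dbname="your_database",   # placeholder
      user="your_username",     # placeholder
  )
  with conn, conn.cursor() as cur:
      cur.execute("SELECT version()")
      print(cur.fetchone()[0])
  conn.close()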

About our cluster