Cluster 2
Revision as of 16:53, 26 May 2014
Our new cluster at UCSF is described on this page. STATUS: we have begun accepting users and are actively consolidating software, data, and users from Clusters 0 and 1 onto this cluster. We expect 90% of Cluster 0 and Cluster 1 to be supported on Cluster 2 by July 1, 2014. Once that happens, hardware will be migrated from Cluster 0 to Cluster 2 until, by around September 1, 2014, Cluster 2 has 1000 cpu-cores and 200 TB of high-quality disk and Cluster 0 retains only 50 cpu-cores and 20 TB of disk. Cluster 0 will continue to run in this "legacy support" mode until 2015, at which point we hope to turn it off completely.
Priorities and Policies
- Lab Security Policy
- Disk space policy
- Backups policy.
- Portal system for off-site ssh cluster access.
- Get a Cluster 2 account and get started
Special machines
Normally, you will just ssh to sgehead aka gimel from portal.ucsf.bkslab.org where you can do almost anything, including job management. A few things require licensing and must be done on special machines.
- psi for using the PG fortran compiler
- Pipeline Pilot (ppilot) is at http://zeta:9944/ - you must be on the Cluster 2 private network to use it
- no other special machines
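The two-hop login above (portal first, then sgehead) can be captured in an ssh client config. A minimal sketch, assuming an OpenSSH client with ProxyJump support (7.3+); the "Host" aliases here are made up, only the hostnames come from this page:

```
# ~/.ssh/config fragment (sketch) for reaching sgehead via portal
Host portal
    HostName portal.ucsf.bkslab.org

Host sgehead
    HostName gimel.cluster
    # Older clients can use instead: ProxyCommand ssh -W %h:%p portal
    ProxyJump portal
```

With this in place, "ssh sgehead" from off-site tunnels through portal automatically; gimel.cluster is resolved by portal, so it needs no public address.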
Notes
- to get code from SVN, use the svn+ssh protocol (e.g. svn co svn+ssh://<host>/<path-to-repo>)
Hardware and physical location
- 512 cpu-cores for queued jobs
- 128 cpu-cores for infrastructure, databases, management and ad hoc jobs.
- 128 TB of high quality NFS-available disk
- 32 TB of other disk
- We expect this to grow to over 1200 cpu-cores and 200 TB in late 2014 once Cluster 0 is merged with Cluster 2
- Our policy is to have 4 GB RAM per cpu-core unless otherwise specified.
- Machines older than 3 years may have 2 GB/core, and machines older than 6 years may have 1 GB/core.
- Cluster 2 is currently stored entirely in Rack 0 which is in Row 0, Position 4 of BH101 at 1700 4th St (Byers Hall).
- More racks will be added (from cluster 0) in summer 2014.
- Central services are on aleph, an HP DL160G5 and bet, an HP xxxx.
- CPU
- 3 Silicon Mechanics Rackform nServ A4412.v4 units, each comprising 4 computers of 32 cpu-cores, for a total of 384 cpu-cores.
- 1 Dell C6145 with 128 cores.
- An HP DL165G7 (24-way) is sgehead
- more computers to come from Cluster 0, when Cluster 2 is fully ready.
- DISK
- HP disks - 40 TB RAID6 SAS (new in 2014)
- Silicon Mechanics NAS - 77 TB RAID6 SAS (new in 2014)
- A HP DL160G5 and an MSA60 with 12 TB SAS (disks new in 2014)
Naming convention
- The Hebrew alphabet is used for physical machines
- Greek letters for VMs.
- Functions (e.g. sgehead) are aliases (CNAMEs).
- compbio.ucsf.edu and ucsf.bkslab.org domains both supported.
Disk organization
- shin aka nas1 - mounted as /nfs/db - 72 TB SAS RAID6
- bet aka happy, internal - /nfs/store and psql (temp) - 10 TB SATA RAID10
- elated, on happy - /nfs/work only - 36 TB SAS RAID6
- het (43), aka the former vmware2, with an MSA60 - exports /nfs/home and /nfs/soft
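A quick way to sanity-check that a node sees the NFS exports listed above is a small shell loop over the mount points. This is a sketch using only the paths from this page, not an official health check:

```shell
#!/bin/sh
# Report which of the Cluster 2 NFS mount points are visible on this host.
# Mount points taken from the disk organization list above.
missing=0
for mnt in /nfs/db /nfs/store /nfs/work /nfs/home /nfs/soft; do
    if [ -d "$mnt" ]; then
        echo "present: $mnt"
    else
        echo "missing: $mnt"
        missing=$((missing + 1))
    fi
done
echo "total missing: $missing"
```

Run it on sgehead after logging in; anything reported missing is worth a note to the sysadmins before you start jobs that depend on it.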
Special purpose machines - all .ucsf.bkslab.org
- sgehead aka gimel.cluster - nearly the only machine you'll need.
- psi.cluster - PG fortran compiler (a machine with only a .cluster address has no public address)
- portal aka epsilon - secure access
- zeta.cluster - Pipeline Pilot
- shin, bet, and dalet are the three NFS servers. You should not need to log in to them.
- mysql1.cluster - general purpose mysql server (like former scratch)
- pg1.cluster - general purpose postgres server
- fprint.cluster - fingerprinting server
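Connecting to the two general-purpose database servers from a cluster node uses the standard clients. A command sketch - the hostnames are from this page, but the database name and credentials are placeholders you would get from the sysadmins:

```
# Run from a machine on the Cluster 2 private network.
mysql -h mysql1.cluster -u $USER -p           # MySQL; prompts for password
psql  -h pg1.cluster    -U $USER -d somedb    # Postgres; "somedb" is a placeholder
```
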