Cluster 0

This page is about our legacy cluster at [[Mission Bay]]. We also have [[Cluster 1]] at [[University of Toronto]] and a new cluster, [[Cluster 2]], at [[UCSF]].


= Priorities and Policies =
 
Cluster 0 is a legacy cluster that will disappear. It predates many of the policies below. To the extent that it is feasible, we intend to apply these policies retroactively to Cluster 0. However, the main goal is to get people off Cluster 0 as soon as possible and onto Cluster 2.
* 1. Request an account from Therese or John.
* 2. Your home directory is /raid1/people/<your_id>/. This area is backed up and is for important persistent files.
* 3. Run docking jobs and other intensive calculations in ~/work/, which Therese will set up for you and which is generally not part of your home directory.
* 4. Keep static data (e.g. crystallography data, results of published papers) in ~/store/, which is also generally not part of your home directory.
* 5. Lab guests get 100 GB in each of these areas, and lab members get 500 GB. Ask if you need more.
* 6. If you go over your limit, you will get warning emails for two weeks; after that we impose a hard limit if you have not resolved your overage.
* 7. You can choose bash or tcsh as your default shell. We don't care; everything should work equally well with both.
* 8. There is a special kind of static data, databases, for which you may request space. These go in /nfs/db/<db_name>/, e.g. /nfs/db/zinc/, /nfs/db/dude/, /nfs/db/pdb, and so on.
* 9. Please run large docking jobs on /nfs/work, not on /nfs/store or /nfs/home. When you publish a paper, please delete what you can, compress the rest, and move it to /store/. Do not leave it on /work/ if you are no longer using it actively.
* 10. Set up your account so that you can log in across the cluster without a password. For instructions on how to generate ssh keys securely, see http://wiki.uoft.bkslab.org/index.php/How_to_generate_ssh_keys_securely; there is also a short sketch after this list.
* 11. Software lives in /nfs/software/. All our machines run 64-bit CentOS 6.3 unless otherwise indicated.
* 12. Python 2.7 and 3.0 are installed. We currently recommend 2.7 because of library availability, but that may change soon. (Aug 2012)
* 13. If you use tcsh, copy .login and .cshrc from ~jji/; if you use bash, copy .bash_profile from ~jji/.
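
The following is a minimal sketch of the passwordless-login setup from item 10, using standard OpenSSH commands. The ssh-key wiki page linked above is the authoritative reference; the host name "somehost" below is only a placeholder, and the shared-home assumption is inferred from the /raid1/people layout.

 # Sketch only: standard OpenSSH commands for passwordless intra-cluster login.
 # Follow the ssh-key wiki page linked in item 10 for the lab's exact guidance.
 
 # Generate a keypair (prefer a passphrase plus ssh-agent over an empty passphrase).
 ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa
 
 # If home directories are shared across nodes (as /raid1/people suggests),
 # authorizing your own key once should cover every node that mounts your home:
 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
 chmod 700 ~/.ssh
 chmod 600 ~/.ssh/authorized_keys
 
 # For any machine that does not share your home directory, copy the key explicitly
 # (replace "somehost" with the actual machine name):
 ssh-copy-id somehost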
 
 
= Getting started on the cluster =

* 1. cp /nfs/software/labenv/defaults.cshrc .cshrc
: Note: if you are still in San Francisco, the path is /raid3/software/labenv/defaults.cshrc
: If you use bash or another shell, please see the Sysadmin.
* 2. Customize this file if you like.
* 3. Check out your own copy of dockenv, dock, or sea if you like. By default you use the standard lab software.
* 4. Log out and log back in, or source ~/.cshrc.
* 5. You are now ready to use all the lab software, including docking. The tcsh steps are collected in a short sketch below.
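
For tcsh users, the steps above amount to roughly the following. This is a sketch, not an official script: it assumes you run it from your home directory and want the standard lab environment. Bash users should see the Sysadmin as noted above.

 # Sketch only: tcsh environment setup, run from your home directory.
 cd ~
 cp /nfs/software/labenv/defaults.cshrc .cshrc   # use /raid3/software/labenv/defaults.cshrc if you are still in San Francisco
 # Optionally edit .cshrc to customize it, then reload it (or log out and back in):
 source ~/.cshrc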


* [[Lab Security Policy]]
* [[Disk space policy]]
* [[Backups]] policy.
* [[Portal system]] for off-site ssh cluster access.
* Get a [[Cluster 0 account]] and get started.


= Physical machine summary =
256 Intel Xeon E5430 2.66GHz cores (8core)
118 Intel Xeon 3.0GHz cores (dl140g1)
106 Intel Xeon 2.8GHz cores (dl140g2)
38 Intel Xeon 2.4GHz cores (microway)
32 AMD Opteron 275 2.2GHz cores (dl145g2)
24 AMD 6164HE 1.7GHz cores (dl165g7)


* Four node racks and two server racks in BH-101
* Two node racks and one server rack in N108
* 36TB of RAID10 storage available to the cluster, hosted among 8 dedicated servers
* Each node has 1GB of memory per CPU core or better
* 22 user workstations
* 30 support/infrastructure servers (e.g. databases)
* 6 Windows laptops
* 2 VMware servers hosting virtual desktops and servers that do not require dedicated hardware


= About our cluster =

(Photos: Server rack 1, Server rack 2)