Cluster 0
This page is about our legacy cluster at Mission Bay. We also have Cluster 0 at the University of Toronto and a new cluster, Cluster 2, at UCSF.
Getting started on the cluster
- 1. Request an account from Therese or John.
- 2. Your home directory is /raid1/people/<your_id>/. This area is backed up and is for important, persistent files.
- 3. Run docking jobs and other intensive calculations in ~/work/, which Therese will set up for you; it is generally not part of your home directory.
- 4. Keep static data (e.g. crystallography data, results of published papers) in ~/store/, which is also generally not part of your home directory.
- 5. Lab guests get 100 GB in each of these areas, and lab members get 500 GB. Ask if you need more.
- 6. If you go over your quota, you will get warning emails for two weeks; after that, we impose a hard limit if you have not resolved the overage. A quick way to check your usage is sketched after this list.
- 7. You can choose bash or tcsh to be your default shell. We don't care. Everything should work equally well with both.
- 8. Databases are a special kind of static data for which you may request space. They go in /nfs/db/<db_name>/, e.g. /nfs/db/zinc/, /nfs/db/dude/, /nfs/db/pdb, and so on.
- 9. Please run large docking jobs on /nfs/work, not on /nfs/store or /nfs/home. When you publish a paper, delete what you can, compress the rest, and move it to /store/. Do not leave data on /work/ that you are no longer actively using; an example clean-up sequence is sketched after this list.
- 10. Set up your account so that you can log in across the cluster without a password. For instructions on generating ssh keys securely, see http://wiki.uoft.bkslab.org/index.php/How_to_generate_ssh_keys_securely (a generic sketch also appears after this list).
- 11. Software lives in /nfs/software/. All our machines run 64-bit CentOS 6.3 unless otherwise indicated.
- 12. Python 2.7 and 3.0 are installed. We currently recommend 2.7 because of library availability, but that may change soon (as of Aug 2012). A quick version check is sketched after this list.
- 13. If you use tcsh, copy .login and .cshrc from ~jji/; if you use bash, copy .bash_profile from ~jji/.
- 1. cp /nfs/software/labenv/defaults.cshrc ~/.cshrc
Note: if you are still in San Francisco, the path is /raid3/software/labenv/defaults.cshrc. If you use bash or another shell, please see the Sysadmin.
- 2. Customize this file if you like.
- 3. Check out your own copies of dockenv, dock, and sea if you like. By default, you use the standard lab software.
- 4. Log out and log back in, or run source ~/.cshrc. (These environment steps are consolidated in a sketch after this list.)
- 5. You are now ready to use all the lab software, including docking.
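A minimal sketch for steps 5-6 of checking how much space you are using. The exact quota tooling on this cluster is not documented here, so plain du over the areas named in steps 2-4 is used as a stand-in:

    # Report total usage of each quota'd area.
    # Trailing slashes make du follow ~/work and ~/store if they are symlinks.
    du -sh ~/ ~/work/ ~/store/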
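A sketch of the post-publication clean-up from step 9, using a hypothetical results directory myproject/ under ~/work/ (delete what you can first, then compress and move the rest):

    cd ~/work
    tar -czf myproject.tar.gz myproject/       # compress what you are keeping
    tar -tzf myproject.tar.gz > /dev/null      # sanity-check that the archive reads back
    mv myproject.tar.gz ~/store/
    rm -rf myproject/                          # only after verifying the archive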
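For step 10, the linked wiki page is the authoritative guide; the following is only a generic sketch of passwordless SSH, assuming your home directory (and therefore ~/.ssh/) is shared across the cluster nodes:

    ssh-keygen -t rsa -b 4096                          # pick a passphrase when prompted
    cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys    # authorize your own key
    chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys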
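To see which interpreter you actually get for step 12, check the path and version; whether the default resolves to the lab-installed 2.7 depends on your shell setup:

    which python         # where the default python comes from
    python --version     # should report 2.7.x once your environment is set up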
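The tcsh environment setup (step 13 and sub-steps 1-4 above) consolidated into one session; whether you start from ~jji/'s dotfiles or from defaults.cshrc, confirm with the sysadmin which is current:

    # tcsh users
    cp /nfs/software/labenv/defaults.cshrc ~/.cshrc
    source ~/.cshrc                  # or log out and back in

    # bash users: copy ~jji/.bash_profile and see the sysadmin for equivalent defaults
    cp ~jji/.bash_profile ~/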
Physical machine summary
- 256 Intel Xeon E5430 2.66 GHz cores (8core)
- 118 Intel Xeon 3.0 GHz cores (dl140g1)
- 106 Intel Xeon 2.8 GHz cores (dl140g2)
- 38 Intel Xeon 2.4 GHz cores (microway)
- 32 AMD Opteron 275 2.2 GHz cores (dl145g2)
- 24 AMD 6164HE 1.7 GHz cores (dl165g7)
- Four node racks and two server racks in BH-101
- Two node racks and one server rack in N108
- 36 TB of RAID10 storage available to the cluster, hosted across 8 dedicated servers
- Each node has at least 1 GB of memory per CPU core
- 22 user workstations
- 30 support/infrastructure servers (e.g. databases)
- 6 Windows laptops
- 2 VMware servers hosting virtual desktops and servers that do not require dedicated hardware
OK, OK, the laptops and the workstations aren't technically part of the cluster.