Cluster 3

Information about the cluster (specs, queues, flags, ...) can be found in the QB3 cluster wiki. It is password-protected and will be accessible once you have obtained an account.

Obtaining an account

The individual steps are also described in this document.

  • Obtain a general QB3 Kerberos account from one of the QB3@UCSF WLAN account facilitators.
  • Once you have such an account, mail the information (user name, name of your PI) to Joshua Baker-LePain (jlb at salilab dot org), the cluster administrator.

Setting up a run

Scripts from Peter Kolb are located in ~kolb/Scripts/Cluster. This section is obsolete (a rewrite was planned for summer 2014) and is left here in case it is still useful.

  • Copy files with transfer.data2chef.sh.
  • Unzip them with receive.datafromrage.sh.
  • It is possible to use the standard docking scripts (after copying them over).
  • Major difference: there is more than one queue, and the queue is selected based on the job's CPU-time requirements via SGE flags such as the following (a complete submission script sketch follows this list).
#$ -l mem_free=1G                  #-- submits on nodes with enough free memory
#$ -l arch=lx24-amd64              #-- SGE resources (CPU type)
#$ -l panqb3=1G,scratch=1G         #-- SGE resources (home and scratch disks)
#$ -l h_rt=24:00:00                #-- runtime limit (see above; this requests 24 hours)
  • Additionally, one can make use of the /scratch partition, which is available on most of the nodes.
  • ZINC is visible at /bks/raid6, so you don't have to copy database files over.
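
Putting the flags together, here is a minimal sketch of a complete SGE submission script. The file names (run_qb3.sh, run_docking.sh) and the staging layout are illustrative assumptions, not part of the original instructions.

#!/bin/bash
#$ -S /bin/bash                    #-- interpret the job script with bash
#$ -cwd                            #-- start in the submission directory
#$ -l mem_free=1G                  #-- nodes with at least 1 GB of free memory
#$ -l arch=lx24-amd64              #-- SGE resources (CPU type)
#$ -l panqb3=1G,scratch=1G         #-- SGE resources (home and scratch disks)
#$ -l h_rt=24:00:00                #-- runtime limit; determines the queue

# Stage input to node-local /scratch, run, and copy the results back.
WORKDIR=/scratch/$USER/$JOB_ID
mkdir -p "$WORKDIR"
cp -r "$SGE_O_WORKDIR"/input "$WORKDIR"/
cd "$WORKDIR"
"$SGE_O_WORKDIR"/run_docking.sh    # placeholder for the actual docking command
cp -r "$WORKDIR"/output "$SGE_O_WORKDIR"/
rm -rf "$WORKDIR"

Submit the script from a QB3 login node with qsub run_qb3.sh.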

Crossmounts and shared UID/GID space

The following disks are exported from Cluster 2 and are visible on the QB3 shared cluster:

Read-only:

  • /nfs/db as /bks/db

Read/write:

  • /nfs/work as /bks/work
  • /nfs/store as /bks/store
  • /nfs/home as /bks/home

Please note that /nfs/soft and /nfs/scratch are NOT available on the QB3 shared cluster.
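
As a concrete illustration of the mapping (the run directory name below is a hypothetical example), data written on Cluster 2 under /nfs/work is visible on a QB3 node under /bks/work:

ls /nfs/work/$USER/myrun           # on Cluster 2
ls /bks/work/$USER/myrun           # the same directory, read/write, on a QB3 node
ls /bks/db                         # the read-only databases exported from /nfs/db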

Installed software

Our software is installed under user jji on the QB3 cluster. You can copy .cshrc (.bashrc) from jji to get started.
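
For example, assuming the dotfiles in ~jji are readable (an assumption, not stated above), you could start from a copy of them:

cp ~jji/.cshrc ~/.cshrc            # csh/tcsh users
source ~/.cshrc
cp ~jji/.bashrc ~/.bashrc          # bash users
source ~/.bashrc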

Queuing jobs

The QB3 cluster uses the same queuing software as Cluster 2 and Cluster 0, but the queuing systems themselves are completely separate: the disks are shared, while jobs are managed independently on each cluster. This is for clarity, and we think it is both easy to use and logical. If you believe it would work better another way, let us know!
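
In practice (the script name run_qb3.sh is the illustrative one from above), this means you submit and monitor jobs on the cluster where you want them to run:

qsub run_qb3.sh                    # submitted on a QB3 login node: runs on QB3
qstat -u $USER                     # lists only QB3 jobs
# The same commands issued on Cluster 2 see only Cluster 2 jobs,
# even though /nfs/work there and /bks/work here are the same disk.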