GPUs


We have 7 GPUs on the cluster (as of June 2016). There is a separate queue, gpu.q, that manages GPU jobs.
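
Jobs are sent to that queue with the usual qsub mechanics; for example (the job script name here is hypothetical):

qsub -q gpu.q my_amber_job.csh

The -q flag can be omitted if the script itself contains a #$ -q gpu.q directive, as in the excerpt below.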

Here is a sample script to run Amber:

/nfs/work/tbalius/MOR/run_amber/run.pmemd_cuda_wraper.csh

Here is an excerpt from the script:

##########
# write the SGE job script; \$ is escaped so that variables are
# expanded when the job runs, not when the script is generated
cat << EOF > qsub.amber.csh
#\$ -S /bin/csh
#\$ -cwd
#\$ -q gpu.q
#\$ -o stdout
#\$ -e stderr

# do not pin GPUs by hand; on-one-gpu assigns a free device at run time
# export CUDA_VISIBLE_DEVICES="0,1,2,3"
# setenv CUDA_VISIBLE_DEVICES "0,1,2,3"
setenv AMBERHOME /nfs/soft/amber/amber14/
set amberexe = "/nfs/ge/bin/on-one-gpu - \$AMBERHOME/bin/pmemd.cuda"
##########

Note that we run the executable through on-one-gpu, which manages which GPUs are used.
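
The excerpt ends before the actual MD step. As a minimal sketch, the generated script might later invoke the wrapped executable like this (the input, topology, and coordinate file names are hypothetical; the flags are standard pmemd options):

##########
\$amberexe -O -i md.in -o md.out -p prm.prmtop -c start.rst7 -r md.rst7 -x md.nc
EOF

qsub qsub.amber.csh
##########

In the generated script, \$amberexe expands to on-one-gpu wrapping pmemd.cuda, so the run is automatically placed on a free GPU.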

If your job generates significant output, which is generally but not always the case, it is important to write locally to scratch and then copy the results over the network to the NFS disk afterward. Writing large amounts of data directly to the NFS disk can cause problems for other users.
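
A minimal sketch of that pattern inside the generated job script, assuming the compute nodes have a node-local /scratch file system (the path and file names are hypothetical; \$JOB_ID and \$SGE_O_WORKDIR are set by SGE):

##########
# stage inputs to node-local scratch
set scratchdir = /scratch/\$USER/\$JOB_ID
mkdir -p \$scratchdir
cp md.in prm.prmtop start.rst7 \$scratchdir/
cd \$scratchdir

# run the MD step here (as in the sketch above), writing output locally

# copy results back to the NFS working directory, then clean up
cp md.out md.rst7 md.nc \$SGE_O_WORKDIR/
rm -rf \$scratchdir
##########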