How to process results from a large-scale docking


Revision as of 15:25, 18 October 2017

Written by Jiankun Lyu, 20171018

This tutorial uses DOCK 3.7.1rc1.

Check for completion and resubmit failed jobs

In a large calculation such as this one, with tens of thousands of separate processes, it is not unusual for some jobs to fail. Checking for successful completion and restarting failed processes is a normal part of running such a large job.

Only do this if dirlist is the original list and dirlist_ori does not already exist:

cp dirlist dirlist_ori

Run the script below in your docking directory

csh /nfs/home/tbalius/zzz.github/DOCK/docking/submit/get_not_finished.csh /path/to/docking

For example:

csh /nfs/home/tbalius/zzz.github/DOCK/docking/submit/get_not_finished.csh ./

This script writes the directory of every failed job to a file called dirlist_new. Back up the old list and make the failed-job list the new dirlist:

mv dirlist dirlist_old
mv dirlist_new dirlist_new1
cp dirlist_new1 dirlist
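As a sketch of the kind of check such a script performs, the snippet below scans a dirlist and collects every directory whose OUTDOCK is missing or lacks a completion marker. The "elapsed time" marker and the toy directory layout are assumptions for illustration; the real get_not_finished.csh may use a different criterion.

```shell
#!/bin/sh
# Sketch of a failed-job scan. Assumption: a finished job's OUTDOCK
# ends with an "elapsed time" line; the real script may differ.
workdir=$(mktemp -d)
cd "$workdir"

# Fake job directories: one finished, one crashed before writing OUTDOCK.
mkdir done_job dead_job
printf 'scores...\nelapsed time (sec): 42.0\n' > done_job/OUTDOCK

printf 'done_job\ndead_job\n' > dirlist

# Collect every directory whose OUTDOCK is missing or lacks the marker.
: > dirlist_new
while read -r dir; do
    if ! grep -q 'elapsed time' "$dir/OUTDOCK" 2>/dev/null; then
        echo "$dir" >> dirlist_new
    fi
done < dirlist

cat dirlist_new
```

Only dead_job ends up in dirlist_new; done_job passes the check.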

Remove OUTDOCK, test.mol2.gz, and stderr from all of the directories that are incomplete.

foreach dir (`cat dirlist`)
  echo $dir
  rm -rf $dir/OUTDOCK $dir/test.mol2.gz $dir/stderr
end

This prevents stale output from interfering with the rerun and avoids confusion over whether a job was actually rerun.

Then resubmit the failed jobs with the same submission script used for the original run.


Combine results

When docking is complete, merge the results of each separate docking job into a single sorted file.

cd /path/to/docking
python $DOCKBASE/analysis/extract_all_blazing_fast.py dirlist extract_all.txt energy_cutoff
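extract_all_blazing_fast.py does the heavy lifting; conceptually, it gathers the per-directory results, applies the energy cutoff, and sorts everything into one ranked file. The stand-in below illustrates that merge-and-sort step with an invented two-column score format and a hypothetical cutoff; it is not the real OUTDOCK format or the script's actual logic.

```shell
#!/bin/sh
# Toy illustration of the combine step: concatenate per-directory score
# lines, keep those below an energy cutoff, and sort ascending so the
# best (most negative) score comes first. File format is invented.
workdir=$(mktemp -d)
cd "$workdir"
mkdir job1 job2
printf 'ZINC000001 -45.2\nZINC000002 -12.8\n' > job1/scores.txt
printf 'ZINC000003 -51.0\nZINC000004 -3.1\n'  > job2/scores.txt

cutoff=-10   # hypothetical energy cutoff: keep only scores below this
cat job1/scores.txt job2/scores.txt \
  | awk -v c="$cutoff" '$2+0 < c+0' \
  | sort -k2,2n > extract_all.txt

cat extract_all.txt
```

ZINC000004 (-3.1) falls above the cutoff and is dropped; the remaining three molecules are written best-first to extract_all.txt.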

Cluster results