How to process results from a large-scale docking


Revision as of 19:01, 18 October 2017

Written by Jiankun Lyu, 20171018

This tutorial uses DOCK 3.7.1rc1.

== Check for completion and resubmit failed jobs ==

In a large calculation such as this one with tens of thousands of separate processes, it is not unusual for some jobs to fail for some reason. Checking for successful completion, and re-starting failed processes, is a normal part of running such a large job.

Run the script below in your docking directory:

csh /nfs/home/tbalius/zzz.github/DOCK/docking/submit/get_not_finished.csh

This script puts all the directories of failed jobs in a file called dirlist_new.
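For illustration, the same check can be sketched by hand. The snippet below is a minimal demo, assuming each line of dirlist names a job directory and that a finished job leaves an OUTDOCK file there; the directory names and the setup lines are hypothetical, not part of the real workflow:

```shell
# Hypothetical setup: two job directories, only job001 finished.
mkdir -p job001 job002
touch job001/OUTDOCK
printf 'job001\njob002\n' > dirlist

# Collect directories that lack an OUTDOCK file into dirlist_new.
: > dirlist_new
while read -r dir; do
    [ -f "$dir/OUTDOCK" ] || echo "$dir" >> dirlist_new
done < dirlist

cat dirlist_new   # prints job002, the only unfinished job
```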

mv dirlist dirlist_ori
mv dirlist_new dirlist

Then resubmit them:

$DOCKBASE/docking/submit/submit.csh

== Combine results ==

When docking is complete, merge the results of each separate docking job into a single sorted file. The last argument, energy_cutoff, is a placeholder for the numeric score threshold you want to apply; poses scoring above it are excluded.

cd your/docking/directory
python $DOCKBASE/analysis/extract_all_blazing_fast.py dirlist extract_all.txt energy_cutoff
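What the merge step does can be illustrated with standard tools. This is a hypothetical demo, not the script's actual implementation (which parses DOCK's own output files): each fake job directory holds a two-column "ligand energy" score file, and the loop merges them, applies an energy cutoff, and sorts them so the best (most negative) energies come first.

```shell
# Hypothetical setup: per-job score files of "ligand_id total_energy".
mkdir -p job001 job002
printf 'ZINC000001 -35.2\nZINC000002 -10.1\n' > job001/scores.txt
printf 'ZINC000003 -42.7\n' > job002/scores.txt
printf 'job001\njob002\n' > dirlist

# Merge all jobs, drop poses scoring above the cutoff, and sort
# ascending so the most negative energies come first.
cutoff=-20
while read -r dir; do cat "$dir/scores.txt"; done < dirlist \
  | awk -v c="$cutoff" '$2 <= c' \
  | sort -g -k2,2 > extract_all.txt

cat extract_all.txt   # ZINC000003 first, ZINC000002 filtered out
```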

== Cluster results ==