How to process results from a large-scale docking

Revision as of 18:49, 18 October 2017

Written by Jiankun Lyu, 20171018

This tutorial uses DOCK 3.7.1rc1.

== Check for completion and resubmit failed jobs ==

In a large calculation such as this one, with tens of thousands of separate processes, it is not unusual for some jobs to fail. Checking for successful completion, and restarting the failed processes, is a normal part of running a job of this size.

Run the script below in your docking directory:

 csh /nfs/home/tbalius/zzz.github/DOCK/docking/submit/get_not_finished.csh

This script writes the directory of every failed job to a file called dirlist_new. Before resubmitting those jobs, back up the original dirlist and replace it with the new list:

 mv dirlist dirlist_ori
 mv dirlist_new dirlist
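For readers without access to that helper script, the check-and-relist step above can be sketched in plain shell. The completion test used here (a non-empty OUTDOCK file in each job directory) is an assumption for illustration; the real script may apply a different criterion, so adapt the test to your setup.

```shell
# Toy setup: two job directories listed in dirlist; only jobA "finished"
# (it wrote an OUTDOCK file). Replace this with your real docking directory.
mkdir -p jobA jobB
echo "elapsed time" > jobA/OUTDOCK
printf 'jobA\njobB\n' > dirlist

# The check itself: any listed directory lacking a non-empty OUTDOCK file
# is treated as unfinished and appended to dirlist_new.
rm -f dirlist_new
while read d; do
    [ -s "$d/OUTDOCK" ] || echo "$d" >> dirlist_new
done < dirlist

cat dirlist_new    # lists jobB only
```

With a check like this, dirlist_new can then be swapped in for dirlist exactly as shown above.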

Then resubmit them:

 $DOCKBASE/docking/submit/submit.csh

== Combine Results ==

Once all jobs have finished, combine the scores from every directory in dirlist into a single file:

 python $DOCKBASE/analysis/extract_all_blazing_fast.py dirlist extract_all.txt 1000
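A common next step is to rank the combined scores. The snippet below assumes, purely for illustration, that each line of extract_all.txt ends with the total energy; the real file written by extract_all_blazing_fast.py may carry more columns, so check your output before sorting.

```shell
# Toy stand-in for extract_all.txt (directory, molecule id, total energy);
# the real file may have additional columns.
printf '%s\n' \
  'job1 ZINC000001 -42.5' \
  'job2 ZINC000002 -55.1' \
  'job1 ZINC000003 -38.0' > extract_all.txt

# General-numeric sort on the energy column, ascending, so the best
# (most negative) scores appear at the top.
sort -g -k3,3 extract_all.txt > extract_all.sorted.txt

head -1 extract_all.sorted.txt    # job2 ZINC000002 -55.1
```

Replacing `head -1` with `head -n 500`, say, gives a quick top-500 list for visual inspection.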