<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>http://wiki.docking.org/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Ianscottknight</id>
	<title>DISI - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="http://wiki.docking.org/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Ianscottknight"/>
	<link rel="alternate" type="text/html" href="http://wiki.docking.org/index.php?title=Special:Contributions/Ianscottknight"/>
	<updated>2026-04-08T14:29:40Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.39.1</generator>
	<entry>
		<id>http://wiki.docking.org/index.php?title=Dockopt_(pydock3_script)&amp;diff=15825</id>
		<title>Dockopt (pydock3 script)</title>
		<link rel="alternate" type="text/html" href="http://wiki.docking.org/index.php?title=Dockopt_(pydock3_script)&amp;diff=15825"/>
		<updated>2024-05-06T06:53:20Z</updated>

		<summary type="html">&lt;p&gt;Ianscottknight: /* Subcommand: run */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;dockopt&#039;&#039; allows the generation of many different docking configurations which are then evaluated &amp;amp; analyzed in parallel using a specified job scheduler (e.g. Slurm). If you are a Shoichet Lab user, please see a special section for you, below.&lt;br /&gt;
&lt;br /&gt;
To use DOCK 3.8, you must first license it and install it.&lt;br /&gt;
[[DOCK 3.8:How to install pydock3]]&lt;br /&gt;
&lt;br /&gt;
== Note for UCSF Shoichet Lab members ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;pydock3&#039;&#039; is already installed on the following clusters. You can source the provided Python environment scripts to expose the &#039;&#039;pydock3&#039;&#039; executable and declare default environmental variables:&lt;br /&gt;
&lt;br /&gt;
=== Wynton ===&lt;br /&gt;
&lt;br /&gt;
  source /wynton/group/bks/soft/python_envs/env.sh&lt;br /&gt;
&lt;br /&gt;
=== Gimel ===&lt;br /&gt;
&lt;br /&gt;
Only nodes other than &#039;&#039;gimel&#039;&#039; itself are supported, e.g., &#039;&#039;gimel5&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
 ssh gimel5&lt;br /&gt;
 source /nfs/soft/ian/env.sh&lt;br /&gt;
&lt;br /&gt;
== Subcommand: &#039;&#039;new&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
Prepare rec.pdb, xtal-lig.pdb as described in Bender, 2021. https://pubmed.ncbi.nlm.nih.gov/34561691/&lt;br /&gt;
Alternatively, download pre-prepared sample files from dudez2022.docking.org.&lt;br /&gt;
&lt;br /&gt;
Be sure that you are in the directory containing the required input files: &lt;br /&gt;
* &#039;&#039;rec.pdb&#039;&#039; or &#039;&#039;rec.crg.pdb&#039;&#039;&lt;br /&gt;
* &#039;&#039;xtal-lig.pdb&#039;&#039;&lt;br /&gt;
* &#039;&#039;actives.tgz&#039;&#039;&lt;br /&gt;
* &#039;&#039;decoys.tgz&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Note the inclusion of &#039;&#039;actives.tgz&#039;&#039; and &#039;&#039;decoys.tgz&#039;&#039;. Each of these is a tarball of DB2 files, which may optionally be gzipped (with extension .db2.gz). Each tarball represents a binary class for the binary classification task that the docking model is trained on. The positive class (actives) contains the molecules that you want the docking model to preferentially assign &#039;&#039;favorable&#039;&#039; docking scores to. The negative class (decoys) contains the molecules that you want the docking model to preferentially assign &#039;&#039;unfavorable&#039;&#039; scores to. The most common strategy is to use a set of known ligands as the actives and a larger set of property-matched decoys as the decoys (a decoy-to-active ratio of 50:1 is standard), but other strategies are supported. For example, to create a docking model that preferentially assigns favorable scores to agonists over antagonists, a set of agonists can be used as the actives and a set of antagonists as the decoys.&lt;br /&gt;
&lt;br /&gt;
DB2 files are not provided automatically, so you need to build the molecules yourself (see: https://tldr.docking.org/start/build3d38). Each tarball should contain only DB2 files. For example, if one has a directory &#039;&#039;actives/&#039;&#039; containing only DB2 files to use as the actives, then &#039;&#039;actives.tgz&#039;&#039; should be created as follows:&lt;br /&gt;
&lt;br /&gt;
 cd actives/&lt;br /&gt;
 tar -czf actives.tgz *.db2*&lt;br /&gt;
&lt;br /&gt;
Similarly, for a directory &#039;&#039;decoys/&#039;&#039; containing only DB2 files to use as decoys, &#039;&#039;decoys.tgz&#039;&#039; should be created as follows:&lt;br /&gt;
&lt;br /&gt;
 cd decoys/&lt;br /&gt;
 tar -czf decoys.tgz *.db2*&lt;br /&gt;
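&lt;br /&gt;
Before running the job, it can be worth confirming that each tarball contains only DB2 files. A minimal shell sketch (the file names below are stand-ins for real DB2 files):&lt;br /&gt;
&lt;br /&gt;

```shell
# Sketch: build actives.tgz from a directory of DB2 files, then list its
# contents to confirm that every entry is a .db2 or .db2.gz file.
mkdir -p actives
touch actives/mol1.db2 actives/mol2.db2.gz   # stand-ins for real DB2 files
cd actives/
tar -czf actives.tgz *.db2*
tar -tzf actives.tgz   # every listed entry should end in .db2 or .db2.gz
cd ..
```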
&lt;br /&gt;
To create the file structure for your dockopt job, simply type&lt;br /&gt;
&lt;br /&gt;
 pydock3 dockopt - new&lt;br /&gt;
&lt;br /&gt;
By default, the job directory is named &#039;&#039;dockopt_job&#039;&#039;. To specify a different name, use the &amp;quot;--job_dir_name&amp;quot; flag. E.g.:&lt;br /&gt;
&lt;br /&gt;
 pydock3 dockopt - new --job_dir_name=dockopt_job_2&lt;br /&gt;
&lt;br /&gt;
The job directory contains two sub-directories: &lt;br /&gt;
# &#039;&#039;working&#039;&#039;: input files, intermediate blaster files, sub-directories for individual blastermaster subroutines&lt;br /&gt;
# &#039;&#039;retrodock_jobs&#039;&#039;: individual retrodock jobs for each docking configuration&lt;br /&gt;
&lt;br /&gt;
The key difference between the working directories of &#039;&#039;blastermaster&#039;&#039; and &#039;&#039;dockopt&#039;&#039; is that the working directory of &#039;&#039;dockopt&#039;&#039; may contain multiple variants of the blaster files (suffixed by a number, e.g. &amp;quot;box_1&amp;quot;). These variant files are used to create the different docking configurations specified by the multi-valued entries of &#039;&#039;dockopt_config.yaml&#039;&#039;. They are created efficiently, such that the same variant used in multiple docking configurations is not created more than once. &lt;br /&gt;
&lt;br /&gt;
If your current working directory contains any of the following files, then they will be automatically copied into the working directory within the created job directory. This feature is intended to simplify the process of configuring the dockopt job.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;rec.pdb&#039;&#039;&lt;br /&gt;
* &#039;&#039;rec.crg.pdb&#039;&#039;&lt;br /&gt;
* &#039;&#039;xtal-lig.pdb&#039;&#039;&lt;br /&gt;
* &#039;&#039;reduce_wwPDB_het_dict.txt&#039;&#039;&lt;br /&gt;
* &#039;&#039;filt.params&#039;&#039;&lt;br /&gt;
* &#039;&#039;radii&#039;&#039;&lt;br /&gt;
* &#039;&#039;amb.crg.oxt&#039;&#039;&lt;br /&gt;
* &#039;&#039;vdw.siz&#039;&#039;&lt;br /&gt;
* &#039;&#039;delphi.def&#039;&#039;&lt;br /&gt;
* &#039;&#039;vdw.parms.amb.mindock&#039;&#039;&lt;br /&gt;
* &#039;&#039;prot.table.ambcrg.ambH&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Only the following are required; if the others are not detected, default or generated versions will be used instead.&lt;br /&gt;
* &#039;&#039;rec.pdb&#039;&#039; or &#039;&#039;rec.crg.pdb&#039;&#039;.  Either is required, but not both.  If both are present, only &#039;&#039;rec.crg.pdb&#039;&#039; is used.&lt;br /&gt;
* &#039;&#039;xtal-lig.pdb&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
If you would like to use files not present in your current working directory, copy them into your job&#039;s working directory, e.g.:&lt;br /&gt;
 cp &amp;lt;FILE_PATH&amp;gt; &amp;lt;JOB_DIR_NAME&amp;gt;/working/&lt;br /&gt;
&lt;br /&gt;
Finally, configure the &#039;&#039;dockopt_config.yaml&#039;&#039; file in the job directory to your specifications. The parameters in this file govern the behavior of dockopt.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039; The &#039;&#039;dockopt_config.yaml&#039;&#039; file differs from the &#039;&#039;blastermaster_config.yaml&#039;&#039; file in that every parameter of the former may accept either a single value or a &#039;&#039;list of comma-separated values&#039;&#039;, which indicates a pool of values to attempt for that parameter. Multiple such multi-valued parameters may be provided, and all unique resultant docking configurations will be attempted. &lt;br /&gt;
&lt;br /&gt;
Single-valued YAML line format:&lt;br /&gt;
&lt;br /&gt;
 distance_to_surface: 1.0&lt;br /&gt;
&lt;br /&gt;
Multi-valued YAML line format:&lt;br /&gt;
&lt;br /&gt;
 distance_to_surface: [1.0, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9]&lt;br /&gt;
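&lt;br /&gt;
Each multi-valued parameter multiplies the number of docking configurations, since every unique combination of values is attempted. A Python sketch of this expansion (&#039;&#039;grid_spacing&#039;&#039; is a hypothetical parameter used here only for illustration):&lt;br /&gt;
&lt;br /&gt;

```python
from itertools import product

# Sketch of how multi-valued config entries expand into docking configurations:
# every unique combination of parameter values yields one configuration.
# "grid_spacing" is a hypothetical parameter, for illustration only.
config = {
    "distance_to_surface": [1.0, 1.1, 1.2],
    "grid_spacing": [0.3, 0.4],
}
configurations = [
    dict(zip(config.keys(), values)) for values in product(*config.values())
]
print(len(configurations))  # 3 values x 2 values = 6 configurations
```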
&lt;br /&gt;
== Environmental variables ==&lt;br /&gt;
&lt;br /&gt;
=== TMPDIR ===&lt;br /&gt;
&lt;br /&gt;
Designate where temporary job files should be placed. E.g.:&lt;br /&gt;
&lt;br /&gt;
 export TMPDIR=/scratch&lt;br /&gt;
&lt;br /&gt;
==== Note for UCSF researchers ====&lt;br /&gt;
&lt;br /&gt;
On the Wynton cluster, &#039;&#039;/scratch&#039;&#039; only exists on development nodes (not log nodes). Therefore, we recommend running on development nodes (see: https://wynton.ucsf.edu/hpc/get-started/development-prototyping.html). E.g.:&lt;br /&gt;
&lt;br /&gt;
 ssh dev1&lt;br /&gt;
 export TMPDIR=/scratch&lt;br /&gt;
&lt;br /&gt;
If a log node must be used, then &#039;&#039;/wynton/scratch&#039;&#039; may be used:&lt;br /&gt;
&lt;br /&gt;
 ssh log1&lt;br /&gt;
 export TMPDIR=/wynton/scratch&lt;br /&gt;
&lt;br /&gt;
=== job scheduler environmental variables ===&lt;br /&gt;
&lt;br /&gt;
In order for &#039;&#039;dockopt&#039;&#039; to know which scheduler it should use, configure the following environmental variables according to the job scheduler available on your system.&lt;br /&gt;
&lt;br /&gt;
==== Slurm ====&lt;br /&gt;
&lt;br /&gt;
E.g., on the UCSF Shoichet Lab Gimel cluster (on any node other than &#039;gimel&#039; itself, such as &#039;gimel5&#039;):&lt;br /&gt;
&lt;br /&gt;
 export SBATCH_EXEC=/usr/bin/sbatch&lt;br /&gt;
 export SQUEUE_EXEC=/usr/bin/squeue&lt;br /&gt;
&lt;br /&gt;
==== SGE ====&lt;br /&gt;
&lt;br /&gt;
On most clusters using SGE the following should be correct:&lt;br /&gt;
&lt;br /&gt;
 export QSTAT_EXEC=/opt/sge/bin/lx-amd64/qstat&lt;br /&gt;
 export QSUB_EXEC=/opt/sge/bin/lx-amd64/qsub&lt;br /&gt;
 export SGE_SETTINGS=/opt/sge/default/common/settings.sh&lt;br /&gt;
&lt;br /&gt;
===== Note for UCSF researchers =====&lt;br /&gt;
&lt;br /&gt;
The following is necessary on the UCSF Wynton cluster:&lt;br /&gt;
&lt;br /&gt;
 export QSTAT_EXEC=/opt/sge/bin/lx-amd64/qstat&lt;br /&gt;
 export QSUB_EXEC=/opt/sge/bin/lx-amd64/qsub&lt;br /&gt;
 export SGE_SETTINGS=/opt/sge/wynton/common/settings.sh&lt;br /&gt;
&lt;br /&gt;
== Subcommand: &#039;&#039;run&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
Once your job has been configured to your liking, navigate to the job directory and run &#039;&#039;dockopt&#039;&#039;:&lt;br /&gt;
 cd &amp;lt;JOB_DIR_NAME&amp;gt;&lt;br /&gt;
 pydock3 dockopt - run &amp;lt;JOB_SCHEDULER_NAME&amp;gt; [--retrodock_job_timeout_minutes=None] [--retrodock_job_max_reattempts=0] [--extra_submission_cmd_params_str=None] [--export_decoys_mol2=False]&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;JOB_SCHEDULER_NAME&amp;gt; is one of:&lt;br /&gt;
* &#039;&#039;sge&#039;&#039;&lt;br /&gt;
* &#039;&#039;slurm&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This will execute the dockopt subroutines in sequence. Once this is done, the retrodock jobs for all created docking configurations are run in parallel via the scheduler. The state of the program is printed to standard output as it runs.&lt;br /&gt;
&lt;br /&gt;
Once the dockopt job is complete, the following files will be generated in the job directory:&lt;br /&gt;
* &#039;&#039;report.html&#039;&#039;: contains (1) a histogram of the performance of all tested docking configurations compared against a distribution of the performance of a random classifier, showing whether the tested docking configurations perform significantly better than random. Because many configurations are tested, a Bonferroni correction is applied to the significance threshold, dividing p=0.01 by the number of tested configurations. (2) ROC, charge, and energy plots of the top docking configurations, comparing actives vs. decoys, (3) box plots of enrichment for every multi-valued config parameter, and (4) heatmaps of enrichment for every pair of multi-valued config parameters.&lt;br /&gt;
* &#039;&#039;results.csv&#039;&#039;: parameter values, criterion values, and other information about each docking configuration.&lt;br /&gt;
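&lt;br /&gt;
The Bonferroni correction used in &#039;&#039;report.html&#039;&#039; is simple arithmetic; a sketch (the configuration count below is a made-up example):&lt;br /&gt;
&lt;br /&gt;

```python
# Bonferroni correction as described for report.html: the base significance
# threshold p = 0.01 is divided by the number of tested configurations.
# num_tested_configurations is a made-up example value.
alpha = 0.01
num_tested_configurations = 60
corrected_threshold = alpha / num_tested_configurations
print(corrected_threshold)  # ~0.000167
```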
&lt;br /&gt;
In addition, some number of the best retrodock jobs will be copied to their own sub-directory &#039;&#039;best_retrodock_jobs/&#039;&#039;. &lt;br /&gt;
&lt;br /&gt;
Within each sub-directory of &#039;&#039;best_retrodock_jobs/&#039;&#039;, there are:&lt;br /&gt;
* &#039;&#039;dockfiles/&#039;&#039;: parameter files and &#039;&#039;INDOCK&#039;&#039; for the given docking configuration&lt;br /&gt;
* &#039;&#039;output/&#039;&#039;: contains: &lt;br /&gt;
** sub-directories &#039;&#039;actives/&#039;&#039; (containing &#039;&#039;OUTDOCK&#039;&#039; and &#039;&#039;test.mol2&#039;&#039; files) and &#039;&#039;decoys/&#039;&#039; (containing just &#039;&#039;OUTDOCK&#039;&#039;)&lt;br /&gt;
** plot images (e.g., &#039;&#039;roc.png&#039;&#039;)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039; by default, a mol2 file is exported only for actives, not for decoys, in order to prevent disk space issues.&lt;/div&gt;</summary>
		<author><name>Ianscottknight</name></author>
	</entry>
	<entry>
		<id>http://wiki.docking.org/index.php?title=Dockopt_(pydock3_script)&amp;diff=15824</id>
		<title>Dockopt (pydock3 script)</title>
		<link rel="alternate" type="text/html" href="http://wiki.docking.org/index.php?title=Dockopt_(pydock3_script)&amp;diff=15824"/>
		<updated>2024-05-05T23:20:31Z</updated>

		<summary type="html">&lt;p&gt;Ianscottknight: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;dockopt&#039;&#039; allows the generation of many different docking configurations which are then evaluated &amp;amp; analyzed in parallel using a specified job scheduler (e.g. Slurm). If you are a Shoichet Lab user, please see a special section for you, below.&lt;br /&gt;
&lt;br /&gt;
To use DOCK 3.8, you must first license it and install it.&lt;br /&gt;
[[DOCK 3.8:How to install pydock3]]&lt;br /&gt;
&lt;br /&gt;
== Note for UCSF Shoichet Lab members ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;pydock3&#039;&#039; is already installed on the following clusters. You can source the provided Python environment scripts to expose the &#039;&#039;pydock3&#039;&#039; executable and declare default environmental variables:&lt;br /&gt;
&lt;br /&gt;
=== Wynton ===&lt;br /&gt;
&lt;br /&gt;
  source /wynton/group/bks/soft/python_envs/env.sh&lt;br /&gt;
&lt;br /&gt;
=== Gimel ===&lt;br /&gt;
&lt;br /&gt;
Only nodes other than &#039;&#039;gimel&#039;&#039; itself are supported, e.g., &#039;&#039;gimel5&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
 ssh gimel5&lt;br /&gt;
 source /nfs/soft/ian/env.sh&lt;br /&gt;
&lt;br /&gt;
== Subcommand: &#039;&#039;new&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
Prepare rec.pdb, xtal-lig.pdb as described in Bender, 2021. https://pubmed.ncbi.nlm.nih.gov/34561691/&lt;br /&gt;
Alternatively, download pre-prepared sample files from dudez2022.docking.org.&lt;br /&gt;
&lt;br /&gt;
Be sure that you are in the directory containing the required input files: &lt;br /&gt;
* &#039;&#039;rec.pdb&#039;&#039; or &#039;&#039;rec.crg.pdb&#039;&#039;&lt;br /&gt;
* &#039;&#039;xtal-lig.pdb&#039;&#039;&lt;br /&gt;
* &#039;&#039;actives.tgz&#039;&#039;&lt;br /&gt;
* &#039;&#039;decoys.tgz&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Note the inclusion of &#039;&#039;actives.tgz&#039;&#039; and &#039;&#039;decoys.tgz&#039;&#039;. Each of these is a tarball of DB2 files, which may optionally be gzipped (with extension .db2.gz). Each tarball represents a binary class for the binary classification task that the docking model is trained on. The positive class (actives) contains the molecules that you want the docking model to preferentially assign &#039;&#039;favorable&#039;&#039; docking scores to. The negative class (decoys) contains the molecules that you want the docking model to preferentially assign &#039;&#039;unfavorable&#039;&#039; scores to. The most common strategy is to use a set of known ligands as the actives and a larger set of property-matched decoys as the decoys (a decoy-to-active ratio of 50:1 is standard), but other strategies are supported. For example, to create a docking model that preferentially assigns favorable scores to agonists over antagonists, a set of agonists can be used as the actives and a set of antagonists as the decoys.&lt;br /&gt;
&lt;br /&gt;
DB2 files are not provided automatically, so you need to build the molecules yourself (see: https://tldr.docking.org/start/build3d38). Each tarball should contain only DB2 files. For example, if one has a directory &#039;&#039;actives/&#039;&#039; containing only DB2 files to use as the actives, then &#039;&#039;actives.tgz&#039;&#039; should be created as follows:&lt;br /&gt;
&lt;br /&gt;
 cd actives/&lt;br /&gt;
 tar -czf actives.tgz *.db2*&lt;br /&gt;
&lt;br /&gt;
Similarly, for a directory &#039;&#039;decoys/&#039;&#039; containing only DB2 files to use as decoys, &#039;&#039;decoys.tgz&#039;&#039; should be created as follows:&lt;br /&gt;
&lt;br /&gt;
 cd decoys/&lt;br /&gt;
 tar -czf decoys.tgz *.db2*&lt;br /&gt;
&lt;br /&gt;
To create the file structure for your dockopt job, simply type&lt;br /&gt;
&lt;br /&gt;
 pydock3 dockopt - new&lt;br /&gt;
&lt;br /&gt;
By default, the job directory is named &#039;&#039;dockopt_job&#039;&#039;. To specify a different name, use the &amp;quot;--job_dir_name&amp;quot; flag. E.g.:&lt;br /&gt;
&lt;br /&gt;
 pydock3 dockopt - new --job_dir_name=dockopt_job_2&lt;br /&gt;
&lt;br /&gt;
The job directory contains two sub-directories: &lt;br /&gt;
# &#039;&#039;working&#039;&#039;: input files, intermediate blaster files, sub-directories for individual blastermaster subroutines&lt;br /&gt;
# &#039;&#039;retrodock_jobs&#039;&#039;: individual retrodock jobs for each docking configuration&lt;br /&gt;
&lt;br /&gt;
The key difference between the working directories of &#039;&#039;blastermaster&#039;&#039; and &#039;&#039;dockopt&#039;&#039; is that the working directory of &#039;&#039;dockopt&#039;&#039; may contain multiple variants of the blaster files (suffixed by a number, e.g. &amp;quot;box_1&amp;quot;). These variant files are used to create the different docking configurations specified by the multi-valued entries of &#039;&#039;dockopt_config.yaml&#039;&#039;. They are created efficiently, such that the same variant used in multiple docking configurations is not created more than once. &lt;br /&gt;
&lt;br /&gt;
If your current working directory contains any of the following files, then they will be automatically copied into the working directory within the created job directory. This feature is intended to simplify the process of configuring the dockopt job.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;rec.pdb&#039;&#039;&lt;br /&gt;
* &#039;&#039;rec.crg.pdb&#039;&#039;&lt;br /&gt;
* &#039;&#039;xtal-lig.pdb&#039;&#039;&lt;br /&gt;
* &#039;&#039;reduce_wwPDB_het_dict.txt&#039;&#039;&lt;br /&gt;
* &#039;&#039;filt.params&#039;&#039;&lt;br /&gt;
* &#039;&#039;radii&#039;&#039;&lt;br /&gt;
* &#039;&#039;amb.crg.oxt&#039;&#039;&lt;br /&gt;
* &#039;&#039;vdw.siz&#039;&#039;&lt;br /&gt;
* &#039;&#039;delphi.def&#039;&#039;&lt;br /&gt;
* &#039;&#039;vdw.parms.amb.mindock&#039;&#039;&lt;br /&gt;
* &#039;&#039;prot.table.ambcrg.ambH&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Only the following are required; if the others are not detected, default or generated versions will be used instead.&lt;br /&gt;
* &#039;&#039;rec.pdb&#039;&#039; or &#039;&#039;rec.crg.pdb&#039;&#039;.  Either is required, but not both.  If both are present, only &#039;&#039;rec.crg.pdb&#039;&#039; is used.&lt;br /&gt;
* &#039;&#039;xtal-lig.pdb&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
If you would like to use files not present in your current working directory, copy them into your job&#039;s working directory, e.g.:&lt;br /&gt;
 cp &amp;lt;FILE_PATH&amp;gt; &amp;lt;JOB_DIR_NAME&amp;gt;/working/&lt;br /&gt;
&lt;br /&gt;
Finally, configure the &#039;&#039;dockopt_config.yaml&#039;&#039; file in the job directory to your specifications. The parameters in this file govern the behavior of dockopt.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039; The &#039;&#039;dockopt_config.yaml&#039;&#039; file differs from the &#039;&#039;blastermaster_config.yaml&#039;&#039; file in that every parameter of the former may accept either a single value or a &#039;&#039;list of comma-separated values&#039;&#039;, which indicates a pool of values to attempt for that parameter. Multiple such multi-valued parameters may be provided, and all unique resultant docking configurations will be attempted. &lt;br /&gt;
&lt;br /&gt;
Single-valued YAML line format:&lt;br /&gt;
&lt;br /&gt;
 distance_to_surface: 1.0&lt;br /&gt;
&lt;br /&gt;
Multi-valued YAML line format:&lt;br /&gt;
&lt;br /&gt;
 distance_to_surface: [1.0, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9]&lt;br /&gt;
&lt;br /&gt;
== Environmental variables ==&lt;br /&gt;
&lt;br /&gt;
=== TMPDIR ===&lt;br /&gt;
&lt;br /&gt;
Designate where temporary job files should be placed. E.g.:&lt;br /&gt;
&lt;br /&gt;
 export TMPDIR=/scratch&lt;br /&gt;
&lt;br /&gt;
==== Note for UCSF researchers ====&lt;br /&gt;
&lt;br /&gt;
On the Wynton cluster, &#039;&#039;/scratch&#039;&#039; only exists on development nodes (not log nodes). Therefore, we recommend running on development nodes (see: https://wynton.ucsf.edu/hpc/get-started/development-prototyping.html). E.g.:&lt;br /&gt;
&lt;br /&gt;
 ssh dev1&lt;br /&gt;
 export TMPDIR=/scratch&lt;br /&gt;
&lt;br /&gt;
If a log node must be used, then &#039;&#039;/wynton/scratch&#039;&#039; may be used:&lt;br /&gt;
&lt;br /&gt;
 ssh log1&lt;br /&gt;
 export TMPDIR=/wynton/scratch&lt;br /&gt;
&lt;br /&gt;
=== job scheduler environmental variables ===&lt;br /&gt;
&lt;br /&gt;
In order for &#039;&#039;dockopt&#039;&#039; to know which scheduler it should use, configure the following environmental variables according to the job scheduler available on your system.&lt;br /&gt;
&lt;br /&gt;
==== Slurm ====&lt;br /&gt;
&lt;br /&gt;
E.g., on the UCSF Shoichet Lab Gimel cluster (on any node other than &#039;gimel&#039; itself, such as &#039;gimel5&#039;):&lt;br /&gt;
&lt;br /&gt;
 export SBATCH_EXEC=/usr/bin/sbatch&lt;br /&gt;
 export SQUEUE_EXEC=/usr/bin/squeue&lt;br /&gt;
&lt;br /&gt;
==== SGE ====&lt;br /&gt;
&lt;br /&gt;
On most clusters using SGE the following should be correct:&lt;br /&gt;
&lt;br /&gt;
 export QSTAT_EXEC=/opt/sge/bin/lx-amd64/qstat&lt;br /&gt;
 export QSUB_EXEC=/opt/sge/bin/lx-amd64/qsub&lt;br /&gt;
 export SGE_SETTINGS=/opt/sge/default/common/settings.sh&lt;br /&gt;
&lt;br /&gt;
===== Note for UCSF researchers =====&lt;br /&gt;
&lt;br /&gt;
The following is necessary on the UCSF Wynton cluster:&lt;br /&gt;
&lt;br /&gt;
 export QSTAT_EXEC=/opt/sge/bin/lx-amd64/qstat&lt;br /&gt;
 export QSUB_EXEC=/opt/sge/bin/lx-amd64/qsub&lt;br /&gt;
 export SGE_SETTINGS=/opt/sge/wynton/common/settings.sh&lt;br /&gt;
&lt;br /&gt;
== Subcommand: &#039;&#039;run&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
Once your job has been configured to your liking, navigate to the job directory and run &#039;&#039;dockopt&#039;&#039;:&lt;br /&gt;
 cd &amp;lt;JOB_DIR_NAME&amp;gt;&lt;br /&gt;
 pydock3 dockopt - run &amp;lt;JOB_SCHEDULER_NAME&amp;gt; [--retrodock_job_timeout_minutes=None] [--retrodock_job_max_reattempts=0] [--extra_submission_cmd_params_str=None] [--export_negatives_mol2=False]&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;JOB_SCHEDULER_NAME&amp;gt; is one of:&lt;br /&gt;
* &#039;&#039;sge&#039;&#039;&lt;br /&gt;
* &#039;&#039;slurm&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This will execute the dockopt subroutines in sequence. Once this is done, the retrodock jobs for all created docking configurations are run in parallel via the scheduler. The state of the program is printed to standard output as it runs.&lt;br /&gt;
&lt;br /&gt;
Once the dockopt job is complete, the following files will be generated in the job directory:&lt;br /&gt;
* &#039;&#039;report.html&#039;&#039;: contains (1) a histogram of the performance of all tested docking configurations compared against a distribution of the performance of a random classifier, showing whether the tested docking configurations perform significantly better than random. Because many configurations are tested, a Bonferroni correction is applied to the significance threshold, dividing p=0.01 by the number of tested configurations. (2) ROC, charge, and energy plots of the top docking configurations, comparing actives vs. decoys, (3) box plots of enrichment for every multi-valued config parameter, and (4) heatmaps of enrichment for every pair of multi-valued config parameters.&lt;br /&gt;
* &#039;&#039;results.csv&#039;&#039;: parameter values, criterion values, and other information about each docking configuration.&lt;br /&gt;
&lt;br /&gt;
In addition, some number of the best retrodock jobs will be copied to their own sub-directory &#039;&#039;best_retrodock_jobs/&#039;&#039;. &lt;br /&gt;
&lt;br /&gt;
Within each sub-directory of &#039;&#039;best_retrodock_jobs/&#039;&#039;, there are:&lt;br /&gt;
* &#039;&#039;dockfiles/&#039;&#039;: parameter files and &#039;&#039;INDOCK&#039;&#039; for the given docking configuration&lt;br /&gt;
* &#039;&#039;output/&#039;&#039;: contains: &lt;br /&gt;
** sub-directories &#039;&#039;actives/&#039;&#039; (containing &#039;&#039;OUTDOCK&#039;&#039; and &#039;&#039;test.mol2&#039;&#039; files) and &#039;&#039;decoys/&#039;&#039; (containing just &#039;&#039;OUTDOCK&#039;&#039;)&lt;br /&gt;
** plot images (e.g., &#039;&#039;roc.png&#039;&#039;)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039; by default, a mol2 file is exported only for actives, not for decoys, in order to prevent disk space issues.&lt;/div&gt;</summary>
		<author><name>Ianscottknight</name></author>
	</entry>
	<entry>
		<id>http://wiki.docking.org/index.php?title=Dockopt_(pydock3_script)&amp;diff=15823</id>
		<title>Dockopt (pydock3 script)</title>
		<link rel="alternate" type="text/html" href="http://wiki.docking.org/index.php?title=Dockopt_(pydock3_script)&amp;diff=15823"/>
		<updated>2024-05-05T23:18:14Z</updated>

		<summary type="html">&lt;p&gt;Ianscottknight: /* Subcommand: new */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;dockopt&#039;&#039; allows the generation of many different docking configurations which are then evaluated &amp;amp; analyzed in parallel using a specified job scheduler (e.g. Slurm). If you are a Shoichet Lab user, please see a special section for you, below.&lt;br /&gt;
&lt;br /&gt;
To use DOCK 3.8, you must first license it and install it.&lt;br /&gt;
[[DOCK 3.8:How to install pydock3]]&lt;br /&gt;
&lt;br /&gt;
== Note for UCSF Shoichet Lab members ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;pydock3&#039;&#039; is already installed on the following clusters. You can source the provided Python environment scripts to expose the &#039;&#039;pydock3&#039;&#039; executable and declare default environmental variables:&lt;br /&gt;
&lt;br /&gt;
=== Wynton ===&lt;br /&gt;
&lt;br /&gt;
  source /wynton/group/bks/soft/python_envs/env.sh&lt;br /&gt;
&lt;br /&gt;
=== Gimel ===&lt;br /&gt;
&lt;br /&gt;
Only nodes other than &#039;&#039;gimel&#039;&#039; itself are supported, e.g., &#039;&#039;gimel5&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
 ssh gimel5&lt;br /&gt;
 source /nfs/soft/ian/env.sh&lt;br /&gt;
&lt;br /&gt;
== Subcommand: &#039;&#039;new&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
Prepare rec.pdb, xtal-lig.pdb as described in Bender, 2021. https://pubmed.ncbi.nlm.nih.gov/34561691/&lt;br /&gt;
Alternatively, download pre-prepared sample files from dudez2022.docking.org.&lt;br /&gt;
&lt;br /&gt;
Be sure that you are in the directory containing the required input files: &lt;br /&gt;
* &#039;&#039;rec.pdb&#039;&#039; or &#039;&#039;rec.crg.pdb&#039;&#039;&lt;br /&gt;
* &#039;&#039;xtal-lig.pdb&#039;&#039;&lt;br /&gt;
* &#039;&#039;actives.tgz&#039;&#039;&lt;br /&gt;
* &#039;&#039;decoys.tgz&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Note the inclusion of &#039;&#039;actives.tgz&#039;&#039; and &#039;&#039;decoys.tgz&#039;&#039;. Each of these is a tarball of DB2 files, which may optionally be gzipped (with extension .db2.gz). Each tarball represents a binary class for the binary classification task that the docking model is trained on. The positive class (actives) contains the molecules that you want the docking model to preferentially assign &#039;&#039;favorable&#039;&#039; docking scores to. The negative class (decoys) contains the molecules that you want the docking model to preferentially assign &#039;&#039;unfavorable&#039;&#039; scores to. The most common strategy is to use a set of known ligands as the actives and a larger set of property-matched decoys as the decoys (a decoy-to-active ratio of 50:1 is standard), but other strategies are supported. For example, to create a docking model that preferentially assigns favorable scores to agonists over antagonists, a set of agonists can be used as the actives and a set of antagonists as the decoys.&lt;br /&gt;
&lt;br /&gt;
DB2 files are not provided automatically, so you need to build the molecules yourself (see: https://tldr.docking.org/start/build3d38). Each tarball should contain only DB2 files. For example, if one has a directory &#039;&#039;actives/&#039;&#039; containing only DB2 files to use as the actives, then &#039;&#039;actives.tgz&#039;&#039; should be created as follows:&lt;br /&gt;
&lt;br /&gt;
 cd actives/&lt;br /&gt;
 tar -czf actives.tgz *.db2*&lt;br /&gt;
&lt;br /&gt;
Similarly, for a directory &#039;&#039;decoys/&#039;&#039; containing only DB2 files to use as decoys, &#039;&#039;decoys.tgz&#039;&#039; should be created as follows:&lt;br /&gt;
&lt;br /&gt;
 cd decoys/&lt;br /&gt;
 tar -czf decoys.tgz *.db2*&lt;br /&gt;
&lt;br /&gt;
To create the file structure for your dockopt job, simply type&lt;br /&gt;
&lt;br /&gt;
 pydock3 dockopt - new&lt;br /&gt;
&lt;br /&gt;
By default, the job directory is named &#039;&#039;dockopt_job&#039;&#039;. To specify a different name, use the &amp;quot;--job_dir_name&amp;quot; flag. E.g.:&lt;br /&gt;
&lt;br /&gt;
 pydock3 dockopt - new --job_dir_name=dockopt_job_2&lt;br /&gt;
&lt;br /&gt;
The job directory contains two sub-directories: &lt;br /&gt;
# &#039;&#039;working&#039;&#039;: input files, intermediate blaster files, sub-directories for individual blastermaster subroutines&lt;br /&gt;
# &#039;&#039;retrodock_jobs&#039;&#039;: individual retrodock jobs for each docking configuration&lt;br /&gt;
&lt;br /&gt;
The key difference between the working directories of &#039;&#039;blastermaster&#039;&#039; and &#039;&#039;dockopt&#039;&#039; is that the working directory of &#039;&#039;dockopt&#039;&#039; may contain multiple variants of the blaster files (suffixed by a number, e.g. &amp;quot;box_1&amp;quot;). These variant files are used to create the different docking configurations specified by the multi-valued entries of &#039;&#039;dockopt_config.yaml&#039;&#039;. They are created efficiently, such that the same variant used in multiple docking configurations is not created more than once. &lt;br /&gt;
&lt;br /&gt;
If your current working directory contains any of the following files, then they will be automatically copied into the working directory within the created job directory. This feature is intended to simplify the process of configuring the dockopt job.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;rec.pdb&#039;&#039;&lt;br /&gt;
* &#039;&#039;rec.crg.pdb&#039;&#039;&lt;br /&gt;
* &#039;&#039;xtal-lig.pdb&#039;&#039;&lt;br /&gt;
* &#039;&#039;reduce_wwPDB_het_dict.txt&#039;&#039;&lt;br /&gt;
* &#039;&#039;filt.params&#039;&#039;&lt;br /&gt;
* &#039;&#039;radii&#039;&#039;&lt;br /&gt;
* &#039;&#039;amb.crg.oxt&#039;&#039;&lt;br /&gt;
* &#039;&#039;vdw.siz&#039;&#039;&lt;br /&gt;
* &#039;&#039;delphi.def&#039;&#039;&lt;br /&gt;
* &#039;&#039;vdw.parms.amb.mindock&#039;&#039;&lt;br /&gt;
* &#039;&#039;prot.table.ambcrg.ambH&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Only the following files are required; if any of the others are not detected, default or generated versions will be used instead.&lt;br /&gt;
* &#039;&#039;rec.pdb&#039;&#039; or &#039;&#039;rec.crg.pdb&#039;&#039;.  Either is required, but not both.  If both are present, only &#039;&#039;rec.crg.pdb&#039;&#039; is used.&lt;br /&gt;
* &#039;&#039;xtal-lig.pdb&#039;&#039;&lt;br /&gt;
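The receptor-file precedence above can be sketched as follows (illustrative helper, not part of &#039;&#039;pydock3&#039;&#039;):&lt;br /&gt;

```python
from pathlib import Path

def pick_receptor(workdir):
    """Apply the rule above: rec.crg.pdb is preferred when both files exist."""
    crg = Path(workdir) / "rec.crg.pdb"
    plain = Path(workdir) / "rec.pdb"
    if crg.exists():
        return crg.name
    if plain.exists():
        return plain.name
    raise FileNotFoundError("either rec.pdb or rec.crg.pdb is required")
```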
&lt;br /&gt;
If you would like to use files not present in your current working directory, copy them into your job&#039;s working directory, e.g.:&lt;br /&gt;
 cp &amp;lt;FILE_PATH&amp;gt; &amp;lt;JOB_DIR_NAME&amp;gt;/working/&lt;br /&gt;
&lt;br /&gt;
Finally, configure the &#039;&#039;dockopt_config.yaml&#039;&#039; file in the job directory to your specifications. The parameters in this file govern the behavior of dockopt.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039; The &#039;&#039;dockopt_config.yaml&#039;&#039; file differs from the &#039;&#039;blastermaster_config.yaml&#039;&#039; file in that every parameter of the former may accept either a single value or a &#039;&#039;list of comma-separated values&#039;&#039;, which indicates a pool of values to attempt for that parameter. Multiple such multi-valued parameters may be provided, and all unique resultant docking configurations will be attempted. &lt;br /&gt;
&lt;br /&gt;
Single-valued YAML line format:&lt;br /&gt;
&lt;br /&gt;
 distance_to_surface: 1.0&lt;br /&gt;
&lt;br /&gt;
Multi-valued YAML line format:&lt;br /&gt;
&lt;br /&gt;
 distance_to_surface: [1.0, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9]&lt;br /&gt;
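Since all unique resultant configurations are attempted, multi-valued parameters combine as a Cartesian product of their value pools: two multi-valued entries of sizes m and n yield m*n docking configurations. A minimal sketch of this expansion (&#039;&#039;electrostatic_scale&#039;&#039; is a made-up parameter name used only for illustration):&lt;br /&gt;

```python
import itertools

def expand_configurations(config):
    """Yield one configuration dict per unique combination of values."""
    keys = list(config)
    pools = [v if isinstance(v, list) else [v] for v in config.values()]
    for combo in itertools.product(*pools):
        yield dict(zip(keys, combo))

config = {
    "distance_to_surface": [1.0, 1.1, 1.2],  # multi-valued
    "electrostatic_scale": 0.5,              # single-valued
}
configs = list(expand_configurations(config))  # 3 unique configurations
```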
&lt;br /&gt;
== Environmental variables ==&lt;br /&gt;
&lt;br /&gt;
=== TMPDIR ===&lt;br /&gt;
&lt;br /&gt;
Designate where temporary job files should be placed. E.g.:&lt;br /&gt;
&lt;br /&gt;
 export TMPDIR=/scratch&lt;br /&gt;
&lt;br /&gt;
==== Note for UCSF researchers ====&lt;br /&gt;
&lt;br /&gt;
On the Wynton cluster, &#039;&#039;/scratch&#039;&#039; only exists on development nodes (not log nodes). Therefore, we recommend running on development nodes (see: https://wynton.ucsf.edu/hpc/get-started/development-prototyping.html). E.g.:&lt;br /&gt;
&lt;br /&gt;
 ssh dev1&lt;br /&gt;
 export TMPDIR=/scratch&lt;br /&gt;
&lt;br /&gt;
If a log node must be used, then &#039;&#039;/wynton/scratch&#039;&#039; may be used:&lt;br /&gt;
&lt;br /&gt;
 ssh log1&lt;br /&gt;
 export TMPDIR=/wynton/scratch&lt;br /&gt;
&lt;br /&gt;
=== job scheduler environmental variables ===&lt;br /&gt;
&lt;br /&gt;
So that &#039;&#039;dockopt&#039;&#039; knows which scheduler to use, configure the environmental variables below that correspond to your cluster&#039;s job scheduler.&lt;br /&gt;
&lt;br /&gt;
==== Slurm ====&lt;br /&gt;
&lt;br /&gt;
E.g., on the UCSF Shoichet Lab Gimel cluster (on any node other than &#039;gimel&#039; itself, such as &#039;gimel5&#039;):&lt;br /&gt;
&lt;br /&gt;
 export SBATCH_EXEC=/usr/bin/sbatch&lt;br /&gt;
 export SQUEUE_EXEC=/usr/bin/squeue&lt;br /&gt;
&lt;br /&gt;
==== SGE ====&lt;br /&gt;
&lt;br /&gt;
On most clusters using SGE the following should be correct:&lt;br /&gt;
&lt;br /&gt;
 export QSTAT_EXEC=/opt/sge/bin/lx-amd64/qstat&lt;br /&gt;
 export QSUB_EXEC=/opt/sge/bin/lx-amd64/qsub&lt;br /&gt;
 export SGE_SETTINGS=/opt/sge/default/common/settings.sh&lt;br /&gt;
&lt;br /&gt;
===== Note for UCSF researchers =====&lt;br /&gt;
&lt;br /&gt;
The following is necessary on the UCSF Wynton cluster:&lt;br /&gt;
&lt;br /&gt;
 export QSTAT_EXEC=/opt/sge/bin/lx-amd64/qstat&lt;br /&gt;
 export QSUB_EXEC=/opt/sge/bin/lx-amd64/qsub&lt;br /&gt;
 export SGE_SETTINGS=/opt/sge/wynton/common/settings.sh&lt;br /&gt;
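A sketch of how a tool can infer the scheduler from these variables (illustrative only; the actual &#039;&#039;pydock3&#039;&#039; logic may differ):&lt;br /&gt;

```python
import os

def detect_scheduler(env=None):
    """Infer the job scheduler from which executables are configured."""
    env = os.environ if env is None else env
    if "SBATCH_EXEC" in env and "SQUEUE_EXEC" in env:
        return "slurm"
    if "QSUB_EXEC" in env and "QSTAT_EXEC" in env:
        return "sge"
    raise RuntimeError("set the Slurm or SGE environmental variables first")
```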
&lt;br /&gt;
== Subcommand: &#039;&#039;run&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
Once your job has been configured to your liking, navigate to the job directory and run &#039;&#039;dockopt&#039;&#039;:&lt;br /&gt;
 cd &amp;lt;JOB_DIR_NAME&amp;gt;&lt;br /&gt;
 pydock3 dockopt - run &amp;lt;JOB_SCHEDULER_NAME&amp;gt; [--retrodock_job_timeout_minutes=None] [--retrodock_job_max_reattempts=0] [--extra_submission_cmd_params_str=None] [--export_negatives_mol2=False]&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;JOB_SCHEDULER_NAME&amp;gt; is one of:&lt;br /&gt;
* &#039;&#039;sge&#039;&#039;&lt;br /&gt;
* &#039;&#039;slurm&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This executes the dockopt subroutines in sequence. Once they are done, the retrodock jobs for all created docking configurations are run in parallel via the scheduler. The state of the program is printed to standard output as it runs.&lt;br /&gt;
&lt;br /&gt;
Once the dockopt job is complete, the following files will be generated in the job directory:&lt;br /&gt;
* &#039;&#039;report.html&#039;&#039;: contains (1) a histogram comparing the performance of every tested docking configuration against the performance distribution of a random classifier, showing whether the tested configurations are significantly better than chance; because many configurations are tested, a Bonferroni correction is applied to the significance threshold, dividing p=0.01 by the number of tested configurations; (2) ROC, charge, and energy plots of the top docking configurations, comparing positive-class vs. negative-class molecules; (3) box plots of enrichment for every multi-valued config parameter; and (4) heatmaps of enrichment for every pair of multi-valued config parameters.&lt;br /&gt;
* &#039;&#039;results.csv&#039;&#039;: parameter values, criterion values, and other information about each docking configuration.&lt;br /&gt;
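The Bonferroni-corrected significance threshold used in &#039;&#039;report.html&#039;&#039; divides the base threshold by the number of tested configurations:&lt;br /&gt;

```python
def corrected_threshold(n_configurations, base_p=0.01):
    """Bonferroni correction: divide the base p-value threshold evenly."""
    return base_p / n_configurations

# With 100 tested docking configurations, a configuration must beat the
# random-classifier distribution at p = 0.0001 to be deemed significant.
threshold = corrected_threshold(100)
```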
&lt;br /&gt;
In addition, the best-performing retrodock jobs will be copied to the sub-directory &#039;&#039;best_retrodock_jobs/&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
Within each sub-directory of &#039;&#039;best_retrodock_jobs/&#039;&#039;, there are:&lt;br /&gt;
* &#039;&#039;dockfiles/&#039;&#039;: parameters files and &#039;&#039;INDOCK&#039;&#039; for given docking configuration&lt;br /&gt;
* &#039;&#039;output/&#039;&#039;: contains: &lt;br /&gt;
** sub-directories &#039;&#039;positives/&#039;&#039; (containing &#039;&#039;OUTDOCK&#039;&#039; and &#039;&#039;test.mol2&#039;&#039; files) and &#039;&#039;negatives/&#039;&#039; (containing just &#039;&#039;OUTDOCK&#039;&#039;)&lt;br /&gt;
* plot images (e.g., &#039;&#039;roc.png&#039;&#039;)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039; by default, a mol2 file is exported only for positives, not for negatives, in order to prevent disk space issues.&lt;/div&gt;</summary>
		<author><name>Ianscottknight</name></author>
	</entry>
	<entry>
		<id>http://wiki.docking.org/index.php?title=Calculate_ECFP4_using_RDKit&amp;diff=15626</id>
		<title>Calculate ECFP4 using RDKit</title>
		<link rel="alternate" type="text/html" href="http://wiki.docking.org/index.php?title=Calculate_ECFP4_using_RDKit&amp;diff=15626"/>
		<updated>2024-02-09T20:12:29Z</updated>

		<summary type="html">&lt;p&gt;Ianscottknight: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;1. Clone the ChemInfTools repository:&lt;br /&gt;
  git clone https://github.com/docking-org/ChemInfTools.git&lt;br /&gt;
&lt;br /&gt;
2. Ensure that you have sourced a python3 environment. E.g.,&lt;br /&gt;
  source /nfs/soft/ian/python3.8.5.sh &lt;br /&gt;
&lt;br /&gt;
3. Run the script:&lt;br /&gt;
  python ChemInfTools/utils/teb_chemaxon_cheminf_tools/generate_chemaxon_fingerprints_py3.py &amp;lt;SMILES_FILE&amp;gt; &amp;lt;OUTFILE_PREFIX&amp;gt;&lt;/div&gt;</summary>
		<author><name>Ianscottknight</name></author>
	</entry>
	<entry>
		<id>http://wiki.docking.org/index.php?title=Dockopt_(pydock3_script)&amp;diff=15616</id>
		<title>Dockopt (pydock3 script)</title>
		<link rel="alternate" type="text/html" href="http://wiki.docking.org/index.php?title=Dockopt_(pydock3_script)&amp;diff=15616"/>
		<updated>2024-02-01T19:54:56Z</updated>

		<summary type="html">&lt;p&gt;Ianscottknight: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;dockopt&#039;&#039; allows the generation of many different docking configurations which are then evaluated &amp;amp; analyzed in parallel using a specified job scheduler (e.g. Slurm). If you are a Shoichet Lab user, please see a special section for you, below.&lt;br /&gt;
&lt;br /&gt;
To use DOCK 3.8, you must first license it and install it.&lt;br /&gt;
[[DOCK 3.8:How to install pydock3]]&lt;br /&gt;
&lt;br /&gt;
== Note for UCSF Shoichet Lab members ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;pydock3&#039;&#039; is already installed on the following clusters. You can source the provided Python environment scripts to expose the &#039;&#039;pydock3&#039;&#039; executable and declare default environmental variables:&lt;br /&gt;
&lt;br /&gt;
=== Wynton ===&lt;br /&gt;
&lt;br /&gt;
  source /wynton/group/bks/soft/python_envs/env.sh&lt;br /&gt;
&lt;br /&gt;
=== Gimel ===&lt;br /&gt;
&lt;br /&gt;
Only nodes other than &#039;&#039;gimel&#039;&#039; itself are supported, e.g., &#039;&#039;gimel5&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
 ssh gimel5&lt;br /&gt;
 source /nfs/soft/ian/env.sh&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;new&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
Prepare &#039;&#039;rec.pdb&#039;&#039; and &#039;&#039;xtal-lig.pdb&#039;&#039; as described in Bender et al., 2021 (https://pubmed.ncbi.nlm.nih.gov/34561691/),&lt;br /&gt;
or download pre-prepared sample files from dudez2022.docking.org.&lt;br /&gt;
&lt;br /&gt;
Be sure that you are in the directory containing the required input files: &lt;br /&gt;
* &#039;&#039;rec.pdb&#039;&#039; or &#039;&#039;rec.crg.pdb&#039;&#039;&lt;br /&gt;
* &#039;&#039;xtal-lig.pdb&#039;&#039;&lt;br /&gt;
* &#039;&#039;positives.tgz&#039;&#039;&lt;br /&gt;
* &#039;&#039;negatives.tgz&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Note the inclusion of &#039;&#039;positives.tgz&#039;&#039; and &#039;&#039;negatives.tgz&#039;&#039;. Each of these is a tarball of DB2 files, which may be optionally gzipped (with extension .db2.gz). Each tarball represents a binary class for the binary classification task that the docking model is trained on. The positive class contains the molecules that you want the docking model to preferentially assign &#039;&#039;favorable&#039;&#039; docking scores to. The negative class contains the molecules that you want the docking model to preferentially assign &#039;&#039;unfavorable&#039;&#039; scores to. The most common strategy is to use a set of known ligands as the positive class and a larger set of property-matched decoys for the negative class (a decoy-to-active ratio of 50:1 is standard), but other strategies are supported. For example, to create a docking model that preferentially assigns favorable scores to agonists over antagonists, a set of agonists can be used as the positive class and a set of antagonists can be used for the negative class.&lt;br /&gt;
&lt;br /&gt;
Therefore, you need to build the molecules yourself (see: https://tldr.docking.org/start/build3d38). Each tarball should contain only DB2 files. For example, if one has a directory &#039;&#039;actives/&#039;&#039; containing only DB2 files to use as the positive class, then &#039;&#039;positives.tgz&#039;&#039; should be created as follows:&lt;br /&gt;
&lt;br /&gt;
 cd actives/&lt;br /&gt;
 tar -czf positives.tgz *.db2*&lt;br /&gt;
&lt;br /&gt;
Similarly, for a directory &#039;&#039;decoys/&#039;&#039; containing only DB2 files to use as the negative class, &#039;&#039;negatives.tgz&#039;&#039; should be created as follows:&lt;br /&gt;
&lt;br /&gt;
 cd decoys/&lt;br /&gt;
 tar -czf negatives.tgz *.db2*&lt;br /&gt;
&lt;br /&gt;
To create the file structure for your dockopt job, simply type&lt;br /&gt;
&lt;br /&gt;
 pydock3 dockopt - new&lt;br /&gt;
&lt;br /&gt;
By default, the job directory is named &#039;&#039;dockopt_job&#039;&#039;. To specify a different name, use the &amp;quot;--job_dir_name&amp;quot; flag. E.g.:&lt;br /&gt;
&lt;br /&gt;
 pydock3 dockopt - new --job_dir_name=dockopt_job_2&lt;br /&gt;
&lt;br /&gt;
The job directory contains two sub-directories: &lt;br /&gt;
# &#039;&#039;working&#039;&#039;: input files, intermediate blaster files, sub-directories for individual blastermaster subroutines&lt;br /&gt;
# &#039;&#039;retrodock_jobs&#039;&#039;: individual retrodock jobs for each docking configuration&lt;br /&gt;
&lt;br /&gt;
The key difference between the working directories of &#039;&#039;blastermaster&#039;&#039; and &#039;&#039;dockopt&#039;&#039; is that the working directory of &#039;&#039;dockopt&#039;&#039; may contain multiple variants of the blaster files (suffixed by a number, e.g. &amp;quot;box_1&amp;quot;). These variant files are used to create the different docking configurations specified by the multi-valued entries of &#039;&#039;dockopt_config.yaml&#039;&#039;. They are created efficiently, such that the same variant used in multiple docking configurations is not created more than once. &lt;br /&gt;
&lt;br /&gt;
If your current working directory contains any of the following files, then they will be automatically copied into the working directory within the created job directory. This feature is intended to simplify the process of configuring the dockopt job.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;rec.pdb&#039;&#039;&lt;br /&gt;
* &#039;&#039;rec.crg.pdb&#039;&#039;&lt;br /&gt;
* &#039;&#039;xtal-lig.pdb&#039;&#039;&lt;br /&gt;
* &#039;&#039;reduce_wwPDB_het_dict.txt&#039;&#039;&lt;br /&gt;
* &#039;&#039;filt.params&#039;&#039;&lt;br /&gt;
* &#039;&#039;radii&#039;&#039;&lt;br /&gt;
* &#039;&#039;amb.crg.oxt&#039;&#039;&lt;br /&gt;
* &#039;&#039;vdw.siz&#039;&#039;&lt;br /&gt;
* &#039;&#039;delphi.def&#039;&#039;&lt;br /&gt;
* &#039;&#039;vdw.parms.amb.mindock&#039;&#039;&lt;br /&gt;
* &#039;&#039;prot.table.ambcrg.ambH&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Only the following files are required; if any of the others are not detected, default or generated versions will be used instead.&lt;br /&gt;
* &#039;&#039;rec.pdb&#039;&#039; or &#039;&#039;rec.crg.pdb&#039;&#039;.  Either is required, but not both.  If both are present, only &#039;&#039;rec.crg.pdb&#039;&#039; is used.&lt;br /&gt;
* &#039;&#039;xtal-lig.pdb&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
If you would like to use files not present in your current working directory, copy them into your job&#039;s working directory, e.g.:&lt;br /&gt;
 cp &amp;lt;FILE_PATH&amp;gt; &amp;lt;JOB_DIR_NAME&amp;gt;/working/&lt;br /&gt;
&lt;br /&gt;
Finally, configure the &#039;&#039;dockopt_config.yaml&#039;&#039; file in the job directory to your specifications. The parameters in this file govern the behavior of dockopt.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039; The &#039;&#039;dockopt_config.yaml&#039;&#039; file differs from the &#039;&#039;blastermaster_config.yaml&#039;&#039; file in that every parameter of the former may accept either a single value or a &#039;&#039;list of comma-separated values&#039;&#039;, which indicates a pool of values to attempt for that parameter. Multiple such multi-valued parameters may be provided, and all unique resultant docking configurations will be attempted. &lt;br /&gt;
&lt;br /&gt;
Single-valued YAML line format:&lt;br /&gt;
&lt;br /&gt;
 distance_to_surface: 1.0&lt;br /&gt;
&lt;br /&gt;
Multi-valued YAML line format:&lt;br /&gt;
&lt;br /&gt;
 distance_to_surface: [1.0, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9]&lt;br /&gt;
&lt;br /&gt;
== Environmental variables ==&lt;br /&gt;
&lt;br /&gt;
=== TMPDIR ===&lt;br /&gt;
&lt;br /&gt;
Designate where temporary job files should be placed. E.g.:&lt;br /&gt;
&lt;br /&gt;
 export TMPDIR=/scratch&lt;br /&gt;
&lt;br /&gt;
==== Note for UCSF researchers ====&lt;br /&gt;
&lt;br /&gt;
On the Wynton cluster, &#039;&#039;/scratch&#039;&#039; only exists on development nodes (not log nodes). Therefore, we recommend running on development nodes (see: https://wynton.ucsf.edu/hpc/get-started/development-prototyping.html). E.g.:&lt;br /&gt;
&lt;br /&gt;
 ssh dev1&lt;br /&gt;
 export TMPDIR=/scratch&lt;br /&gt;
&lt;br /&gt;
If a log node must be used, then &#039;&#039;/wynton/scratch&#039;&#039; may be used:&lt;br /&gt;
&lt;br /&gt;
 ssh log1&lt;br /&gt;
 export TMPDIR=/wynton/scratch&lt;br /&gt;
&lt;br /&gt;
=== job scheduler environmental variables ===&lt;br /&gt;
&lt;br /&gt;
So that &#039;&#039;dockopt&#039;&#039; knows which scheduler to use, configure the environmental variables below that correspond to your cluster&#039;s job scheduler.&lt;br /&gt;
&lt;br /&gt;
==== Slurm ====&lt;br /&gt;
&lt;br /&gt;
E.g., on the UCSF Shoichet Lab Gimel cluster (on any node other than &#039;gimel&#039; itself, such as &#039;gimel5&#039;):&lt;br /&gt;
&lt;br /&gt;
 export SBATCH_EXEC=/usr/bin/sbatch&lt;br /&gt;
 export SQUEUE_EXEC=/usr/bin/squeue&lt;br /&gt;
&lt;br /&gt;
==== SGE ====&lt;br /&gt;
&lt;br /&gt;
On most clusters using SGE the following should be correct:&lt;br /&gt;
&lt;br /&gt;
 export QSTAT_EXEC=/opt/sge/bin/lx-amd64/qstat&lt;br /&gt;
 export QSUB_EXEC=/opt/sge/bin/lx-amd64/qsub&lt;br /&gt;
 export SGE_SETTINGS=/opt/sge/default/common/settings.sh&lt;br /&gt;
&lt;br /&gt;
===== Note for UCSF researchers =====&lt;br /&gt;
&lt;br /&gt;
The following is necessary on the UCSF Wynton cluster:&lt;br /&gt;
&lt;br /&gt;
 export QSTAT_EXEC=/opt/sge/bin/lx-amd64/qstat&lt;br /&gt;
 export QSUB_EXEC=/opt/sge/bin/lx-amd64/qsub&lt;br /&gt;
 export SGE_SETTINGS=/opt/sge/wynton/common/settings.sh&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;run&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
Once your job has been configured to your liking, navigate to the job directory and run &#039;&#039;dockopt&#039;&#039;:&lt;br /&gt;
 cd &amp;lt;JOB_DIR_NAME&amp;gt;&lt;br /&gt;
 pydock3 dockopt - run &amp;lt;JOB_SCHEDULER_NAME&amp;gt; [--retrodock_job_timeout_minutes=None] [--retrodock_job_max_reattempts=0] [--extra_submission_cmd_params_str=None] [--export_negatives_mol2=False]&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;JOB_SCHEDULER_NAME&amp;gt; is one of:&lt;br /&gt;
* &#039;&#039;sge&#039;&#039;&lt;br /&gt;
* &#039;&#039;slurm&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This will execute the many dockopt subroutines in sequence. Once this is done, the retrodock jobs for all created docking configurations are run in parallel via the scheduler. The state of the program will be printed to standard output as it runs.&lt;br /&gt;
&lt;br /&gt;
Once the dockopt job is complete, the following files will be generated in the job directory:&lt;br /&gt;
* &#039;&#039;report.html&#039;&#039;: contains (1) a histogram comparing the performance of all tested docking configurations against the performance distribution of a random classifier, showing whether any tested configuration is significantly better than what a random classifier can produce. Because many configurations are tested, a Bonferroni correction is applied to the significance threshold, dividing p=0.01 by the number of tested configurations; (2) ROC, charge, and energy plots of the top docking configurations, comparing positive-class vs. negative-class molecules; (3) box plots of enrichment for every multi-valued config parameter; and (4) heatmaps of enrichment for every pair of multi-valued config parameters.&lt;br /&gt;
* &#039;&#039;results.csv&#039;&#039;: parameter values, criterion values, and other information about each docking configuration.&lt;br /&gt;
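The Bonferroni correction mentioned above is simply the significance threshold divided by the number of tested configurations; as a sketch:&lt;br /&gt;

```python
# Bonferroni correction: divide the significance threshold by the
# number of hypotheses (here, tested docking configurations)
def bonferroni_threshold(p, n_configurations):
    return p / n_configurations

# e.g., 200 tested configurations at p = 0.01
print(bonferroni_threshold(0.01, 200))  # 5e-05
```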
&lt;br /&gt;
In addition, a number of the best retrodock jobs will each be copied to their own sub-directory within &#039;&#039;best_retrodock_jobs/&#039;&#039;. &lt;br /&gt;
&lt;br /&gt;
Within each sub-directory of &#039;&#039;best_retrodock_jobs/&#039;&#039;, there are:&lt;br /&gt;
* &#039;&#039;dockfiles/&#039;&#039;: parameter files and &#039;&#039;INDOCK&#039;&#039; for the given docking configuration&lt;br /&gt;
* &#039;&#039;output/&#039;&#039;: contains: &lt;br /&gt;
** sub-directories &#039;&#039;positives/&#039;&#039; (containing &#039;&#039;OUTDOCK&#039;&#039; and &#039;&#039;test.mol2&#039;&#039; files) and &#039;&#039;negatives/&#039;&#039; (containing just &#039;&#039;OUTDOCK&#039;&#039;)&lt;br /&gt;
** plot images (e.g., &#039;&#039;roc.png&#039;&#039;)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039; by default, a mol2 file is exported only for positives, not for negatives, in order to prevent disk space issues.&lt;/div&gt;</summary>
		<author><name>Ianscottknight</name></author>
	</entry>
	<entry>
		<id>http://wiki.docking.org/index.php?title=DOCK_3.8:How_to_build_a_release&amp;diff=15612</id>
		<title>DOCK 3.8:How to build a release</title>
		<link rel="alternate" type="text/html" href="http://wiki.docking.org/index.php?title=DOCK_3.8:How_to_build_a_release&amp;diff=15612"/>
		<updated>2024-01-26T22:55:34Z</updated>

		<summary type="html">&lt;p&gt;Ianscottknight: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page describes the formal definition of a release of the [https://dock.compbio.ucsf.edu/ &#039;&#039;UCSF DOCK&#039;&#039;] software suite&lt;br /&gt;
&lt;br /&gt;
1. Build desired version of [[pydock3]] Python package (e.g., build pip-installable wheel file using [[Python Poetry|poetry]]).&lt;br /&gt;
&lt;br /&gt;
2. Clone desired version of [[dock3]] program.&lt;br /&gt;
&lt;br /&gt;
3. Incorporate the fruits of (1) and (2) into [[DOCK3.8]] repository housing an arsenal of scripts (e.g., 3D build scripts, post-processing scripts) that are essential for following the [https://www.nature.com/articles/s41596-021-00597-z published Nature protocol].&lt;br /&gt;
&lt;br /&gt;
A particular commit of the DOCK3.8 repository formally becomes a particular distribution of [https://dock.compbio.ucsf.edu/ &#039;&#039;UCSF DOCK&#039;&#039;] software when the commit is tagged with the distribution&#039;s [https://semver.org/ semantic version (e.g., DOCK3.8 v1.2.0)]. Note that &#039;&#039;pydock3&#039;&#039; and &#039;&#039;dock3&#039;&#039; are separately versioned from &#039;&#039;DOCK3.8&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:DOCK 3.8]]&lt;/div&gt;</summary>
		<author><name>Ianscottknight</name></author>
	</entry>
	<entry>
		<id>http://wiki.docking.org/index.php?title=DOCK_3.8:How_to_build_a_release&amp;diff=15610</id>
		<title>DOCK 3.8:How to build a release</title>
		<link rel="alternate" type="text/html" href="http://wiki.docking.org/index.php?title=DOCK_3.8:How_to_build_a_release&amp;diff=15610"/>
		<updated>2024-01-26T08:47:06Z</updated>

		<summary type="html">&lt;p&gt;Ianscottknight: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page describes the formal definition of a release of the [https://dock.compbio.ucsf.edu/ &#039;&#039;UCSF DOCK&#039;&#039;] software suite&lt;br /&gt;
&lt;br /&gt;
1. Build desired version of [[pydock3]] Python package (e.g., build pip-installable wheel file using [[Python Poetry|poetry]]).&lt;br /&gt;
2. Clone desired version of [[dock3]] program.&lt;br /&gt;
3. Incorporate the fruits of (1) and (2) into [[DOCK3.8]] repository housing an arsenal of scripts (e.g., 3D build scripts, post-processing scripts) that are essential for following the [https://www.nature.com/articles/s41596-021-00597-z published Nature protocol].&lt;br /&gt;
&lt;br /&gt;
A particular commit of the DOCK3.8 repository formally becomes a particular distribution of [https://dock.compbio.ucsf.edu/ &#039;&#039;UCSF DOCK&#039;&#039;] software when the commit is tagged with the distribution&#039;s [https://semver.org/ semantic version (e.g., DOCK3.8 v1.2.0)]. Note that &#039;&#039;pydock3&#039;&#039; and &#039;&#039;dock3&#039;&#039; are separately versioned from &#039;&#039;DOCK3.8&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:DOCK 3.8]]&lt;/div&gt;</summary>
		<author><name>Ianscottknight</name></author>
	</entry>
	<entry>
		<id>http://wiki.docking.org/index.php?title=DOCK_3.8:How_to_build_a_release&amp;diff=15609</id>
		<title>DOCK 3.8:How to build a release</title>
		<link rel="alternate" type="text/html" href="http://wiki.docking.org/index.php?title=DOCK_3.8:How_to_build_a_release&amp;diff=15609"/>
		<updated>2024-01-26T08:08:52Z</updated>

		<summary type="html">&lt;p&gt;Ianscottknight: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page describes how to take the three github repos, combine them, and create a tarball for public use. &lt;br /&gt;
&lt;br /&gt;
1. Build desired version of [[pydock3]] Python package (e.g., build pip-installable wheel file using [[Python Poetry|poetry]]).&lt;br /&gt;
2. Clone desired version of [[dock3]] program.&lt;br /&gt;
3. Incorporate fruits of (1) and (2) into [[UCSF_DOCK]] repository housing legacy scripts that remain essential to established protocols (e.g., build scripts, post-processing scripts).&lt;br /&gt;
&lt;br /&gt;
A particular commit of the UCSF_DOCK repository becomes a particular distribution of &#039;&#039;UCSF DOCK&#039;&#039; software when the repository is assigned a tag of its [https://semver.org/ semantic version (e.g., v1.2.3)].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:DOCK 3.8]]&lt;/div&gt;</summary>
		<author><name>Ianscottknight</name></author>
	</entry>
	<entry>
		<id>http://wiki.docking.org/index.php?title=DOCK_3.8:How_to_build_a_release&amp;diff=15608</id>
		<title>DOCK 3.8:How to build a release</title>
		<link rel="alternate" type="text/html" href="http://wiki.docking.org/index.php?title=DOCK_3.8:How_to_build_a_release&amp;diff=15608"/>
		<updated>2024-01-26T07:47:50Z</updated>

		<summary type="html">&lt;p&gt;Ianscottknight: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page describes how to take the three github repos, combine them, and create a tarball for public use. &lt;br /&gt;
&lt;br /&gt;
1. Build desired version of [[pydock3]] Python package (e.g., build pip-installable wheel file using [[Python Poetry|poetry]]).&lt;br /&gt;
2. Clone desired version of [[dock3]] program.&lt;br /&gt;
3. Incorporate fruits of (1) and (2) into [[DOCKBASE]] repository housing legacy scripts that remain essential to established protocols (e.g., build scripts, post-processing scripts).&lt;br /&gt;
&lt;br /&gt;
The assignment of a semantic version to a particular commit of DOCKBASE&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:DOCK 3.8]]&lt;/div&gt;</summary>
		<author><name>Ianscottknight</name></author>
	</entry>
	<entry>
		<id>http://wiki.docking.org/index.php?title=SUBDOCK_DOCK3.8&amp;diff=15575</id>
		<title>SUBDOCK DOCK3.8</title>
		<link rel="alternate" type="text/html" href="http://wiki.docking.org/index.php?title=SUBDOCK_DOCK3.8&amp;diff=15575"/>
		<updated>2023-12-28T02:59:11Z</updated>

		<summary type="html">&lt;p&gt;Ianscottknight: /* What&amp;#039;s New? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Important note: although DOCK 3.8 appears in the header of this article, SUBDOCK is perfectly capable of running DOCK 3.7 workloads, though some features of DOCK 3.8 will not be taken advantage of.&lt;br /&gt;
&lt;br /&gt;
== Installing ==&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&lt;br /&gt;
git clone https://github.com/docking-org/SUBDOCK.git&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;IMPORTANT: subdock.bash expects to live in the same directory as rundock.bash!&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
subdock.bash is located at the root of the repository.&lt;br /&gt;
&lt;br /&gt;
subdock.bash can be called directly from any location; it is not sensitive to the current working directory.&lt;br /&gt;
&lt;br /&gt;
== What&#039;s New? ==&lt;br /&gt;
&lt;br /&gt;
Compared to older scripts, SUBDOCK is easier to use, has more features, and is much more flexible!&lt;br /&gt;
&lt;br /&gt;
==== December 2022 ====&lt;br /&gt;
&lt;br /&gt;
* All job platforms (e.g., Slurm, SGE) are supported by the same script&lt;br /&gt;
&lt;br /&gt;
* GNU Parallel is now supported as a job platform! Ideal for small-scale local testing. https://www.gnu.org/software/parallel/&lt;br /&gt;
&lt;br /&gt;
* Subdock can now be run on both individual db2.gz files &amp;amp; db2.tgz packages. A batch size can be set for either type, allowing for more flexibility.&lt;br /&gt;
&lt;br /&gt;
* Arguments can be provided via environment variables, e.g., &amp;quot;export KEY=VALUE&amp;quot;, or on the command line, e.g., &amp;quot;--key=value&amp;quot;&lt;br /&gt;
&lt;br /&gt;
* Subdock now prints out a superscript to copy-paste on success, convenient for re-submission.&lt;br /&gt;
&lt;br /&gt;
* Fully restartable on all jobs platforms! See below section for an explanation on what this means, why it matters, and instructions on usage.&lt;br /&gt;
&lt;br /&gt;
* INDOCK version header is automatically corrected, as are any file paths referenced by INDOCK.&lt;br /&gt;
&lt;br /&gt;
==== May 2023 ====&lt;br /&gt;
&lt;br /&gt;
* You can provide http(s) URLs to dockable files as your input in lieu of file paths!&lt;br /&gt;
&lt;br /&gt;
* Charity Engine is now supported as a job platform! More instructions for using it appear further down. (https://www.charityengine.com/)&lt;br /&gt;
&lt;br /&gt;
* Subdock will automatically detect if your jobs failed; there is no need to use an extra script to check whether your jobs have actually finished&lt;br /&gt;
&lt;br /&gt;
* A [[DOCK3R]] image is now used instead of a DOCK executable, in order to avoid pathological misbehavior witnessed on different systems.&lt;br /&gt;
&lt;br /&gt;
== Supported Platforms ==&lt;br /&gt;
&lt;br /&gt;
There are four platforms currently supported:&lt;br /&gt;
&lt;br /&gt;
# SLURM&lt;br /&gt;
# SGE (Sun Grid Engine)&lt;br /&gt;
#* &#039;&#039;&#039;note for BKS lab: the SGE queue on gimel does not have python3, so your jobs will not work!&#039;&#039;&#039;&lt;br /&gt;
# GNU Parallel (for local runs- ideal for testing)&lt;br /&gt;
# Charity Engine&lt;br /&gt;
&lt;br /&gt;
One of these platforms must be specified; SLURM is the default. The platform is selected with one of the following arguments, respectively:&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&lt;br /&gt;
--use-slurm=true&lt;br /&gt;
--use-sge=true&lt;br /&gt;
--use-parallel=true&lt;br /&gt;
--use-charity=true&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Using Charity Engine ====&lt;br /&gt;
&lt;br /&gt;
To use Charity Engine, you must have access to an executable of the Charity Engine CLI, as well as GNU Parallel.&lt;br /&gt;
&lt;br /&gt;
Additionally, you must provide your Charity Engine authentication key via the CHARITY_AUTHKEY environment variable or the --charity-authkey argument.&lt;br /&gt;
&lt;br /&gt;
WIP, more specific instructions to come.&lt;br /&gt;
&lt;br /&gt;
== Supported File Types ==&lt;br /&gt;
&lt;br /&gt;
DOCK can be run on individual db2.gz files or db2.tgz tar packages.&lt;br /&gt;
&lt;br /&gt;
The file type can be specified via the --use-db2=true or --use-db2-tgz=true arguments; db2.tgz is the default.&lt;br /&gt;
&lt;br /&gt;
Each job dispatched by SUBDOCK will consume BATCH_SIZE files, where BATCH_SIZE is equal to --use-db2-batch-size or --use-db2-tgz-batch-size depending on which file type is chosen.&lt;br /&gt;
&lt;br /&gt;
The number of jobs dispatched by SUBDOCK is equal to ceil(N / BATCH_SIZE), where N is the total number of input files.&lt;br /&gt;
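The job count above can be computed directly; for example:&lt;br /&gt;

```python
import math

# Number of SUBDOCK jobs dispatched for N input files with a given batch size
def n_jobs(n_files, batch_size):
    return math.ceil(n_files / batch_size)

print(n_jobs(250, 100))  # 3 jobs: two full batches of 100 plus one batch of 50
```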
&lt;br /&gt;
== Restartability ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;ONLY APPLICABLE FOR DOCK 3.8+!&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Restartability means that we can impose arbitrary time limits on how long our jobs can run *without* losing our progress. Time limits can be as large or as small as we want them to be, even as little as a few minutes per job! This flexibility lets docking jobs efficiently fill the gaps between longer-running jobs on the same cluster, so they tend to be treated preferentially by whichever system is in charge of scheduling.&lt;br /&gt;
&lt;br /&gt;
=== How to use for your Job Platform ===&lt;br /&gt;
&lt;br /&gt;
On SLURM, runtime can be defined with the &amp;quot;--time&amp;quot; argument, e.g.:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;subdock.bash --use-slurm=true --use-slurm-args=&amp;quot;--time=00:30:00&amp;quot;&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This will allow our job to run for 30 minutes before progress is saved &amp;amp; copied out.&lt;br /&gt;
&lt;br /&gt;
On GNU Parallel this is accomplished with &amp;quot;--timeout&amp;quot; (in seconds), e.g.:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;subdock.bash --use-parallel=true --use-parallel-args=&amp;quot;--timeout 1800&amp;quot;&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On SGE, the same can be achieved using the s_rt and h_rt parameters, e.g.:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;subdock.bash --use-sge=true --use-sge-args=&amp;quot;-l s_rt=00:29:30 -l h_rt=00:30:00&amp;quot;&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This tells SGE to warn the job 30 seconds prior to the 30-minute hard limit.&lt;br /&gt;
The GNU Parallel and SLURM platforms provide a hard-coded 30-second notice, whereas this notice period must be defined manually for SGE jobs.&lt;br /&gt;
&lt;br /&gt;
=== How to continue jobs ===&lt;br /&gt;
&lt;br /&gt;
Run subdock.bash again with the same parameters (particularly EXPORT_DEST, INPUT_SOURCE, USE_DB2, USE_DB2_TGZ, USE_DB2_BATCH_SIZE, and USE_DB2_TGZ_BATCH_SIZE) to restart your jobs. If you saved the superscript SUBDOCK prints on successful submission, you can simply call that. &lt;br /&gt;
&lt;br /&gt;
You&#039;ll know there is no more work to be done when SUBDOCK prints &amp;quot;all N jobs complete!&amp;quot;. SUBDOCK will also tell you what proportion of jobs have not yet completed on each submission.&lt;br /&gt;
&lt;br /&gt;
Output files are appended with a suffix indicating how many times the docking task has been resubmitted, e.g., OUTDOCK.0 for the first attempt, OUTDOCK.1 for the second, etc.&lt;br /&gt;
&lt;br /&gt;
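The attempt suffix makes it easy to locate the most recent output for a task. A hypothetical helper (not part of SUBDOCK) illustrating the naming scheme:&lt;br /&gt;

```python
import re

# Pick the latest OUTDOCK attempt from a list of file names
# (hypothetical helper, not part of SUBDOCK)
def latest_outdock(filenames):
    attempts = [f for f in filenames if re.fullmatch(r"OUTDOCK\.\d+", f)]
    if not attempts:
        return None
    return max(attempts, key=lambda f: int(f.rsplit(".", 1)[1]))

print(latest_outdock(["OUTDOCK.0", "OUTDOCK.1", "test.mol2.gz.0"]))  # prints OUTDOCK.1
```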
Be careful not to overlap your submissions; there are no guardrails in place to prevent overlapping runs from interfering with each other.&lt;br /&gt;
&lt;br /&gt;
== Full Example - All Steps ==&lt;br /&gt;
&lt;br /&gt;
This example assumes you have access to a DOCK executable and an installed scheduling system (SGE/SLURM/Parallel), but nothing else.&lt;br /&gt;
&lt;br /&gt;
1. Source subdock code from github&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&lt;br /&gt;
git clone https://github.com/docking-org/SUBDOCK.git&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
2. Fetch dockfiles from DUDE-Z; we will use DRD4 for this example.&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&lt;br /&gt;
# note- SUBDOCK automatically detects your DOCK version &amp;amp; corrects the INDOCK header accordingly&lt;br /&gt;
wget -r --reject=&amp;quot;index.html*&amp;quot; -nH --cut-dirs=2 -l1 --no-parent https://dudez.docking.org/DOCKING_GRIDS_AND_POSES/DRD4/dockfiles/&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
3a. Get a db2 database subset sample via ZINC-22. Example provided below:&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&lt;br /&gt;
wget http://files.docking.org/zinc22/zinc-22l/H17/H17P050/a/H17P050-N-laa.db2.tgz&lt;br /&gt;
wget http://files.docking.org/zinc22/zinc-22l/H17/H17P050/a/H17P050-N-lab.db2.tgz&lt;br /&gt;
wget http://files.docking.org/zinc22/zinc-22l/H17/H17P050/a/H17P050-N-lac.db2.tgz&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can select a db2 database subset via cartblanche22.docking.org. For wget-able files, choose the DOCK37 (*.db2.tgz) format with the URL download type. Multiple download types are supported; for example, if you are on Wynton you can download Wynton file paths instead, removing the need to download the files yourself.&lt;br /&gt;
&lt;br /&gt;
3b. If you downloaded the db2.tgz files yourself, create an sdi.in file from your database subset, which will serve as a list of files to evaluate. For example:&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&lt;br /&gt;
find $PWD -type f -name &#039;*.db2.tgz&#039; &amp;gt; sdi.in&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
4. Export the parameters we just prepared as environment variables. &#039;&#039;&#039;You need a DOCK executable!&#039;&#039;&#039; This can be found via our download server if you have a license; otherwise, lab members can directly pull https://github.com/docking-org/dock3.git. On the BKS cluster, some curated executables have been prepared with labels at /nfs/soft/dock/versions/dock38/executables. DOCK 3.7 executables may be found here as well!&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&lt;br /&gt;
export INPUT_SOURCE=$PWD/sdi.in&lt;br /&gt;
export EXPORT_DEST=$PWD/output&lt;br /&gt;
export DOCKFILES=$PWD/dockfiles&lt;br /&gt;
export DOCKEXEC=/nfs/soft/dock/versions/dock38/executables/dock38_nogist&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
5. Choose a platform. You must select only one platform; mixing and matching is not supported.&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&lt;br /&gt;
export USE_SLURM=true|...&lt;br /&gt;
export USE_SGE=true|...&lt;br /&gt;
export USE_PARALLEL=true|...&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Any value other than exactly &amp;quot;true&amp;quot; will be interpreted as false.&lt;br /&gt;
&lt;br /&gt;
6a. Run docking!&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&lt;br /&gt;
bash ~/SUBDOCK/subdock.bash&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
6b. You can also use command-line arguments instead of environment exports, if desired. The two styles can be mixed and matched.&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&lt;br /&gt;
export DOCKEXEC=$PWD/DOCK/ucsfdock/docking/DOCK/dock64&lt;br /&gt;
bash ~/SUBDOCK/subdock.bash --input-source=$PWD/sdi.in --export-dest=$PWD/output --dockfiles=$PWD/dockfiles --use-slurm=true&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
7. On successful submission, subdock will print out a convenient &amp;quot;superscript&amp;quot; to copy &amp;amp; paste for any future re-submissions.&lt;br /&gt;
&lt;br /&gt;
== Mixing DOCK 3.7 and DOCK 3.8 - known problems ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Headline: Though SUBDOCK is compatible with DOCK 3.7, and will allow docking of ligands built for 3.8 in 3.7, it is NOT RECOMMENDED to do this without using a specially prepared 3.7 executable!&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
If you&#039;re running DOCK 3.8 against recently built ligands, you may encounter error messages that look like this:&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;       1      2 bonds with error&lt;br /&gt;
Error. newlist is not big enough&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Or worse, like this:&lt;br /&gt;
 &amp;lt;nowiki&amp;gt; Warning. tempconf = 0&lt;br /&gt;
         1597 -&amp;gt;            0 -&amp;gt;            0&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The latter error messages have the potential to cause some serious damage, as they are emitted very frequently &amp;amp; may consume excessive disk space. SUBDOCK will check for these messages periodically during DOCK&#039;s runtime &amp;amp; kill the process if they are found.&lt;br /&gt;
&lt;br /&gt;
If you are on 3.8 and are encountering these messages still, use the dock38_nogist executable described in [[How_to_install_DOCK_3.8#Prebuilt_Executable]]. This version voids the code related to the GIST scoring function, which is responsible for these errors.&lt;br /&gt;
&lt;br /&gt;
If you are still using 3.7, it is possible to prepare a version that keeps everything the same except for omitting the dangerous &amp;quot;tempconf&amp;quot; message.&lt;br /&gt;
&lt;br /&gt;
== SUBDOCK help splash - all argument descriptions &amp;amp; defaults ==&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&lt;br /&gt;
[user@machine SUBDOCK]$ ./subdock.bash --help&lt;br /&gt;
SUBDOCK! Run docking workloads via job controller of your choice&lt;br /&gt;
=================required arguments=================&lt;br /&gt;
expected env arg: EXPORT_DEST, --export-dest&lt;br /&gt;
arg description: nfs output destination for OUTDOCK and test.mol2.gz files&lt;br /&gt;
&lt;br /&gt;
expected env arg: INPUT_SOURCE, --input-source&lt;br /&gt;
arg description: nfs directory containing one or more .db2.tgz files OR a file containing a list of db2.tgz files&lt;br /&gt;
&lt;br /&gt;
expected env arg: DOCKFILES, --dockfiles&lt;br /&gt;
arg description: nfs directory containing dock related files and INDOCK configuration for docking run&lt;br /&gt;
&lt;br /&gt;
expected env arg: DOCKEXEC, --dockexec&lt;br /&gt;
arg description: nfs path to dock executable&lt;br /&gt;
&lt;br /&gt;
=================job controller settings=================&lt;br /&gt;
optional env arg missing: USE_SLURM, --use-slurm&lt;br /&gt;
arg description: use slurm&lt;br /&gt;
defaulting to false&lt;br /&gt;
&lt;br /&gt;
optional env arg missing: USE_SLURM_ARGS, --use-slurm-args&lt;br /&gt;
arg description: addtl arguments for SLURM sbatch command&lt;br /&gt;
defaulting to &lt;br /&gt;
&lt;br /&gt;
optional env arg missing: USE_SGE, --use-sge&lt;br /&gt;
arg description: use sge&lt;br /&gt;
defaulting to false&lt;br /&gt;
&lt;br /&gt;
optional env arg missing: USE_SGE_ARGS, --use-sge-args&lt;br /&gt;
arg description: addtl arguments for SGE qsub command&lt;br /&gt;
defaulting to &lt;br /&gt;
&lt;br /&gt;
optional env arg missing: USE_PARALLEL, --use-parallel&lt;br /&gt;
arg description: use GNU parallel&lt;br /&gt;
defaulting to false&lt;br /&gt;
&lt;br /&gt;
optional env arg missing: USE_PARALLEL_ARGS, --use-parallel-args&lt;br /&gt;
arg description: addtl arguments for GNU parallel command&lt;br /&gt;
defaulting to &lt;br /&gt;
&lt;br /&gt;
=================input settings=================&lt;br /&gt;
optional env arg missing: USE_DB2_TGZ, --use-db2-tgz&lt;br /&gt;
arg description: dock db2.tgz tar files&lt;br /&gt;
defaulting to true&lt;br /&gt;
&lt;br /&gt;
optional env arg missing: USE_DB2_TGZ_BATCH_SIZE, --use-db2-tgz-batch-size&lt;br /&gt;
arg description: how many db2.tgz to evaluate per batch&lt;br /&gt;
defaulting to 1&lt;br /&gt;
&lt;br /&gt;
optional env arg missing: USE_DB2, --use-db2&lt;br /&gt;
arg description: dock db2.gz individual files&lt;br /&gt;
defaulting to false&lt;br /&gt;
&lt;br /&gt;
optional env arg missing: USE_DB2_BATCH_SIZE, --use-db2-batch-size&lt;br /&gt;
arg description: how many db2.gz to evaluate per batch&lt;br /&gt;
defaulting to 100&lt;br /&gt;
&lt;br /&gt;
=================addtl job configuration=================&lt;br /&gt;
optional env arg missing: MAX_PARALLEL, --max-parallel&lt;br /&gt;
arg description: max jobs allowed to run in parallel&lt;br /&gt;
defaulting to -1&lt;br /&gt;
&lt;br /&gt;
optional env arg missing: SHRTCACHE, --shrtcache&lt;br /&gt;
arg description: temporary local storage for job files&lt;br /&gt;
defaulting to /scratch&lt;br /&gt;
&lt;br /&gt;
optional env arg missing: LONGCACHE, --longcache&lt;br /&gt;
arg description: longer term storage for files shared between jobs&lt;br /&gt;
defaulting to /scratch&lt;br /&gt;
&lt;br /&gt;
=================miscellaneous=================&lt;br /&gt;
optional env arg missing: SUBMIT_WAIT_TIME, --submit-wait-time&lt;br /&gt;
arg description: how many seconds to wait before submitting&lt;br /&gt;
defaulting to 5&lt;br /&gt;
&lt;br /&gt;
optional env arg missing: USE_CACHED_SUBMIT_STATS, --use-cached-submit-stats&lt;br /&gt;
arg description: only check completion for jobs submitted in the latest iteration. Faster re-submission, but will ignore jobs that have been manually reset&lt;br /&gt;
defaulting to false&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category:DOCK_3.8]]&lt;/div&gt;</summary>
		<author><name>Ianscottknight</name></author>
	</entry>
	<entry>
		<id>http://wiki.docking.org/index.php?title=Blastermaster_(pydock3_script)&amp;diff=15573</id>
		<title>Blastermaster (pydock3 script)</title>
		<link rel="alternate" type="text/html" href="http://wiki.docking.org/index.php?title=Blastermaster_(pydock3_script)&amp;diff=15573"/>
		<updated>2023-12-12T07:49:01Z</updated>

		<summary type="html">&lt;p&gt;Ianscottknight: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;blastermaster&#039;&#039; allows the generation of a specific docking configuration for a given receptor and ligand.&lt;br /&gt;
&lt;br /&gt;
== Note for UCSF Shoichet Lab members ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;pydock3&#039;&#039; is already installed on the following clusters. You can source the provided Python environment scripts to expose the &#039;&#039;pydock3&#039;&#039; command:&lt;br /&gt;
&lt;br /&gt;
=== Wynton ===&lt;br /&gt;
&lt;br /&gt;
  source /wynton/group/bks/soft/python_envs/python3.8.5.sh&lt;br /&gt;
&lt;br /&gt;
=== Gimel ===&lt;br /&gt;
&lt;br /&gt;
Only nodes other than &#039;&#039;gimel&#039;&#039; itself are supported, e.g., &#039;&#039;gimel5&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
 ssh gimel5&lt;br /&gt;
 source /nfs/soft/ian/python3.8.5.sh&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;new&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
First you need to create the file structure for your blastermaster job. To do so, simply type&lt;br /&gt;
&lt;br /&gt;
 pydock3 blastermaster - new&lt;br /&gt;
&lt;br /&gt;
By default, the job directory is named &#039;&#039;blastermaster_job&#039;&#039;. To specify a different name, type&lt;br /&gt;
&lt;br /&gt;
 pydock3 blastermaster - new &amp;lt;JOB_DIR_NAME&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The job directory contains two sub-directories: &lt;br /&gt;
# &#039;&#039;working&#039;&#039;: input files, intermediate blaster files, sub-directories for individual blastermaster subroutines&lt;br /&gt;
# &#039;&#039;dockfiles&#039;&#039;: output files (DOCK parameter files &amp;amp; INDOCK)&lt;br /&gt;
&lt;br /&gt;
If your current working directory contains any of the following files, then they will be automatically copied into the working directory within the created job directory. This feature is intended to simplify the process of configuring the blastermaster job.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;rec.pdb&#039;&#039;&lt;br /&gt;
* &#039;&#039;rec.crg.pdb&#039;&#039;&lt;br /&gt;
* &#039;&#039;xtal-lig.pdb&#039;&#039;&lt;br /&gt;
* &#039;&#039;reduce_wwPDB_het_dict.txt&#039;&#039;&lt;br /&gt;
* &#039;&#039;filt.params&#039;&#039;&lt;br /&gt;
* &#039;&#039;radii&#039;&#039;&lt;br /&gt;
* &#039;&#039;amb.crg.oxt&#039;&#039;&lt;br /&gt;
* &#039;&#039;vdw.siz&#039;&#039;&lt;br /&gt;
* &#039;&#039;delphi.def&#039;&#039;&lt;br /&gt;
* &#039;&#039;vdw.parms.amb.mindock&#039;&#039;&lt;br /&gt;
* &#039;&#039;prot.table.ambcrg.ambH&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Only the following are required. Default or generated versions of the others will be used if they are not detected.&lt;br /&gt;
* &#039;&#039;rec.pdb&#039;&#039;&lt;br /&gt;
* &#039;&#039;xtal-lig.pdb&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
If you would like to use files not present in your current working directory, then copy them into your job&#039;s working directory, e.g.:&lt;br /&gt;
 cp &amp;lt;FILE_PATH&amp;gt; &amp;lt;JOB_DIR_NAME&amp;gt;/working/&lt;br /&gt;
&lt;br /&gt;
Finally, configure the &#039;&#039;blastermaster_config.yaml&#039;&#039; file in the job directory to your specifications. The parameters in this file govern the behavior of blastermaster.&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;run&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
Once your job has been configured to your liking, navigate to the job directory and run blastermaster:&lt;br /&gt;
 cd &amp;lt;JOB_DIR_NAME&amp;gt;&lt;br /&gt;
 pydock3 blastermaster - run&lt;br /&gt;
&lt;br /&gt;
This will execute the many blastermaster subroutines in sequence. The state of the program will be printed to standard output as it runs.&lt;/div&gt;</summary>
		<author><name>Ianscottknight</name></author>
	</entry>
	<entry>
		<id>http://wiki.docking.org/index.php?title=Dockopt_(pydock3_script)&amp;diff=15504</id>
		<title>Dockopt (pydock3 script)</title>
		<link rel="alternate" type="text/html" href="http://wiki.docking.org/index.php?title=Dockopt_(pydock3_script)&amp;diff=15504"/>
		<updated>2023-09-02T04:35:59Z</updated>

		<summary type="html">&lt;p&gt;Ianscottknight: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;dockopt&#039;&#039; allows the generation of many different docking configurations which are then evaluated &amp;amp; analyzed in parallel using a specified job scheduler (e.g. Slurm). If you are a Shoichet Lab user, please see a special section for you, below.&lt;br /&gt;
&lt;br /&gt;
To use DOCK 3.8, you must first license it and install it.&lt;br /&gt;
[[DOCK 3.8:How to install pydock3]]&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;new&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
Prepare &#039;&#039;rec.pdb&#039;&#039; and &#039;&#039;xtal-lig.pdb&#039;&#039; as described in Bender, 2021 (https://pubmed.ncbi.nlm.nih.gov/34561691/),&lt;br /&gt;
or download pre-prepared sample files from dudez2022.docking.org.&lt;br /&gt;
&lt;br /&gt;
Be sure that you are in the directory containing the required input files: &lt;br /&gt;
* &#039;&#039;rec.pdb&#039;&#039; or &#039;&#039;rec.crg.pdb&#039;&#039;&lt;br /&gt;
* &#039;&#039;xtal-lig.pdb&#039;&#039;&lt;br /&gt;
* &#039;&#039;positives.tgz&#039;&#039;&lt;br /&gt;
* &#039;&#039;negatives.tgz&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Note the inclusion of &#039;&#039;positives.tgz&#039;&#039; and &#039;&#039;negatives.tgz&#039;&#039;. Each of these is a tarball of DB2 files, which may be optionally gzipped (with extension .db2.gz). Each tarball represents a binary class for the binary classification task that the docking model is trained on. The positive class contains the molecules that you want the docking model to preferentially assign &#039;&#039;favorable&#039;&#039; docking scores to. The negative class contains the molecules that you want the docking model to preferentially assign &#039;&#039;unfavorable&#039;&#039; scores to. The most common strategy is to use a set of known ligands as the positive class and a larger set of property-matched decoys for the negative class (a decoy-to-active ratio of 50:1 is standard), but other strategies are supported. For example, to create a docking model that preferentially assigns favorable scores to agonists over antagonists, a set of agonists can be used as the positive class and a set of antagonists can be used for the negative class.&lt;br /&gt;
&lt;br /&gt;
You need to build the DB2 files for your molecules yourself (see: https://tldr.docking.org/start/build3d38). Each tarball should contain only DB2 files. For example, if one has a directory &#039;&#039;actives/&#039;&#039; containing only DB2 files to use as the positive class, then &#039;&#039;positives.tgz&#039;&#039; should be created as follows:&lt;br /&gt;
&lt;br /&gt;
 cd actives/&lt;br /&gt;
 tar -czf positives.tgz *.db2*&lt;br /&gt;
&lt;br /&gt;
Similarly, for a directory &#039;&#039;decoys/&#039;&#039; containing only DB2 files to use as the negative class, &#039;&#039;negatives.tgz&#039;&#039; should be created as follows:&lt;br /&gt;
&lt;br /&gt;
 cd decoys/&lt;br /&gt;
 tar -czf negatives.tgz *.db2*&lt;br /&gt;
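&lt;br /&gt;
Before submitting, you can verify that a tarball contains only DB2 files by listing its contents, e.g.:&lt;br /&gt;
&lt;br /&gt;
 tar -tzf negatives.tgz | head&lt;br /&gt;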
&lt;br /&gt;
To create the file structure for your dockopt job, simply type&lt;br /&gt;
&lt;br /&gt;
 pydock3 dockopt - new&lt;br /&gt;
&lt;br /&gt;
By default, the job directory is named &#039;&#039;dockopt_job&#039;&#039;. To specify a different name, use the &amp;quot;--job_dir_name&amp;quot; flag. E.g.:&lt;br /&gt;
&lt;br /&gt;
 pydock3 dockopt - new --job_dir_name=dockopt_job_2&lt;br /&gt;
&lt;br /&gt;
The job directory contains two sub-directories: &lt;br /&gt;
# &#039;&#039;working&#039;&#039;: input files, intermediate blaster files, sub-directories for individual blastermaster subroutines&lt;br /&gt;
# &#039;&#039;retrodock_jobs&#039;&#039;: individual retrodock jobs for each docking configuration&lt;br /&gt;
&lt;br /&gt;
The key difference between the working directories of &#039;&#039;blastermaster&#039;&#039; and &#039;&#039;dockopt&#039;&#039; is that the working directory of &#039;&#039;dockopt&#039;&#039; may contain multiple variants of the blaster files (suffixed by a number, e.g. &amp;quot;box_1&amp;quot;). These variant files are used to create the different docking configurations specified by the multi-valued entries of &#039;&#039;dockopt_config.yaml&#039;&#039;. They are created efficiently, such that the same variant used in multiple docking configurations is not created more than once. &lt;br /&gt;
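&lt;br /&gt;
As an illustration (these exact file names are hypothetical), a &#039;&#039;dockopt&#039;&#039; working directory sweeping over box sizes might contain numbered variants side by side:&lt;br /&gt;
&lt;br /&gt;
 working/box_1&lt;br /&gt;
 working/box_2&lt;br /&gt;
 working/box_3&lt;br /&gt;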
&lt;br /&gt;
If your current working directory contains any of the following files, then they will be automatically copied into the working directory within the created job directory. This feature is intended to simplify the process of configuring the dockopt job.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;rec.pdb&#039;&#039;&lt;br /&gt;
* &#039;&#039;rec.crg.pdb&#039;&#039;&lt;br /&gt;
* &#039;&#039;xtal-lig.pdb&#039;&#039;&lt;br /&gt;
* &#039;&#039;reduce_wwPDB_het_dict.txt&#039;&#039;&lt;br /&gt;
* &#039;&#039;filt.params&#039;&#039;&lt;br /&gt;
* &#039;&#039;radii&#039;&#039;&lt;br /&gt;
* &#039;&#039;amb.crg.oxt&#039;&#039;&lt;br /&gt;
* &#039;&#039;vdw.siz&#039;&#039;&lt;br /&gt;
* &#039;&#039;delphi.def&#039;&#039;&lt;br /&gt;
* &#039;&#039;vdw.parms.amb.mindock&#039;&#039;&lt;br /&gt;
* &#039;&#039;prot.table.ambcrg.ambH&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Only the following are required; default or generated versions of the others will be used if they are not detected.&lt;br /&gt;
* &#039;&#039;rec.pdb&#039;&#039; or &#039;&#039;rec.crg.pdb&#039;&#039;.  Either is required, but not both.  If both are present, only &#039;&#039;rec.crg.pdb&#039;&#039; is used.&lt;br /&gt;
* &#039;&#039;xtal-lig.pdb&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
If you would like to use files not present in your current working directory, copy them into your job&#039;s working directory, e.g.:&lt;br /&gt;
 cp &amp;lt;FILE_PATH&amp;gt; &amp;lt;JOB_DIR_NAME&amp;gt;/working/&lt;br /&gt;
&lt;br /&gt;
Finally, configure the &#039;&#039;dockopt_config.yaml&#039;&#039; file in the job directory to your specifications. The parameters in this file govern the behavior of dockopt.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039; The &#039;&#039;dockopt_config.yaml&#039;&#039; file differs from the &#039;&#039;blastermaster_config.yaml&#039;&#039; file in that every parameter of the former may accept either a single value or a &#039;&#039;list of comma-separated values&#039;&#039;, which indicates a pool of values to attempt for that parameter. Multiple such multi-valued parameters may be provided, and all unique resultant docking configurations will be attempted. &lt;br /&gt;
&lt;br /&gt;
Single-valued YAML line format:&lt;br /&gt;
&lt;br /&gt;
 distance_to_surface: 1.0&lt;br /&gt;
&lt;br /&gt;
Multi-valued YAML line format:&lt;br /&gt;
&lt;br /&gt;
 distance_to_surface: [1.0, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9]&lt;br /&gt;
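&lt;br /&gt;
Multiple multi-valued parameters combine as a Cartesian product. For example, the following pair of entries would yield 2 × 3 = 6 unique docking configurations (the second parameter name here is purely illustrative; consult your generated &#039;&#039;dockopt_config.yaml&#039;&#039; for the actual parameter names):&lt;br /&gt;
&lt;br /&gt;
 distance_to_surface: [1.0, 1.5]&lt;br /&gt;
 illustrative_parameter: [0.0, 0.25, 0.5]&lt;br /&gt;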
&lt;br /&gt;
== Environmental variables ==&lt;br /&gt;
&lt;br /&gt;
=== TMPDIR ===&lt;br /&gt;
&lt;br /&gt;
Designate where temporary job files should be placed. E.g.:&lt;br /&gt;
&lt;br /&gt;
 export TMPDIR=/scratch&lt;br /&gt;
&lt;br /&gt;
==== Note for UCSF researchers ====&lt;br /&gt;
&lt;br /&gt;
On the Wynton cluster, &#039;&#039;/scratch&#039;&#039; only exists on development nodes (not log nodes). Therefore, we recommend running on development nodes (see: https://wynton.ucsf.edu/hpc/get-started/development-prototyping.html). E.g.:&lt;br /&gt;
&lt;br /&gt;
 ssh dev1&lt;br /&gt;
 export TMPDIR=/scratch&lt;br /&gt;
&lt;br /&gt;
If a log node must be used, then &#039;&#039;/wynton/scratch&#039;&#039; may be used:&lt;br /&gt;
&lt;br /&gt;
 ssh log1&lt;br /&gt;
 export TMPDIR=/wynton/scratch&lt;br /&gt;
&lt;br /&gt;
=== job scheduler environmental variables ===&lt;br /&gt;
&lt;br /&gt;
In order for &#039;&#039;dockopt&#039;&#039; to know which scheduler to use, configure the following environmental variables according to the job scheduler available on your cluster.&lt;br /&gt;
&lt;br /&gt;
==== Slurm ====&lt;br /&gt;
&lt;br /&gt;
E.g., on the UCSF Shoichet Lab Gimel cluster (on any node other than &#039;gimel&#039; itself, such as &#039;gimel5&#039;):&lt;br /&gt;
&lt;br /&gt;
 export SBATCH_EXEC=/usr/bin/sbatch&lt;br /&gt;
 export SQUEUE_EXEC=/usr/bin/squeue&lt;br /&gt;
&lt;br /&gt;
==== SGE ====&lt;br /&gt;
&lt;br /&gt;
On most clusters using SGE, the following should be correct:&lt;br /&gt;
&lt;br /&gt;
 export QSTAT_EXEC=/opt/sge/bin/lx-amd64/qstat&lt;br /&gt;
 export QSUB_EXEC=/opt/sge/bin/lx-amd64/qsub&lt;br /&gt;
 export SGE_SETTINGS=/opt/sge/default/common/settings.sh&lt;br /&gt;
&lt;br /&gt;
===== Note for UCSF researchers =====&lt;br /&gt;
&lt;br /&gt;
The following is necessary on the UCSF Wynton cluster:&lt;br /&gt;
&lt;br /&gt;
 export QSTAT_EXEC=/opt/sge/bin/lx-amd64/qstat&lt;br /&gt;
 export QSUB_EXEC=/opt/sge/bin/lx-amd64/qsub&lt;br /&gt;
 export SGE_SETTINGS=/opt/sge/wynton/common/settings.sh&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;run&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
Once your job has been configured to your liking, navigate to the job directory and run &#039;&#039;dockopt&#039;&#039;:&lt;br /&gt;
 cd &amp;lt;JOB_DIR_NAME&amp;gt;&lt;br /&gt;
 pydock3 dockopt - run &amp;lt;JOB_SCHEDULER_NAME&amp;gt; [--retrodock_job_timeout_minutes=None] [--retrodock_job_max_reattempts=0] [--extra_submission_cmd_params_str=None] [--export_negatives_mol2=False]&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;JOB_SCHEDULER_NAME&amp;gt; is one of:&lt;br /&gt;
* &#039;&#039;sge&#039;&#039;&lt;br /&gt;
* &#039;&#039;slurm&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This will execute the many dockopt subroutines in sequence. Once this is done, the retrodock jobs for all created docking configurations are run in parallel via the scheduler. The state of the program will be printed to standard output as it runs.&lt;br /&gt;
&lt;br /&gt;
Once the dockopt job is complete, the following files will be generated in the job directory:&lt;br /&gt;
* &#039;&#039;report.html&#039;&#039;: contains (1) a histogram of the performance of all tested docking configurations compared against the performance distribution of a random classifier, showing whether the tested configurations are significantly better than ones that a random classifier can produce; because many configurations are tested, a Bonferroni correction is applied to the significance threshold, dividing p=0.01 by the number of tested configurations; (2) ROC, charge, and energy plots of the top docking configurations, comparing positive-class vs. negative-class molecules; (3) box plots of enrichment for every multi-valued config parameter; and (4) heatmaps of enrichment for every pair of multi-valued config parameters.&lt;br /&gt;
* &#039;&#039;results.csv&#039;&#039;: parameter values, criterion values, and other information about each docking configuration.&lt;br /&gt;
&lt;br /&gt;
In addition, a selection of the best-performing retrodock jobs will be copied to their own sub-directory, &#039;&#039;best_retrodock_jobs/&#039;&#039;. &lt;br /&gt;
&lt;br /&gt;
Within each sub-directory of &#039;&#039;best_retrodock_jobs/&#039;&#039;, there are:&lt;br /&gt;
* &#039;&#039;dockfiles/&#039;&#039;: parameter files and &#039;&#039;INDOCK&#039;&#039; for the given docking configuration&lt;br /&gt;
* &#039;&#039;output/&#039;&#039;: contains: &lt;br /&gt;
** sub-directories &#039;&#039;positives/&#039;&#039; (containing &#039;&#039;OUTDOCK&#039;&#039; and &#039;&#039;test.mol2&#039;&#039; files) and &#039;&#039;negatives/&#039;&#039; (containing just &#039;&#039;OUTDOCK&#039;&#039;)&lt;br /&gt;
* plot images (e.g., &#039;&#039;roc.png&#039;&#039;)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039; by default, a mol2 file is exported only for positives, not for negatives, in order to prevent disk space issues.&lt;br /&gt;
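&lt;br /&gt;
To export mol2 poses for the negatives as well (at the cost of extra disk space), set the corresponding flag when launching the run, e.g.:&lt;br /&gt;
&lt;br /&gt;
 pydock3 dockopt - run slurm --export_negatives_mol2=True&lt;br /&gt;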
&lt;br /&gt;
== Note for UCSF Shoichet Lab members ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;pydock3&#039;&#039; is already installed on the following clusters. You can source the provided Python environment scripts to expose the &#039;&#039;pydock3&#039;&#039; executable and declare default environmental variables:&lt;br /&gt;
&lt;br /&gt;
=== Wynton ===&lt;br /&gt;
&lt;br /&gt;
  source /wynton/group/bks/soft/python_envs/env.sh&lt;br /&gt;
&lt;br /&gt;
=== Gimel ===&lt;br /&gt;
&lt;br /&gt;
Only nodes other than &#039;&#039;gimel&#039;&#039; itself are supported, e.g., &#039;&#039;gimel5&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
 ssh gimel5&lt;br /&gt;
 source /nfs/soft/ian/env.sh&lt;/div&gt;</summary>
		<author><name>Ianscottknight</name></author>
	</entry>
	<entry>
		<id>http://wiki.docking.org/index.php?title=TLDR:decoygen&amp;diff=15503</id>
		<title>TLDR:decoygen</title>
		<link rel="alternate" type="text/html" href="http://wiki.docking.org/index.php?title=TLDR:decoygen&amp;diff=15503"/>
		<updated>2023-08-29T20:25:17Z</updated>

		<summary type="html">&lt;p&gt;Ianscottknight: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Purpose ==&lt;br /&gt;
Create decoys for [[Retrospective Docking|retrospective docking]] by the method used to create the [[DUDE-Z Dataset|DUDE-Z dataset]]. See https://dudez.docking.org for sample data of known actives and their corresponding decoys for 43 targets.&lt;br /&gt;
&lt;br /&gt;
== Inputs ==&lt;br /&gt;
* Ligands provided in SMILES format (actives.smi)&lt;br /&gt;
* Current default parameters are shown in the following decoy_generation.in file:&lt;br /&gt;
See [[Generating_decoys_(Reed%27s_way)]] for an explanation of what these parameters mean and how DUDE-Z works.&lt;br /&gt;
&lt;br /&gt;
    PROTONATE YES&lt;br /&gt;
    MWT 0 125&lt;br /&gt;
    LOGP 0 3.6&lt;br /&gt;
    RB 0 5&lt;br /&gt;
    HBA 0 4&lt;br /&gt;
    HBD 0 3&lt;br /&gt;
    CHARGE 0 0&lt;br /&gt;
    LIGAND TC RANGE 0.0 0.35&lt;br /&gt;
    MINIMUM DECOYS PER LIGAND 20&lt;br /&gt;
    DECOYS PER LIGAND 50&lt;br /&gt;
    MAXIMUM TC BETWEEN DECOYS 0.8&lt;br /&gt;
    TANIMOTO YES&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;If you wish to override the defaults in decoy_generation.in, download it, edit it, and submit the edited file.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Outputs ==&lt;br /&gt;
* List of decoys&lt;br /&gt;
&lt;br /&gt;
== Notes ==&lt;/div&gt;</summary>
		<author><name>Ianscottknight</name></author>
	</entry>
	<entry>
		<id>http://wiki.docking.org/index.php?title=TLDR:dude-z&amp;diff=15502</id>
		<title>TLDR:dude-z</title>
		<link rel="alternate" type="text/html" href="http://wiki.docking.org/index.php?title=TLDR:dude-z&amp;diff=15502"/>
		<updated>2023-08-29T20:16:45Z</updated>

		<summary type="html">&lt;p&gt;Ianscottknight: Ianscottknight moved page TLDR:dude-z to TLDR:decoygen: Naming the decoy generation pipeline `dude-z` doesn&amp;#039;t really make sense given that DUDE-Z is a dataset. I am guessing that the original logic behind this was that the pipeline was used to help create DUDE-Z. Still, the data types `pipeline` and `dataset` are distinct, therefore justifying this name change.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;#REDIRECT [[TLDR:decoygen]]&lt;/div&gt;</summary>
		<author><name>Ianscottknight</name></author>
	</entry>
	<entry>
		<id>http://wiki.docking.org/index.php?title=TLDR:decoygen&amp;diff=15501</id>
		<title>TLDR:decoygen</title>
		<link rel="alternate" type="text/html" href="http://wiki.docking.org/index.php?title=TLDR:decoygen&amp;diff=15501"/>
		<updated>2023-08-29T20:16:45Z</updated>

		<summary type="html">&lt;p&gt;Ianscottknight: Ianscottknight moved page TLDR:dude-z to TLDR:decoygen: Naming the decoy generation pipeline `dude-z` doesn&amp;#039;t really make sense given that DUDE-Z is a dataset. I am guessing that the original logic behind this was that the pipeline was used to help create DUDE-Z. Still, the data types `pipeline` and `dataset` are distinct, therefore justifying this name change.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Purpose ==&lt;br /&gt;
Assemble decoys for docking using the DUDE-Z approach. See https://dudez.docking.org for sample data.&lt;br /&gt;
&lt;br /&gt;
== Inputs ==&lt;br /&gt;
* Ligands provided in SMILES format (actives.smi)&lt;br /&gt;
* Current default parameters are shown in the following decoy_generation.in file:&lt;br /&gt;
See https://wiki.docking.org/index.php/Generating_decoys_(Reed%27s_way) for an explanation of what these parameters mean and how DUDE-Z works.&lt;br /&gt;
&lt;br /&gt;
    PROTONATE YES&lt;br /&gt;
    MWT 0 125&lt;br /&gt;
    LOGP 0 3.6&lt;br /&gt;
    RB 0 5&lt;br /&gt;
    HBA 0 4&lt;br /&gt;
    HBD 0 3&lt;br /&gt;
    CHARGE 0 0&lt;br /&gt;
    LIGAND TC RANGE 0.0 0.35&lt;br /&gt;
    MINIMUM DECOYS PER LIGAND 20&lt;br /&gt;
    DECOYS PER LIGAND 50&lt;br /&gt;
    MAXIMUM TC BETWEEN DECOYS 0.8&lt;br /&gt;
    TANIMOTO YES&lt;br /&gt;
&#039;&#039;&#039;If you wish to override the defaults in decoy_generation.in, download it, edit it, and submit the edited file.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Outputs ==&lt;br /&gt;
* List of decoys&lt;br /&gt;
== Notes ==&lt;/div&gt;</summary>
		<author><name>Ianscottknight</name></author>
	</entry>
	<entry>
		<id>http://wiki.docking.org/index.php?title=DOCK3.8:Pydock3&amp;diff=15500</id>
		<title>DOCK3.8:Pydock3</title>
		<link rel="alternate" type="text/html" href="http://wiki.docking.org/index.php?title=DOCK3.8:Pydock3&amp;diff=15500"/>
		<updated>2023-08-24T01:51:04Z</updated>

		<summary type="html">&lt;p&gt;Ianscottknight: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;pydock3&#039;&#039; is a Python package wrapping the [[DOCK|DOCK Fortran program]] that provides tools to help standardize and automate the computational methods employed in molecular docking. It is a natural successor to DOCK Blaster, originally published in 2009, and blastermaster.py, part of the [[DOCK 3.7]] release in 2012. &lt;br /&gt;
&lt;br /&gt;
[[File:Pydock3 logo.png|thumb]]&lt;br /&gt;
&lt;br /&gt;
Scripts included in &#039;&#039;pydock3&#039;&#039;:&lt;br /&gt;
* &#039;&#039;dockopt&#039;&#039;: generate many different docking configurations, perform retrospective docking on them in parallel using a specified job scheduler (e.g. Slurm), and analyze the results. &lt;br /&gt;
* &#039;&#039;blastermaster&#039;&#039;: generate a specific docking configuration for a given receptor and ligand, intended for use by experts who wish to tune the docking model themselves.  This is a direct successor of blastermaster.py from DOCK 3.7.&lt;br /&gt;
&lt;br /&gt;
A [[docking configuration|&#039;&#039;&#039;docking configuration&#039;&#039;&#039;]] is a unique set of (1) DOCK parameter files (e.g., &#039;&#039;matching_spheres.sph&#039;&#039;), (2) an INDOCK file, and (3) a DOCK executable.&lt;br /&gt;
&lt;br /&gt;
= Installation =&lt;br /&gt;
&lt;br /&gt;
See: [[DOCK 3.8:How to install pydock3]].&lt;br /&gt;
&lt;br /&gt;
= Instructions =&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;blastermaster&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
See: [[blastermaster (pydock3 script)]].&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;dockopt&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
See: [[dockopt (pydock3 script)]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Note for UCSF Shoichet Lab members ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;pydock3&#039;&#039; is already installed on the following clusters. You can source the provided Python environment scripts to expose the &#039;&#039;pydock3&#039;&#039; executable:&lt;br /&gt;
&lt;br /&gt;
=== Wynton ===&lt;br /&gt;
&lt;br /&gt;
  source /wynton/group/bks/soft/python_envs/python3.8.5.sh&lt;br /&gt;
&lt;br /&gt;
=== Gimel ===&lt;br /&gt;
&lt;br /&gt;
Only nodes other than &#039;&#039;gimel&#039;&#039; itself are supported, e.g., &#039;&#039;gimel5&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
 ssh gimel5&lt;br /&gt;
 source /nfs/soft/ian/python3.8.5.sh&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:DOCK 3.8]]&lt;br /&gt;
[[Category:DOCK Blaster]]&lt;/div&gt;</summary>
		<author><name>Ianscottknight</name></author>
	</entry>
	<entry>
		<id>http://wiki.docking.org/index.php?title=DOCK3.8:Pydock3&amp;diff=15499</id>
		<title>DOCK3.8:Pydock3</title>
		<link rel="alternate" type="text/html" href="http://wiki.docking.org/index.php?title=DOCK3.8:Pydock3&amp;diff=15499"/>
		<updated>2023-08-24T01:46:12Z</updated>

		<summary type="html">&lt;p&gt;Ianscottknight: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;pydock3&#039;&#039; is a Python package wrapping the [[DOCK|DOCK Fortran program]] that provides tools to help standardize and automate the computational methods employed in molecular docking. It is a natural successor to DOCK Blaster, originally published in 2009, and blastermaster.py, part of the [[DOCK 3.7]] release in 2012. &lt;br /&gt;
&lt;br /&gt;
[[Pydock3_logo.png]]&lt;br /&gt;
&lt;br /&gt;
Scripts included in &#039;&#039;pydock3&#039;&#039;:&lt;br /&gt;
* &#039;&#039;dockopt&#039;&#039;: generate many different docking configurations, perform retrospective docking on them in parallel using a specified job scheduler (e.g. Slurm), and analyze the results. &lt;br /&gt;
* &#039;&#039;blastermaster&#039;&#039;: generate a specific docking configuration for a given receptor and ligand, intended for use by experts who wish to tune the docking model themselves.  This is a direct successor of blastermaster.py from DOCK 3.7.&lt;br /&gt;
&lt;br /&gt;
A [[docking configuration|&#039;&#039;&#039;docking configuration&#039;&#039;&#039;]] is a unique set of (1) DOCK parameter files (e.g., &#039;&#039;matching_spheres.sph&#039;&#039;), (2) an INDOCK file, and (3) a DOCK executable.&lt;br /&gt;
&lt;br /&gt;
= Installation =&lt;br /&gt;
&lt;br /&gt;
See: [[DOCK 3.8:How to install pydock3]].&lt;br /&gt;
&lt;br /&gt;
= Instructions =&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;blastermaster&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
See: [[blastermaster (pydock3 script)]].&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;dockopt&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
See: [[dockopt (pydock3 script)]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Note for UCSF Shoichet Lab members ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;pydock3&#039;&#039; is already installed on the following clusters. You can source the provided Python environment scripts to expose the &#039;&#039;pydock3&#039;&#039; executable:&lt;br /&gt;
&lt;br /&gt;
=== Wynton ===&lt;br /&gt;
&lt;br /&gt;
  source /wynton/group/bks/soft/python_envs/python3.8.5.sh&lt;br /&gt;
&lt;br /&gt;
=== Gimel ===&lt;br /&gt;
&lt;br /&gt;
Only nodes other than &#039;&#039;gimel&#039;&#039; itself are supported, e.g., &#039;&#039;gimel5&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
 ssh gimel5&lt;br /&gt;
 source /nfs/soft/ian/python3.8.5.sh&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:DOCK 3.8]]&lt;br /&gt;
[[Category:DOCK Blaster]]&lt;/div&gt;</summary>
		<author><name>Ianscottknight</name></author>
	</entry>
	<entry>
		<id>http://wiki.docking.org/index.php?title=File:Pydock3_logo.png&amp;diff=15498</id>
		<title>File:Pydock3 logo.png</title>
		<link rel="alternate" type="text/html" href="http://wiki.docking.org/index.php?title=File:Pydock3_logo.png&amp;diff=15498"/>
		<updated>2023-08-24T01:44:22Z</updated>

		<summary type="html">&lt;p&gt;Ianscottknight: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;pydock3 logo&lt;/div&gt;</summary>
		<author><name>Ianscottknight</name></author>
	</entry>
	<entry>
		<id>http://wiki.docking.org/index.php?title=Dockopt_(pydock3_script)&amp;diff=15467</id>
		<title>Dockopt (pydock3 script)</title>
		<link rel="alternate" type="text/html" href="http://wiki.docking.org/index.php?title=Dockopt_(pydock3_script)&amp;diff=15467"/>
		<updated>2023-07-27T22:31:21Z</updated>

		<summary type="html">&lt;p&gt;Ianscottknight: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;dockopt&#039;&#039; allows the generation of many different docking configurations which are then evaluated &amp;amp; analyzed in parallel using a specified job scheduler (e.g. Slurm). If you are a Shoichet Lab user, please see a special section for you, below.&lt;br /&gt;
&lt;br /&gt;
To use DOCK 3.8, you must first license it and install it.&lt;br /&gt;
[[DOCK 3.8:How to install pydock3]]&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;new&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
Prepare &#039;&#039;rec.pdb&#039;&#039; and &#039;&#039;xtal-lig.pdb&#039;&#039; as described in Bender et al., 2021 (https://pubmed.ncbi.nlm.nih.gov/34561691/),&lt;br /&gt;
or download pre-prepared sample files from dudez2022.docking.org.&lt;br /&gt;
&lt;br /&gt;
Be sure that you are in the directory containing the required input files: &lt;br /&gt;
* &#039;&#039;rec.pdb&#039;&#039; or &#039;&#039;rec.crg.pdb&#039;&#039;&lt;br /&gt;
* &#039;&#039;xtal-lig.pdb&#039;&#039;&lt;br /&gt;
* &#039;&#039;positives.tgz&#039;&#039;&lt;br /&gt;
* &#039;&#039;negatives.tgz&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Note the inclusion of &#039;&#039;positives.tgz&#039;&#039; and &#039;&#039;negatives.tgz&#039;&#039;. Each of these is a tarball of DB2 files, which may be optionally gzipped (with extension .db2.gz). Each tarball represents a binary class for the binary classification task that the docking model is trained on. The positive class contains the molecules that you want the docking model to preferentially assign &#039;&#039;favorable&#039;&#039; docking scores to. The negative class contains the molecules that you want the docking model to preferentially assign &#039;&#039;unfavorable&#039;&#039; scores to. The most common strategy is to use a set of known ligands as the positive class and a larger set of property-matched decoys for the negative class (a decoy-to-active ratio of 50:1 is standard), but other strategies are supported. For example, to create a docking model that preferentially assigns favorable scores to agonists over antagonists, a set of agonists can be used as the positive class and a set of antagonists can be used for the negative class.&lt;br /&gt;
&lt;br /&gt;
You need to build the DB2 files for your molecules yourself (see: https://tldr.docking.org/start/build3d38). Each tarball should contain only DB2 files. For example, if one has a directory &#039;&#039;actives/&#039;&#039; containing only DB2 files to use as the positive class, then &#039;&#039;positives.tgz&#039;&#039; should be created as follows:&lt;br /&gt;
&lt;br /&gt;
 cd actives/&lt;br /&gt;
 tar -czf positives.tgz *.db2*&lt;br /&gt;
&lt;br /&gt;
Similarly, for a directory &#039;&#039;decoys/&#039;&#039; containing only DB2 files to use as the negative class, &#039;&#039;negatives.tgz&#039;&#039; should be created as follows:&lt;br /&gt;
&lt;br /&gt;
 cd decoys/&lt;br /&gt;
 tar -czf negatives.tgz *.db2*&lt;br /&gt;
&lt;br /&gt;
To create the file structure for your dockopt job, simply type&lt;br /&gt;
&lt;br /&gt;
 pydock3 dockopt - new [--job_dir_name=dockopt_job]&lt;br /&gt;
&lt;br /&gt;
By default, the job directory is named &#039;&#039;dockopt_job&#039;&#039;. To specify a different name, use the &amp;quot;--job_dir_name&amp;quot; flag. E.g.:&lt;br /&gt;
&lt;br /&gt;
 pydock3 dockopt - new --job_dir_name=dockopt_job_2&lt;br /&gt;
&lt;br /&gt;
The job directory contains two sub-directories: &lt;br /&gt;
# &#039;&#039;working&#039;&#039;: input files, intermediate blaster files, sub-directories for individual blastermaster subroutines&lt;br /&gt;
# &#039;&#039;retrodock_jobs&#039;&#039;: individual retrodock jobs for each docking configuration&lt;br /&gt;
&lt;br /&gt;
The key difference between the working directories of &#039;&#039;blastermaster&#039;&#039; and &#039;&#039;dockopt&#039;&#039; is that the working directory of &#039;&#039;dockopt&#039;&#039; may contain multiple variants of the blaster files (suffixed by a number, e.g. &amp;quot;box_1&amp;quot;). These variant files are used to create the different docking configurations specified by the multi-valued entries of &#039;&#039;dockopt_config.yaml&#039;&#039;. They are created efficiently, such that the same variant used in multiple docking configurations is not created more than once. &lt;br /&gt;
&lt;br /&gt;
If your current working directory contains any of the following files, then they will be automatically copied into the working directory within the created job directory. This feature is intended to simplify the process of configuring the dockopt job.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;rec.pdb&#039;&#039;&lt;br /&gt;
* &#039;&#039;rec.crg.pdb&#039;&#039;&lt;br /&gt;
* &#039;&#039;xtal-lig.pdb&#039;&#039;&lt;br /&gt;
* &#039;&#039;reduce_wwPDB_het_dict.txt&#039;&#039;&lt;br /&gt;
* &#039;&#039;filt.params&#039;&#039;&lt;br /&gt;
* &#039;&#039;radii&#039;&#039;&lt;br /&gt;
* &#039;&#039;amb.crg.oxt&#039;&#039;&lt;br /&gt;
* &#039;&#039;vdw.siz&#039;&#039;&lt;br /&gt;
* &#039;&#039;delphi.def&#039;&#039;&lt;br /&gt;
* &#039;&#039;vdw.parms.amb.mindock&#039;&#039;&lt;br /&gt;
* &#039;&#039;prot.table.ambcrg.ambH&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Only the following are required; default or generated versions of the others will be used if they are not detected.&lt;br /&gt;
* &#039;&#039;rec.pdb&#039;&#039; or &#039;&#039;rec.crg.pdb&#039;&#039;.  Either is required, but not both.  If both are present, only &#039;&#039;rec.crg.pdb&#039;&#039; is used.&lt;br /&gt;
* &#039;&#039;xtal-lig.pdb&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
If you would like to use files not present in your current working directory, copy them into your job&#039;s working directory, e.g.:&lt;br /&gt;
 cp &amp;lt;FILE_PATH&amp;gt; &amp;lt;JOB_DIR_NAME&amp;gt;/working/&lt;br /&gt;
&lt;br /&gt;
Finally, configure the &#039;&#039;dockopt_config.yaml&#039;&#039; file in the job directory to your specifications. The parameters in this file govern the behavior of dockopt.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039; The &#039;&#039;dockopt_config.yaml&#039;&#039; file differs from the &#039;&#039;blastermaster_config.yaml&#039;&#039; file in that every parameter of the former may accept either a single value or a &#039;&#039;list of comma-separated values&#039;&#039;, which indicates a pool of values to attempt for that parameter. Multiple such multi-valued parameters may be provided, and all unique resultant docking configurations will be attempted. &lt;br /&gt;
&lt;br /&gt;
Single-valued YAML line format:&lt;br /&gt;
&lt;br /&gt;
 distance_to_surface: 1.0&lt;br /&gt;
&lt;br /&gt;
Multi-valued YAML line format:&lt;br /&gt;
&lt;br /&gt;
 distance_to_surface: [1.0, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9]&lt;br /&gt;
&lt;br /&gt;
== Environmental variables ==&lt;br /&gt;
&lt;br /&gt;
=== TMPDIR ===&lt;br /&gt;
&lt;br /&gt;
Designate where temporary job files should be placed. E.g.:&lt;br /&gt;
&lt;br /&gt;
 export TMPDIR=/scratch&lt;br /&gt;
&lt;br /&gt;
==== Note for UCSF researchers ====&lt;br /&gt;
&lt;br /&gt;
On the Wynton cluster, &#039;&#039;/scratch&#039;&#039; only exists on development nodes (not log nodes). Therefore, we recommend running on development nodes (see: https://wynton.ucsf.edu/hpc/get-started/development-prototyping.html). E.g.:&lt;br /&gt;
&lt;br /&gt;
 ssh dev1&lt;br /&gt;
 export TMPDIR=/scratch&lt;br /&gt;
&lt;br /&gt;
If a log node must be used, then &#039;&#039;/wynton/scratch&#039;&#039; may be used:&lt;br /&gt;
&lt;br /&gt;
 ssh log1&lt;br /&gt;
 export TMPDIR=/wynton/scratch&lt;br /&gt;
&lt;br /&gt;
=== job scheduler environmental variables ===&lt;br /&gt;
&lt;br /&gt;
In order for &#039;&#039;dockopt&#039;&#039; to know which scheduler to use, configure the following environmental variables according to the job scheduler available on your cluster.&lt;br /&gt;
&lt;br /&gt;
==== Slurm ====&lt;br /&gt;
&lt;br /&gt;
E.g., on the UCSF Shoichet Lab Gimel cluster (on any node other than &#039;gimel&#039; itself, such as &#039;gimel5&#039;):&lt;br /&gt;
&lt;br /&gt;
 export SBATCH_EXEC=/usr/bin/sbatch&lt;br /&gt;
 export SQUEUE_EXEC=/usr/bin/squeue&lt;br /&gt;
&lt;br /&gt;
==== SGE ====&lt;br /&gt;
&lt;br /&gt;
On most clusters using SGE, the following should be correct:&lt;br /&gt;
&lt;br /&gt;
 export QSTAT_EXEC=/opt/sge/bin/lx-amd64/qstat&lt;br /&gt;
 export QSUB_EXEC=/opt/sge/bin/lx-amd64/qsub&lt;br /&gt;
 export SGE_SETTINGS=/opt/sge/default/common/settings.sh&lt;br /&gt;
&lt;br /&gt;
===== Note for UCSF researchers =====&lt;br /&gt;
&lt;br /&gt;
The following is necessary on the UCSF Wynton cluster:&lt;br /&gt;
&lt;br /&gt;
 export QSTAT_EXEC=/opt/sge/bin/lx-amd64/qstat&lt;br /&gt;
 export QSUB_EXEC=/opt/sge/bin/lx-amd64/qsub&lt;br /&gt;
 export SGE_SETTINGS=/opt/sge/wynton/common/settings.sh&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;run&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
Once your job has been configured to your liking, navigate to the job directory and run &#039;&#039;dockopt&#039;&#039;:&lt;br /&gt;
 cd &amp;lt;JOB_DIR_NAME&amp;gt;&lt;br /&gt;
 pydock3 dockopt - run &amp;lt;JOB_SCHEDULER_NAME&amp;gt; [--retrodock_job_timeout_minutes=None] [--retrodock_job_max_reattempts=0] [--extra_submission_cmd_params_str=None] [--export_negatives_mol2=False]&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;JOB_SCHEDULER_NAME&amp;gt; is one of:&lt;br /&gt;
* &#039;&#039;sge&#039;&#039;&lt;br /&gt;
* &#039;&#039;slurm&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This will execute the many dockopt subroutines in sequence. Once this is done, the retrodock jobs for all created docking configurations are run in parallel via the scheduler. The state of the program will be printed to standard output as it runs.&lt;br /&gt;
&lt;br /&gt;
Once the dockopt job is complete, the following files will be generated in the job directory:&lt;br /&gt;
* &#039;&#039;report.html&#039;&#039;: contains (1) a histogram comparing the performance of every tested docking configuration against the performance distribution of a random classifier, showing whether the tested configurations are significantly better than random. Because many configurations are tested, a Bonferroni correction is applied to the significance threshold, dividing p=0.01 by the number of tested configurations. (2) ROC, charge, and energy plots of the top docking configurations, comparing positive-class vs. negative-class molecules, (3) box plots of enrichment for every multi-valued config parameter, and (4) heatmaps of enrichment for every pair of multi-valued config parameters.&lt;br /&gt;
* &#039;&#039;results.csv&#039;&#039;: parameter values, criterion values, and other information about each docking configuration.&lt;br /&gt;
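The Bonferroni correction described above is easy to reproduce by hand: with, say, 20 tested configurations the per-configuration threshold becomes 0.01 / 20 = 0.0005. As a one-line sketch (illustrative only, not &#039;&#039;pydock3&#039;&#039; code):&lt;br /&gt;

```python
# Bonferroni-corrected significance threshold used for report.html:
# the base threshold (p = 0.01) divided by the number of tested configurations.
def bonferroni_threshold(n_configurations, alpha=0.01):
    return alpha / n_configurations

print(round(bonferroni_threshold(20), 6))  # 0.0005
```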
&lt;br /&gt;
In addition, some number of the best retrodock jobs will be copied to their own sub-directory &#039;&#039;best_retrodock_jobs/&#039;&#039;. &lt;br /&gt;
&lt;br /&gt;
Within &#039;&#039;best_retrodock_jobs/&#039;&#039;, there are the following files and sub-directories:&lt;br /&gt;
* &#039;&#039;dockfiles/&#039;&#039;: parameter files and &#039;&#039;INDOCK&#039;&#039; for the given docking configuration&lt;br /&gt;
* &#039;&#039;output/&#039;&#039;: contains: &lt;br /&gt;
** joblist&lt;br /&gt;
** sub-directories &#039;&#039;positives/&#039;&#039; (containing &#039;&#039;OUTDOCK&#039;&#039; and &#039;&#039;test.mol2&#039;&#039; files) and &#039;&#039;negatives/&#039;&#039; (containing just &#039;&#039;OUTDOCK&#039;&#039;)&lt;br /&gt;
** log files for the retrodock jobs&lt;br /&gt;
* plot images (e.g., &#039;&#039;roc.png&#039;&#039;)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039; by default, a mol2 file is exported only for positives, not for negatives, in order to prevent disk space issues.&lt;br /&gt;
&lt;br /&gt;
== Note for UCSF Shoichet Lab members ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;pydock3&#039;&#039; is already installed on the following clusters. You can source the provided Python environment scripts to expose the &#039;&#039;pydock3&#039;&#039; executable and declare default environmental variables:&lt;br /&gt;
&lt;br /&gt;
=== Wynton ===&lt;br /&gt;
&lt;br /&gt;
 source /wynton/group/bks/soft/python_envs/env.sh&lt;br /&gt;
&lt;br /&gt;
=== Gimel ===&lt;br /&gt;
&lt;br /&gt;
Only nodes other than &#039;&#039;gimel&#039;&#039; itself are supported, e.g., &#039;&#039;gimel5&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
 ssh gimel5&lt;br /&gt;
 source /nfs/soft/ian/env.sh&lt;/div&gt;</summary>
		<author><name>Ianscottknight</name></author>
	</entry>
	<entry>
		<id>http://wiki.docking.org/index.php?title=Blastermaster_(pydock3_script)&amp;diff=15430</id>
		<title>Blastermaster (pydock3 script)</title>
		<link rel="alternate" type="text/html" href="http://wiki.docking.org/index.php?title=Blastermaster_(pydock3_script)&amp;diff=15430"/>
		<updated>2023-06-09T18:52:21Z</updated>

		<summary type="html">&lt;p&gt;Ianscottknight: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;blastermaster&#039;&#039; allows the generation of a specific docking configuration for a given receptor and ligand.&lt;br /&gt;
&lt;br /&gt;
== Note for UCSF Shoichet Lab members ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;pydock3&#039;&#039; is already installed on the following clusters. You can source the provided Python environment scripts to expose the &#039;&#039;pydock3&#039;&#039; executable:&lt;br /&gt;
&lt;br /&gt;
=== Wynton ===&lt;br /&gt;
&lt;br /&gt;
 source /wynton/group/bks/soft/python_envs/python3.8.5.sh&lt;br /&gt;
&lt;br /&gt;
=== Gimel ===&lt;br /&gt;
&lt;br /&gt;
Only nodes other than &#039;&#039;gimel&#039;&#039; itself are supported, e.g., &#039;&#039;gimel5&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
 ssh gimel5&lt;br /&gt;
 source /nfs/soft/ian/python3.8.5.sh&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;new&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
First you need to create the file structure for your blastermaster job. To do so, simply type&lt;br /&gt;
&lt;br /&gt;
 pydock3 blastermaster - new&lt;br /&gt;
&lt;br /&gt;
By default, the job directory is named &#039;&#039;blastermaster_job&#039;&#039;. To specify a different name, type&lt;br /&gt;
&lt;br /&gt;
 pydock3 blastermaster - new &amp;lt;JOB_DIR_NAME&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The job directory contains two sub-directories: &lt;br /&gt;
# &#039;&#039;working&#039;&#039;: input files, intermediate blaster files, sub-directories for individual blastermaster subroutines&lt;br /&gt;
# &#039;&#039;dockfiles&#039;&#039;: output files (DOCK parameter files &amp;amp; INDOCK)&lt;br /&gt;
&lt;br /&gt;
If your current working directory contains any of the following files, then they will be automatically copied into the working directory within the created job directory. This feature is intended to simplify the process of configuring the blastermaster job.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;rec.pdb&#039;&#039;&lt;br /&gt;
* &#039;&#039;rec.crg.pdb&#039;&#039;&lt;br /&gt;
* &#039;&#039;xtal-lig.pdb&#039;&#039;&lt;br /&gt;
* &#039;&#039;reduce_wwPDB_het_dict.txt&#039;&#039;&lt;br /&gt;
* &#039;&#039;filt.params&#039;&#039;&lt;br /&gt;
* &#039;&#039;radii&#039;&#039;&lt;br /&gt;
* &#039;&#039;amb.crg.oxt&#039;&#039;&lt;br /&gt;
* &#039;&#039;vdw.siz&#039;&#039;&lt;br /&gt;
* &#039;&#039;delphi.def&#039;&#039;&lt;br /&gt;
* &#039;&#039;vdw.parms.amb.mindock&#039;&#039;&lt;br /&gt;
* &#039;&#039;prot.table.ambcrg.ambH&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Only the following are required; default or generated versions of the others will be used if they are not detected.&lt;br /&gt;
* &#039;&#039;rec.pdb&#039;&#039;&lt;br /&gt;
* &#039;&#039;xtal-lig.pdb&#039;&#039;&lt;br /&gt;
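Since only these two files are strictly required, a quick pre-flight check can catch a missing input before the job directory is created (a hypothetical helper, not part of &#039;&#039;blastermaster&#039;&#039;):&lt;br /&gt;

```python
# Report which of the required blastermaster inputs are absent from a directory.
import os

REQUIRED = ["rec.pdb", "xtal-lig.pdb"]

def missing_inputs(directory="."):
    """Return required input files not found in `directory`."""
    return [f for f in REQUIRED
            if not os.path.isfile(os.path.join(directory, f))]

# In an empty directory, both inputs are reported missing:
import tempfile
with tempfile.TemporaryDirectory() as d:
    print(missing_inputs(d))  # ['rec.pdb', 'xtal-lig.pdb']
```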
&lt;br /&gt;
If you would like to use files not present in your current working directory, then copy them into your job&#039;s working directory, e.g.:&lt;br /&gt;
 cp &amp;lt;FILE_PATH&amp;gt; &amp;lt;JOB_DIR_NAME&amp;gt;/working/&lt;br /&gt;
&lt;br /&gt;
Finally, configure the &#039;&#039;blastermaster_config.yaml&#039;&#039; file in the job directory to your specifications. The parameters in this file govern the behavior of blastermaster.&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;run&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
Once your job has been configured to your liking, navigate to the job directory and run blastermaster:&lt;br /&gt;
 cd &amp;lt;JOB_DIR_NAME&amp;gt;&lt;br /&gt;
 pydock3 blastermaster - run&lt;br /&gt;
&lt;br /&gt;
This will execute the many blastermaster subroutines in sequence. The state of the program will be printed to standard output as it runs.&lt;/div&gt;</summary>
		<author><name>Ianscottknight</name></author>
	</entry>
	<entry>
		<id>http://wiki.docking.org/index.php?title=Dockopt_(pydock3_script)&amp;diff=15423</id>
		<title>Dockopt (pydock3 script)</title>
		<link rel="alternate" type="text/html" href="http://wiki.docking.org/index.php?title=Dockopt_(pydock3_script)&amp;diff=15423"/>
		<updated>2023-06-05T19:15:07Z</updated>

		<summary type="html">&lt;p&gt;Ianscottknight: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;dockopt&#039;&#039; allows the generation of many different docking configurations which are then evaluated &amp;amp; analyzed in parallel using a specified job scheduler (e.g. Slurm). If you are a Shoichet Lab user, please see a special section for you, below.&lt;br /&gt;
&lt;br /&gt;
To use DOCK 3.8, you must first license it and install it.&lt;br /&gt;
[[DOCK 3.8:How to install pydock3]]&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;new&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
Prepare &#039;&#039;rec.pdb&#039;&#039; and &#039;&#039;xtal-lig.pdb&#039;&#039; as described in Bender et al., 2021 (https://pubmed.ncbi.nlm.nih.gov/34561691/),&lt;br /&gt;
or download pre-prepared sample files from dudez2022.docking.org.&lt;br /&gt;
&lt;br /&gt;
Be sure that you are in the directory containing the required input files: &lt;br /&gt;
* &#039;&#039;rec.pdb&#039;&#039; or &#039;&#039;rec.crg.pdb&#039;&#039;&lt;br /&gt;
* &#039;&#039;xtal-lig.pdb&#039;&#039;&lt;br /&gt;
* &#039;&#039;positives.tgz&#039;&#039;&lt;br /&gt;
* &#039;&#039;negatives.tgz&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Note the inclusion of &#039;&#039;positives.tgz&#039;&#039; and &#039;&#039;negatives.tgz&#039;&#039;. Each of these is a tarball of DB2 files, which may optionally be gzipped (with extension .db2.gz). Each tarball represents a binary class for the binary classification task that the docking model is trained on. The positive class contains the molecules that you want the docking model to preferentially assign &#039;&#039;favorable&#039;&#039; docking scores to. The negative class contains the molecules that you want the docking model to preferentially assign &#039;&#039;unfavorable&#039;&#039; scores to. The most common strategy is to use a set of known ligands as the positive class and a larger set of property-matched decoys as the negative class (a decoy-to-active ratio of 50:1 is standard), but other strategies are supported. For example, to create a docking model that preferentially assigns favorable scores to agonists over antagonists, a set of agonists can be used as the positive class and a set of antagonists as the negative class.&lt;br /&gt;
&lt;br /&gt;
Therefore, you need to build the molecules yourself (see: https://tldr.docking.org/start/build3d38). Each tarball should contain only DB2 files. For example, if one has a directory &#039;&#039;actives/&#039;&#039; containing only DB2 files to use as the positive class, then &#039;&#039;positives.tgz&#039;&#039; should be created as follows:&lt;br /&gt;
&lt;br /&gt;
 cd actives/&lt;br /&gt;
 tar -czf positives.tgz *.db2*&lt;br /&gt;
&lt;br /&gt;
Similarly, for a directory &#039;&#039;decoys/&#039;&#039; containing only DB2 files to use as the negative class, &#039;&#039;negatives.tgz&#039;&#039; should be created as follows:&lt;br /&gt;
&lt;br /&gt;
 cd decoys/&lt;br /&gt;
 tar -czf negatives.tgz *.db2*&lt;br /&gt;
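Because each tarball must contain only DB2 files, it can be worth verifying an archive before using it. A minimal check with the standard library (illustrative, not part of &#039;&#039;pydock3&#039;&#039;):&lt;br /&gt;

```python
# List members of a .tgz archive that are not DB2 files (.db2 / .db2.gz),
# mirroring the "only DB2 files" requirement for positives.tgz / negatives.tgz.
import tarfile

def non_db2_members(tgz_path):
    with tarfile.open(tgz_path, "r:gz") as tar:
        return [m.name for m in tar.getmembers()
                if m.isfile() and not m.name.endswith((".db2", ".db2.gz"))]
```

An empty result means the tarball is clean; anything listed should be removed and the archive rebuilt.&lt;br /&gt;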
&lt;br /&gt;
To create the file structure for your dockopt job, simply type&lt;br /&gt;
&lt;br /&gt;
 pydock3 dockopt - new [--job_dir_name=dockopt_job]&lt;br /&gt;
&lt;br /&gt;
By default, the job directory is named &#039;&#039;dockopt_job&#039;&#039;. To specify a different name, use the &amp;quot;--job_dir_name&amp;quot; flag. E.g.:&lt;br /&gt;
&lt;br /&gt;
 pydock3 dockopt - new --job_dir_name=dockopt_job_2&lt;br /&gt;
&lt;br /&gt;
The job directory contains two sub-directories: &lt;br /&gt;
# &#039;&#039;working&#039;&#039;: input files, intermediate blaster files, sub-directories for individual blastermaster subroutines&lt;br /&gt;
# &#039;&#039;retrodock_jobs&#039;&#039;: individual retrodock jobs for each docking configuration&lt;br /&gt;
&lt;br /&gt;
The key difference between the working directories of &#039;&#039;blastermaster&#039;&#039; and &#039;&#039;dockopt&#039;&#039; is that the working directory of &#039;&#039;dockopt&#039;&#039; may contain multiple variants of the blaster files (suffixed by a number, e.g. &amp;quot;box_1&amp;quot;). These variant files are used to create the different docking configurations specified by the multi-valued entries of &#039;&#039;dockopt_config.yaml&#039;&#039;. They are created efficiently, such that the same variant used in multiple docking configurations is not created more than once. &lt;br /&gt;
&lt;br /&gt;
If your current working directory contains any of the following files, then they will be automatically copied into the working directory within the created job directory. This feature is intended to simplify the process of configuring the dockopt job.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;rec.pdb&#039;&#039;&lt;br /&gt;
* &#039;&#039;rec.crg.pdb&#039;&#039;&lt;br /&gt;
* &#039;&#039;xtal-lig.pdb&#039;&#039;&lt;br /&gt;
* &#039;&#039;reduce_wwPDB_het_dict.txt&#039;&#039;&lt;br /&gt;
* &#039;&#039;filt.params&#039;&#039;&lt;br /&gt;
* &#039;&#039;radii&#039;&#039;&lt;br /&gt;
* &#039;&#039;amb.crg.oxt&#039;&#039;&lt;br /&gt;
* &#039;&#039;vdw.siz&#039;&#039;&lt;br /&gt;
* &#039;&#039;delphi.def&#039;&#039;&lt;br /&gt;
* &#039;&#039;vdw.parms.amb.mindock&#039;&#039;&lt;br /&gt;
* &#039;&#039;prot.table.ambcrg.ambH&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Only the following are required; default or generated versions of the others will be used if they are not detected.&lt;br /&gt;
* &#039;&#039;rec.pdb&#039;&#039; or &#039;&#039;rec.crg.pdb&#039;&#039; (exactly one is needed; if both are present, only &#039;&#039;rec.crg.pdb&#039;&#039; is used)&lt;br /&gt;
* &#039;&#039;xtal-lig.pdb&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
If you would like to use files not present in your current working directory, copy them into your job&#039;s working directory, e.g.:&lt;br /&gt;
 cp &amp;lt;FILE_PATH&amp;gt; &amp;lt;JOB_DIR_NAME&amp;gt;/working/&lt;br /&gt;
&lt;br /&gt;
Finally, configure the &#039;&#039;dockopt_config.yaml&#039;&#039; file in the job directory to your specifications. The parameters in this file govern the behavior of dockopt.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039; The &#039;&#039;dockopt_config.yaml&#039;&#039; file differs from the &#039;&#039;blastermaster_config.yaml&#039;&#039; file in that every parameter of the former may accept either a single value or a &#039;&#039;list of comma-separated values&#039;&#039;, which indicates a pool of values to attempt for that parameter. Multiple such multi-valued parameters may be provided, and all unique resultant docking configurations will be attempted. &lt;br /&gt;
&lt;br /&gt;
Single-valued YAML line format:&lt;br /&gt;
&lt;br /&gt;
 distance_to_surface: 1.0&lt;br /&gt;
&lt;br /&gt;
Multi-valued YAML line format:&lt;br /&gt;
&lt;br /&gt;
 distance_to_surface: [1.0, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9]&lt;br /&gt;
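Every multi-valued parameter multiplies the number of configurations that &#039;&#039;dockopt&#039;&#039; will generate: the total is the size of the Cartesian product of all value pools. A sketch of the counting (parameter names and values here are illustrative only):&lt;br /&gt;

```python
# Count the docking configurations implied by a dockopt-style config:
# the Cartesian product of each parameter's pool of values.
from itertools import product
from math import prod

config = {  # illustrative values, not a real dockopt_config.yaml
    "distance_to_surface": [1.0, 1.1, 1.2],
    "thin_spheres_elec": [True, False],
    "penetration": 0.0,  # single-valued parameter
}

# Normalize single values into one-element pools, then count combinations.
pools = {k: v if isinstance(v, list) else [v] for k, v in config.items()}
n_configurations = prod(len(v) for v in pools.values())
print(n_configurations)  # 3 * 2 * 1 = 6

# The individual configurations themselves:
combos = [dict(zip(pools, values)) for values in product(*pools.values())]
```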
&lt;br /&gt;
== Environmental variables ==&lt;br /&gt;
&lt;br /&gt;
=== TMPDIR ===&lt;br /&gt;
&lt;br /&gt;
Designate where temporary job files should be placed. E.g.:&lt;br /&gt;
&lt;br /&gt;
 export TMPDIR=/scratch&lt;br /&gt;
&lt;br /&gt;
==== Note for UCSF researchers ====&lt;br /&gt;
&lt;br /&gt;
On the Wynton cluster, &#039;&#039;/scratch&#039;&#039; only exists on development nodes (not log nodes). Therefore, we recommend running on development nodes (see: https://wynton.ucsf.edu/hpc/get-started/development-prototyping.html). E.g.:&lt;br /&gt;
&lt;br /&gt;
 ssh dev1&lt;br /&gt;
 export TMPDIR=/scratch&lt;br /&gt;
&lt;br /&gt;
If a log node must be used, then &#039;&#039;/wynton/scratch&#039;&#039; may be used:&lt;br /&gt;
&lt;br /&gt;
 ssh log1&lt;br /&gt;
 export TMPDIR=/wynton/scratch&lt;br /&gt;
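A short check that the chosen TMPDIR actually exists and is writable can save a failed submission (generic Python, not &#039;&#039;pydock3&#039;&#039;):&lt;br /&gt;

```python
# Verify that the directory named by TMPDIR exists and is writable.
import os

def tmpdir_usable(path):
    return os.path.isdir(path) and os.access(path, os.W_OK)

print(tmpdir_usable(os.environ.get("TMPDIR", "/tmp")))
```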
&lt;br /&gt;
=== job scheduler environmental variables ===&lt;br /&gt;
&lt;br /&gt;
So that &#039;&#039;dockopt&#039;&#039; knows which scheduler to use, set the following environmental variables according to the job scheduler your cluster runs.&lt;br /&gt;
&lt;br /&gt;
==== Slurm ====&lt;br /&gt;
&lt;br /&gt;
E.g., on the UCSF Shoichet Lab Gimel cluster (on any node other than &#039;gimel&#039; itself, such as &#039;gimel5&#039;):&lt;br /&gt;
&lt;br /&gt;
 export SBATCH_EXEC=/usr/bin/sbatch&lt;br /&gt;
 export SQUEUE_EXEC=/usr/bin/squeue&lt;br /&gt;
&lt;br /&gt;
==== SGE ====&lt;br /&gt;
&lt;br /&gt;
On most clusters using SGE the following should be correct:&lt;br /&gt;
&lt;br /&gt;
 export QSTAT_EXEC=/opt/sge/bin/lx-amd64/qstat&lt;br /&gt;
 export QSUB_EXEC=/opt/sge/bin/lx-amd64/qsub&lt;br /&gt;
 export SGE_SETTINGS=/opt/sge/default/common/settings.sh&lt;br /&gt;
&lt;br /&gt;
===== Note for UCSF researchers =====&lt;br /&gt;
&lt;br /&gt;
The following is necessary on the UCSF Wynton cluster:&lt;br /&gt;
&lt;br /&gt;
 export QSTAT_EXEC=/opt/sge/bin/lx-amd64/qstat&lt;br /&gt;
 export QSUB_EXEC=/opt/sge/bin/lx-amd64/qsub&lt;br /&gt;
 export SGE_SETTINGS=/opt/sge/wynton/common/settings.sh&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;run&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
Once your job has been configured to your liking, navigate to the job directory and run &#039;&#039;dockopt&#039;&#039;:&lt;br /&gt;
 cd &amp;lt;JOB_DIR_NAME&amp;gt;&lt;br /&gt;
 pydock3 dockopt - run &amp;lt;JOB_SCHEDULER_NAME&amp;gt; [--retrodock_job_timeout_minutes=None] [--retrodock_job_max_reattempts=0] [--extra_submission_cmd_params_str=None] [--export_negatives_mol2=False]&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;JOB_SCHEDULER_NAME&amp;gt; is one of:&lt;br /&gt;
* &#039;&#039;sge&#039;&#039;&lt;br /&gt;
* &#039;&#039;slurm&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This will execute the many dockopt subroutines in sequence. Once this is done, the retrodock jobs for all created docking configurations are run in parallel via the scheduler. The state of the program will be printed to standard output as it runs.&lt;br /&gt;
&lt;br /&gt;
Once the dockopt job is complete, the following files will be generated in the job directory:&lt;br /&gt;
* &#039;&#039;report.html&#039;&#039;: contains (1) a histogram comparing the performance of every tested docking configuration against the performance distribution of a random classifier, showing whether the tested configurations are significantly better than random. Because many configurations are tested, a Bonferroni correction is applied to the significance threshold, dividing p=0.01 by the number of tested configurations. (2) ROC, charge, and energy plots of the top docking configurations, comparing positive-class vs. negative-class molecules, (3) box plots of enrichment for every multi-valued config parameter, and (4) heatmaps of enrichment for every pair of multi-valued config parameters.&lt;br /&gt;
* &#039;&#039;results.csv&#039;&#039;: parameter values, criterion values, and other information about each docking configuration.&lt;br /&gt;
&lt;br /&gt;
In addition, some number of the best retrodock jobs will be copied to their own sub-directory &#039;&#039;best_retrodock_jobs/&#039;&#039;. &lt;br /&gt;
&lt;br /&gt;
Within &#039;&#039;best_retrodock_jobs/&#039;&#039;, there are the following files and sub-directories:&lt;br /&gt;
* &#039;&#039;dockfiles/&#039;&#039;: parameter files and &#039;&#039;INDOCK&#039;&#039; for the given docking configuration&lt;br /&gt;
* &#039;&#039;output/&#039;&#039;: contains: &lt;br /&gt;
** joblist&lt;br /&gt;
** sub-directories &#039;&#039;positives/&#039;&#039; (containing &#039;&#039;OUTDOCK&#039;&#039; and &#039;&#039;test.mol2&#039;&#039; files) and &#039;&#039;negatives/&#039;&#039; (containing just &#039;&#039;OUTDOCK&#039;&#039;)&lt;br /&gt;
** log files for the retrodock jobs&lt;br /&gt;
* plot images (e.g., &#039;&#039;roc.png&#039;&#039;)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039; by default, a mol2 file is exported only for positives, not for negatives, in order to prevent disk space issues.&lt;br /&gt;
&lt;br /&gt;
== Note for UCSF Shoichet Lab members ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;pydock3&#039;&#039; is already installed on the following clusters. You can source the provided Python environment scripts to expose the &#039;&#039;pydock3&#039;&#039; executable and declare default environmental variables:&lt;br /&gt;
&lt;br /&gt;
=== Wynton ===&lt;br /&gt;
&lt;br /&gt;
 source /wynton/group/bks/soft/python_envs/env.sh&lt;br /&gt;
&lt;br /&gt;
=== Gimel ===&lt;br /&gt;
&lt;br /&gt;
Only nodes other than &#039;&#039;gimel&#039;&#039; itself are supported, e.g., &#039;&#039;gimel5&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
 ssh gimel5&lt;br /&gt;
 source /nfs/soft/ian/env.sh&lt;/div&gt;</summary>
		<author><name>Ianscottknight</name></author>
	</entry>
	<entry>
		<id>http://wiki.docking.org/index.php?title=Dockopt_(pydock3_script)&amp;diff=15422</id>
		<title>Dockopt (pydock3 script)</title>
		<link rel="alternate" type="text/html" href="http://wiki.docking.org/index.php?title=Dockopt_(pydock3_script)&amp;diff=15422"/>
		<updated>2023-06-05T03:43:04Z</updated>

		<summary type="html">&lt;p&gt;Ianscottknight: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;dockopt&#039;&#039; allows the generation of many different docking configurations which are then evaluated &amp;amp; analyzed in parallel using a specified job scheduler (e.g. Slurm). If you are a Shoichet Lab user, please see a special section for you, below.&lt;br /&gt;
&lt;br /&gt;
To use DOCK 3.8, you must first license it and install it.&lt;br /&gt;
[[DOCK 3.8:How to install pydock3]]&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;new&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
Prepare &#039;&#039;rec.pdb&#039;&#039; and &#039;&#039;xtal-lig.pdb&#039;&#039; as described in Bender et al., 2021 (https://pubmed.ncbi.nlm.nih.gov/34561691/),&lt;br /&gt;
or download pre-prepared sample files from dudez2022.docking.org.&lt;br /&gt;
&lt;br /&gt;
Be sure that you are in the directory containing the required input files: &lt;br /&gt;
* &#039;&#039;rec.pdb&#039;&#039; or &#039;&#039;rec.crg.pdb&#039;&#039;&lt;br /&gt;
* &#039;&#039;xtal-lig.pdb&#039;&#039;&lt;br /&gt;
* &#039;&#039;positives.tgz&#039;&#039;&lt;br /&gt;
* &#039;&#039;negatives.tgz&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Note the inclusion of &#039;&#039;positives.tgz&#039;&#039; and &#039;&#039;negatives.tgz&#039;&#039;. Each of these is a tarball of DB2 files, which may optionally be gzipped (with extension .db2.gz). Each tarball represents a binary class for the binary classification task that the docking model is trained on. The positive class contains the molecules that you want the docking model to preferentially assign &#039;&#039;favorable&#039;&#039; docking scores to. The negative class contains the molecules that you want the docking model to preferentially assign &#039;&#039;unfavorable&#039;&#039; scores to. The most common strategy is to use a set of known ligands as the positive class and a larger set of property-matched decoys as the negative class (a decoy-to-active ratio of 50:1 is standard), but other strategies are supported. For example, to create a docking model that preferentially assigns favorable scores to agonists over antagonists, a set of agonists can be used as the positive class and a set of antagonists as the negative class.&lt;br /&gt;
&lt;br /&gt;
Therefore, you need to build the molecules yourself (see: https://tldr.docking.org/start/build3d38). Each tarball should contain only DB2 files. For example, if one has a directory &#039;&#039;actives/&#039;&#039; containing only DB2 files to use as the positive class, then &#039;&#039;positives.tgz&#039;&#039; should be created as follows:&lt;br /&gt;
&lt;br /&gt;
 cd actives/&lt;br /&gt;
 tar -czf positives.tgz *.db2*&lt;br /&gt;
&lt;br /&gt;
Similarly, for a directory &#039;&#039;decoys/&#039;&#039; containing only DB2 files to use as the negative class, &#039;&#039;negatives.tgz&#039;&#039; should be created as follows:&lt;br /&gt;
&lt;br /&gt;
 cd decoys/&lt;br /&gt;
 tar -czf negatives.tgz *.db2*&lt;br /&gt;
&lt;br /&gt;
To create the file structure for your dockopt job, simply type&lt;br /&gt;
&lt;br /&gt;
 pydock3 dockopt - new [--job_dir_name=dockopt_job]&lt;br /&gt;
&lt;br /&gt;
By default, the job directory is named &#039;&#039;dockopt_job&#039;&#039;. To specify a different name, use the &amp;quot;--job_dir_name&amp;quot; flag. E.g.:&lt;br /&gt;
&lt;br /&gt;
 pydock3 dockopt - new --job_dir_name=dockopt_job_2&lt;br /&gt;
&lt;br /&gt;
The job directory contains two sub-directories: &lt;br /&gt;
# &#039;&#039;working&#039;&#039;: input files, intermediate blaster files, sub-directories for individual blastermaster subroutines&lt;br /&gt;
# &#039;&#039;retrodock_jobs&#039;&#039;: individual retrodock jobs for each docking configuration&lt;br /&gt;
&lt;br /&gt;
The key difference between the working directories of &#039;&#039;blastermaster&#039;&#039; and &#039;&#039;dockopt&#039;&#039; is that the working directory of &#039;&#039;dockopt&#039;&#039; may contain multiple variants of the blaster files (suffixed by a number, e.g. &amp;quot;box_1&amp;quot;). These variant files are used to create the different docking configurations specified by the multi-valued entries of &#039;&#039;dockopt_config.yaml&#039;&#039;. They are created efficiently, such that the same variant used in multiple docking configurations is not created more than once. &lt;br /&gt;
&lt;br /&gt;
If your current working directory contains any of the following files, then they will be automatically copied into the working directory within the created job directory. This feature is intended to simplify the process of configuring the dockopt job.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;rec.pdb&#039;&#039;&lt;br /&gt;
* &#039;&#039;rec.crg.pdb&#039;&#039;&lt;br /&gt;
* &#039;&#039;xtal-lig.pdb&#039;&#039;&lt;br /&gt;
* &#039;&#039;reduce_wwPDB_het_dict.txt&#039;&#039;&lt;br /&gt;
* &#039;&#039;filt.params&#039;&#039;&lt;br /&gt;
* &#039;&#039;radii&#039;&#039;&lt;br /&gt;
* &#039;&#039;amb.crg.oxt&#039;&#039;&lt;br /&gt;
* &#039;&#039;vdw.siz&#039;&#039;&lt;br /&gt;
* &#039;&#039;delphi.def&#039;&#039;&lt;br /&gt;
* &#039;&#039;vdw.parms.amb.mindock&#039;&#039;&lt;br /&gt;
* &#039;&#039;prot.table.ambcrg.ambH&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Only the following are required; default or generated versions of the others will be used if they are not detected.&lt;br /&gt;
* &#039;&#039;rec.pdb&#039;&#039; or &#039;&#039;rec.crg.pdb&#039;&#039; (exactly one is needed; if both are present, only &#039;&#039;rec.crg.pdb&#039;&#039; is used)&lt;br /&gt;
* &#039;&#039;xtal-lig.pdb&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
If you would like to use files not present in your current working directory, copy them into your job&#039;s working directory, e.g.:&lt;br /&gt;
 cp &amp;lt;FILE_PATH&amp;gt; &amp;lt;JOB_DIR_NAME&amp;gt;/working/&lt;br /&gt;
&lt;br /&gt;
Finally, configure the &#039;&#039;dockopt_config.yaml&#039;&#039; file in the job directory to your specifications. The parameters in this file govern the behavior of dockopt.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039; The &#039;&#039;dockopt_config.yaml&#039;&#039; file differs from the &#039;&#039;blastermaster_config.yaml&#039;&#039; file in that every parameter of the former may accept either a single value or a &#039;&#039;list of comma-separated values&#039;&#039;, which indicates a pool of values to attempt for that parameter. Multiple such multi-valued parameters may be provided, and all unique resultant docking configurations will be attempted. &lt;br /&gt;
&lt;br /&gt;
Single-valued YAML line format:&lt;br /&gt;
&lt;br /&gt;
 distance_to_surface: 1.0&lt;br /&gt;
&lt;br /&gt;
Multi-valued YAML line format:&lt;br /&gt;
&lt;br /&gt;
 distance_to_surface: [1.0, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9]&lt;br /&gt;
&lt;br /&gt;
== Environmental variables ==&lt;br /&gt;
&lt;br /&gt;
=== TMPDIR ===&lt;br /&gt;
&lt;br /&gt;
Designate where temporary job files should be placed. E.g.:&lt;br /&gt;
&lt;br /&gt;
 export TMPDIR=/scratch&lt;br /&gt;
&lt;br /&gt;
==== Note for UCSF researchers ====&lt;br /&gt;
&lt;br /&gt;
On the Wynton cluster, &#039;&#039;/scratch&#039;&#039; only exists on development nodes (not log nodes). Therefore, we recommend running on development nodes (see: https://wynton.ucsf.edu/hpc/get-started/development-prototyping.html). E.g.:&lt;br /&gt;
&lt;br /&gt;
 ssh dev1&lt;br /&gt;
 export TMPDIR=/scratch&lt;br /&gt;
&lt;br /&gt;
If a log node must be used, then &#039;&#039;/wynton/scratch&#039;&#039; may be used:&lt;br /&gt;
&lt;br /&gt;
 ssh log1&lt;br /&gt;
 export TMPDIR=/wynton/scratch&lt;br /&gt;
&lt;br /&gt;
=== job scheduler environmental variables ===&lt;br /&gt;
&lt;br /&gt;
So that &#039;&#039;dockopt&#039;&#039; knows which scheduler to use, set the following environmental variables according to the job scheduler your cluster runs.&lt;br /&gt;
&lt;br /&gt;
==== Slurm ====&lt;br /&gt;
&lt;br /&gt;
E.g., on the UCSF Shoichet Lab Gimel cluster (on any node other than &#039;gimel&#039; itself, such as &#039;gimel5&#039;):&lt;br /&gt;
&lt;br /&gt;
 export SBATCH_EXEC=/usr/bin/sbatch&lt;br /&gt;
 export SQUEUE_EXEC=/usr/bin/squeue&lt;br /&gt;
&lt;br /&gt;
==== SGE ====&lt;br /&gt;
&lt;br /&gt;
On most clusters using SGE the following should be correct:&lt;br /&gt;
&lt;br /&gt;
 export QSTAT_EXEC=/opt/sge/bin/lx-amd64/qstat&lt;br /&gt;
 export QSUB_EXEC=/opt/sge/bin/lx-amd64/qsub&lt;br /&gt;
 export SGE_SETTINGS=/opt/sge/default/common/settings.sh&lt;br /&gt;
&lt;br /&gt;
===== Note for UCSF researchers =====&lt;br /&gt;
&lt;br /&gt;
The following is necessary on the UCSF Wynton cluster:&lt;br /&gt;
&lt;br /&gt;
 export QSTAT_EXEC=/opt/sge/bin/lx-amd64/qstat&lt;br /&gt;
 export QSUB_EXEC=/opt/sge/bin/lx-amd64/qsub&lt;br /&gt;
 export SGE_SETTINGS=/opt/sge/wynton/common/settings.sh&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;run&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
Once your job has been configured to your liking, navigate to the job directory and run &#039;&#039;dockopt&#039;&#039;:&lt;br /&gt;
 cd &amp;lt;JOB_DIR_NAME&amp;gt;&lt;br /&gt;
 pydock3 dockopt - run &amp;lt;JOB_SCHEDULER_NAME&amp;gt; [--retrodock_job_timeout_minutes=None] [--retrodock_job_max_reattempts=0] [--extra_submission_cmd_params_str=None] [--export_negatives_mol2=False]&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;JOB_SCHEDULER_NAME&amp;gt; is one of:&lt;br /&gt;
* &#039;&#039;sge&#039;&#039;&lt;br /&gt;
* &#039;&#039;slurm&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This will execute the many dockopt subroutines in sequence. Once this is done, the retrodock jobs for all created docking configurations are run in parallel via the scheduler. The state of the program will be printed to standard output as it runs.&lt;br /&gt;
&lt;br /&gt;
Once the dockopt job is complete, the following files will be generated in the job directory:&lt;br /&gt;
* &#039;&#039;report.html&#039;&#039;: contains (1) a histogram comparing the performance of every tested docking configuration against the performance distribution of a random classifier, showing whether the tested configurations are significantly better than random. Because many configurations are tested, a Bonferroni correction is applied to the significance threshold, dividing p=0.01 by the number of tested configurations. (2) ROC, charge, and energy plots of the top docking configurations, comparing positive-class vs. negative-class molecules, (3) box plots of enrichment for every multi-valued config parameter, and (4) heatmaps of enrichment for every pair of multi-valued config parameters.&lt;br /&gt;
* &#039;&#039;results.csv&#039;&#039;: parameter values, criterion values, and other information about each docking configuration.&lt;br /&gt;
&lt;br /&gt;
In addition, some number of the best retrodock jobs will be copied to their own sub-directory &#039;&#039;best_retrodock_jobs/&#039;&#039;. &lt;br /&gt;
&lt;br /&gt;
Within &#039;&#039;best_retrodock_jobs/&#039;&#039;, there are the following files and sub-directories:&lt;br /&gt;
* &#039;&#039;dockfiles/&#039;&#039;: parameter files and INDOCK for the given docking configuration&lt;br /&gt;
* &#039;&#039;output/&#039;&#039;: contains: &lt;br /&gt;
** joblist&lt;br /&gt;
** sub-directories &#039;&#039;1/&#039;&#039; for positives and &#039;&#039;2/&#039;&#039; for negatives (the former containing OUTDOCK and test.mol2 files, the latter containing just OUTDOCK)&lt;br /&gt;
** log files for the retrodock jobs&lt;br /&gt;
* &#039;&#039;roc.png&#039;&#039;: the ROC enrichment curve (log-scaled x-axis) for the given docking configuration&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039; by default, a mol2 file is exported only for positives (&#039;&#039;output/1/&#039;&#039;), not for negatives (&#039;&#039;output/2/&#039;&#039;), in order to prevent disk space issues.&lt;br /&gt;
&lt;br /&gt;
== Note for UCSF Shoichet Lab members ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;pydock3&#039;&#039; is already installed on the following clusters. You can source the provided Python environment scripts to expose the &#039;&#039;pydock3&#039;&#039; executable and declare default environmental variables:&lt;br /&gt;
&lt;br /&gt;
=== Wynton ===&lt;br /&gt;
&lt;br /&gt;
  source /wynton/group/bks/soft/python_envs/env.sh&lt;br /&gt;
&lt;br /&gt;
=== Gimel ===&lt;br /&gt;
&lt;br /&gt;
Only nodes other than &#039;&#039;gimel&#039;&#039; itself are supported, e.g., &#039;&#039;gimel5&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
 ssh gimel5&lt;br /&gt;
 source /nfs/soft/ian/env.sh&lt;/div&gt;</summary>
		<author><name>Ianscottknight</name></author>
	</entry>
	<entry>
		<id>http://wiki.docking.org/index.php?title=Dockopt_(pydock3_script)&amp;diff=15421</id>
		<title>Dockopt (pydock3 script)</title>
		<link rel="alternate" type="text/html" href="http://wiki.docking.org/index.php?title=Dockopt_(pydock3_script)&amp;diff=15421"/>
		<updated>2023-06-05T01:49:17Z</updated>

		<summary type="html">&lt;p&gt;Ianscottknight: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;dockopt&#039;&#039; allows the generation of many different docking configurations, which are then evaluated &amp;amp; analyzed in parallel using a specified job scheduler (e.g. Slurm). If you are a Shoichet Lab user, please see the dedicated section below.&lt;br /&gt;
&lt;br /&gt;
To use DOCK 3.8, you must first license it and install it.&lt;br /&gt;
[[DOCK 3.8:How to install pydock3]]&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;new&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
Prepare &#039;&#039;rec.pdb&#039;&#039; and &#039;&#039;xtal-lig.pdb&#039;&#039; as described in Bender et al., 2021: https://pubmed.ncbi.nlm.nih.gov/34561691/&lt;br /&gt;
Alternatively, download pre-prepared sample files from dudez2022.docking.org.&lt;br /&gt;
&lt;br /&gt;
Be sure that you are in the directory containing the required input files: &lt;br /&gt;
* &#039;&#039;rec.pdb&#039;&#039; or &#039;&#039;rec.crg.pdb&#039;&#039;&lt;br /&gt;
* &#039;&#039;xtal-lig.pdb&#039;&#039;&lt;br /&gt;
* &#039;&#039;positives.tgz&#039;&#039;&lt;br /&gt;
* &#039;&#039;negatives.tgz&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Note the inclusion of &#039;&#039;positives.tgz&#039;&#039; and &#039;&#039;negatives.tgz&#039;&#039;. Each of these is a tarball of DB2 files. Each tarball represents a binary class for the binary classification task that the docking model is trained on. The positive class contains the molecules that you want the docking model to preferentially assign &#039;&#039;favorable&#039;&#039; docking scores to. The negative class contains the molecules that you want the docking model to preferentially assign &#039;&#039;unfavorable&#039;&#039; scores to. The most common strategy is to use a set of known ligands as the positive class and a larger set of property-matched decoys for the negative class (a decoy-to-active ratio of 50:1 is standard), but other strategies are supported. For example, to create a docking model that preferentially assigns favorable scores to agonists over antagonists, a set of agonists can be used as the positive class and a set of antagonists can be used for the negative class.&lt;br /&gt;
&lt;br /&gt;
You must build the molecules yourself (see: https://tldr.docking.org/start/build3d38). Each tarball should contain only DB2 files. For example, if you have a directory &#039;&#039;actives/&#039;&#039; containing only DB2 files to use as the positive class, create &#039;&#039;positives.tgz&#039;&#039; as follows:&lt;br /&gt;
&lt;br /&gt;
 cd actives/&lt;br /&gt;
 tar -czf positives.tgz *.db2*&lt;br /&gt;
&lt;br /&gt;
Similarly, for a directory &#039;&#039;decoys/&#039;&#039; containing only DB2 files to use as the negative class, &#039;&#039;negatives.tgz&#039;&#039; should be created as follows:&lt;br /&gt;
&lt;br /&gt;
 cd decoys/&lt;br /&gt;
 tar -czf negatives.tgz *.db2*&lt;br /&gt;
&lt;br /&gt;
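The tarball-creation steps above can be sanity-checked end to end. A minimal shell sketch (the directory and file names are illustrative; your DB2 files will differ):

```shell
# Create a toy actives/ directory, pack it the same way as positives.tgz,
# then list the archive to confirm it contains only DB2 files.
mkdir -p actives
touch actives/ZINC000001.db2.gz actives/ZINC000002.db2.gz
(cd actives/; tar -czf ../positives.tgz *.db2*)
tar -tzf positives.tgz
```

Listing the archive with &#039;&#039;tar -tzf&#039;&#039; before submitting the job is a cheap way to catch stray non-DB2 files.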
To create the file structure for your dockopt job, simply type&lt;br /&gt;
&lt;br /&gt;
 pydock3 dockopt - new [--job_dir_name=dockopt_job]&lt;br /&gt;
&lt;br /&gt;
By default, the job directory is named &#039;&#039;dockopt_job&#039;&#039;. To specify a different name, use the &amp;quot;--job_dir_name&amp;quot; flag. E.g.:&lt;br /&gt;
&lt;br /&gt;
 pydock3 dockopt - new --job_dir_name=dockopt_job_2&lt;br /&gt;
&lt;br /&gt;
The job directory contains two sub-directories: &lt;br /&gt;
# &#039;&#039;working&#039;&#039;: input files, intermediate blaster files, sub-directories for individual blastermaster subroutines&lt;br /&gt;
# &#039;&#039;retrodock_jobs&#039;&#039;: individual retrodock jobs for each docking configuration&lt;br /&gt;
&lt;br /&gt;
The key difference between the working directories of &#039;&#039;blastermaster&#039;&#039; and &#039;&#039;dockopt&#039;&#039; is that the working directory of &#039;&#039;dockopt&#039;&#039; may contain multiple variants of the blaster files (suffixed by a number, e.g. &amp;quot;box_1&amp;quot;). These variant files are used to create the different docking configurations specified by the multi-valued entries of &#039;&#039;dockopt_config.yaml&#039;&#039;. They are created efficiently, such that the same variant used in multiple docking configurations is not created more than once. &lt;br /&gt;
&lt;br /&gt;
If your current working directory contains any of the following files, then they will be automatically copied into the working directory within the created job directory. This feature is intended to simplify the process of configuring the dockopt job.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;rec.pdb&#039;&#039;&lt;br /&gt;
* &#039;&#039;rec.crg.pdb&#039;&#039;&lt;br /&gt;
* &#039;&#039;xtal-lig.pdb&#039;&#039;&lt;br /&gt;
* &#039;&#039;reduce_wwPDB_het_dict.txt&#039;&#039;&lt;br /&gt;
* &#039;&#039;filt.params&#039;&#039;&lt;br /&gt;
* &#039;&#039;radii&#039;&#039;&lt;br /&gt;
* &#039;&#039;amb.crg.oxt&#039;&#039;&lt;br /&gt;
* &#039;&#039;vdw.siz&#039;&#039;&lt;br /&gt;
* &#039;&#039;delphi.def&#039;&#039;&lt;br /&gt;
* &#039;&#039;vdw.parms.amb.mindock&#039;&#039;&lt;br /&gt;
* &#039;&#039;prot.table.ambcrg.ambH&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Only the following are required; default or generated versions of the others will be used if they are not detected.&lt;br /&gt;
* &#039;&#039;rec.pdb&#039;&#039; or &#039;&#039;rec.crg.pdb&#039;&#039;.  Either is required, but not both.  If both are present, only &#039;&#039;rec.crg.pdb&#039;&#039; is used.&lt;br /&gt;
* &#039;&#039;xtal-lig.pdb&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
If you would like to use files not present in your current working directory, copy them into your job&#039;s working directory, e.g.:&lt;br /&gt;
 cp &amp;lt;FILE_PATH&amp;gt; &amp;lt;JOB_DIR_NAME&amp;gt;/working/&lt;br /&gt;
&lt;br /&gt;
Finally, configure the &#039;&#039;dockopt_config.yaml&#039;&#039; file in the job directory to your specifications. The parameters in this file govern the behavior of dockopt.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039; The &#039;&#039;dockopt_config.yaml&#039;&#039; file differs from the &#039;&#039;blastermaster_config.yaml&#039;&#039; file in that every parameter of the former may accept either a single value or a &#039;&#039;list of comma-separated values&#039;&#039;, which indicates a pool of values to attempt for that parameter. Multiple such multi-valued parameters may be provided, and all unique resultant docking configurations will be attempted. &lt;br /&gt;
&lt;br /&gt;
Single-valued YAML line format:&lt;br /&gt;
&lt;br /&gt;
 distance_to_surface: 1.0&lt;br /&gt;
&lt;br /&gt;
Multi-valued YAML line format:&lt;br /&gt;
&lt;br /&gt;
 distance_to_surface: [1.0, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9]&lt;br /&gt;
&lt;br /&gt;
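Since every unique combination of multi-valued parameters becomes its own docking configuration, the number of configurations grows multiplicatively with each added pool. A sketch of the counting (the parameter names and value pools here are illustrative, not dockopt internals):

```python
from itertools import product

# Hypothetical multi-valued entries from a dockopt_config.yaml
pools = {
    "distance_to_surface": [1.0, 1.1, 1.2],
    "radius": [0.8, 1.0],
}

# dockopt attempts every unique combination of values,
# i.e. the Cartesian product of the value pools.
configurations = list(product(*pools.values()))
print(len(configurations))  # 3 * 2 = 6 configurations
```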
== Environmental variables ==&lt;br /&gt;
&lt;br /&gt;
=== TMPDIR ===&lt;br /&gt;
&lt;br /&gt;
Designate where temporary job files should be placed. E.g.:&lt;br /&gt;
&lt;br /&gt;
 export TMPDIR=/scratch&lt;br /&gt;
&lt;br /&gt;
==== Note for UCSF researchers ====&lt;br /&gt;
&lt;br /&gt;
On the Wynton cluster, &#039;&#039;/scratch&#039;&#039; only exists on development nodes (not log nodes). Therefore, we recommend running on development nodes (see: https://wynton.ucsf.edu/hpc/get-started/development-prototyping.html). E.g.:&lt;br /&gt;
&lt;br /&gt;
 ssh dev1&lt;br /&gt;
 export TMPDIR=/scratch&lt;br /&gt;
&lt;br /&gt;
If a log node must be used, then &#039;&#039;/wynton/scratch&#039;&#039; may be used:&lt;br /&gt;
&lt;br /&gt;
 ssh log1&lt;br /&gt;
 export TMPDIR=/wynton/scratch&lt;br /&gt;
&lt;br /&gt;
=== job scheduler environmental variables ===&lt;br /&gt;
&lt;br /&gt;
In order for &#039;&#039;dockopt&#039;&#039; to know which scheduler to use, configure the environmental variables below for the job scheduler available on your cluster.&lt;br /&gt;
&lt;br /&gt;
==== Slurm ====&lt;br /&gt;
&lt;br /&gt;
E.g., on the UCSF Shoichet Lab Gimel cluster (on any node other than &#039;gimel&#039; itself, such as &#039;gimel5&#039;):&lt;br /&gt;
&lt;br /&gt;
 export SBATCH_EXEC=/usr/bin/sbatch&lt;br /&gt;
 export SQUEUE_EXEC=/usr/bin/squeue&lt;br /&gt;
&lt;br /&gt;
==== SGE ====&lt;br /&gt;
&lt;br /&gt;
On most clusters using SGE the following should be correct:&lt;br /&gt;
&lt;br /&gt;
 export QSTAT_EXEC=/opt/sge/bin/lx-amd64/qstat&lt;br /&gt;
 export QSUB_EXEC=/opt/sge/bin/lx-amd64/qsub&lt;br /&gt;
 export SGE_SETTINGS=/opt/sge/default/common/settings.sh&lt;br /&gt;
&lt;br /&gt;
===== Note for UCSF researchers =====&lt;br /&gt;
&lt;br /&gt;
The following is necessary on the UCSF Wynton cluster:&lt;br /&gt;
&lt;br /&gt;
 export QSTAT_EXEC=/opt/sge/bin/lx-amd64/qstat&lt;br /&gt;
 export QSUB_EXEC=/opt/sge/bin/lx-amd64/qsub&lt;br /&gt;
 export SGE_SETTINGS=/opt/sge/wynton/common/settings.sh&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;run&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
Once your job has been configured to your liking, navigate to the job directory and run &#039;&#039;dockopt&#039;&#039;:&lt;br /&gt;
 cd &amp;lt;JOB_DIR_NAME&amp;gt;&lt;br /&gt;
 pydock3 dockopt - run &amp;lt;JOB_SCHEDULER_NAME&amp;gt; [--retrodock_job_timeout_minutes=None] [--retrodock_job_max_reattempts=0] [--extra_submission_cmd_params_str=None] [--export_negatives_mol2=False]&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;JOB_SCHEDULER_NAME&amp;gt; is one of:&lt;br /&gt;
* &#039;&#039;sge&#039;&#039;&lt;br /&gt;
* &#039;&#039;slurm&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This will execute the many dockopt subroutines in sequence. Once this is done, the retrodock jobs for all created docking configurations are run in parallel via the scheduler. The state of the program will be printed to standard output as it runs.&lt;br /&gt;
&lt;br /&gt;
Once the dockopt job is complete, the following files will be generated in the job directory:&lt;br /&gt;
* &#039;&#039;report.html&#039;&#039;: contains (1) a histogram of the performance of all tested docking configurations compared against the performance distribution of a random classifier, showing whether the tested configurations are significantly better than chance (since many configurations are tested, a Bonferroni correction is applied to the significance threshold, dividing p=0.01 by the number of tested configurations); (2) ROC, charge, and energy plots of the top docking configurations, comparing positive-class vs. negative-class molecules; (3) box plots of enrichment for every multi-valued config parameter; and (4) heatmaps of enrichment for every pair of multi-valued config parameters.&lt;br /&gt;
* &#039;&#039;results.csv&#039;&#039;: parameter values, criterion values, and other information about each docking configuration.&lt;br /&gt;
&lt;br /&gt;
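The Bonferroni correction mentioned above is straightforward to reproduce. A sketch (the configuration count is illustrative):

```python
# Bonferroni correction: divide the base significance threshold
# by the number of hypotheses (here, tested docking configurations).
p = 0.01
num_configurations = 50  # illustrative count
corrected_threshold = p / num_configurations  # threshold is p / N
print(corrected_threshold)
```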
In addition, the best-performing retrodock jobs are copied to a dedicated sub-directory, &#039;&#039;best_retrodock_jobs/&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
Within &#039;&#039;best_retrodock_jobs/&#039;&#039;, there are the following files and sub-directories:&lt;br /&gt;
* &#039;&#039;dockfiles/&#039;&#039;: parameter files and INDOCK for the given docking configuration&lt;br /&gt;
* &#039;&#039;output/&#039;&#039;: contains: &lt;br /&gt;
** joblist&lt;br /&gt;
** sub-directories &#039;&#039;1/&#039;&#039; for positives and &#039;&#039;2/&#039;&#039; for negatives (the former containing OUTDOCK and test.mol2 files, the latter containing just OUTDOCK)&lt;br /&gt;
** log files for the retrodock jobs&lt;br /&gt;
* &#039;&#039;roc.png&#039;&#039;: the ROC enrichment curve (log-scaled x-axis) for the given docking configuration&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039; by default, a mol2 file is exported only for positives (&#039;&#039;output/1/&#039;&#039;), not for negatives (&#039;&#039;output/2/&#039;&#039;), in order to prevent disk space issues.&lt;br /&gt;
&lt;br /&gt;
== Note for UCSF Shoichet Lab members ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;pydock3&#039;&#039; is already installed on the following clusters. You can source the provided Python environment scripts to expose the &#039;&#039;pydock3&#039;&#039; executable and declare default environmental variables:&lt;br /&gt;
&lt;br /&gt;
=== Wynton ===&lt;br /&gt;
&lt;br /&gt;
  source /wynton/group/bks/soft/python_envs/env.sh&lt;br /&gt;
&lt;br /&gt;
=== Gimel ===&lt;br /&gt;
&lt;br /&gt;
Only nodes other than &#039;&#039;gimel&#039;&#039; itself are supported, e.g., &#039;&#039;gimel5&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
 ssh gimel5&lt;br /&gt;
 source /nfs/soft/ian/env.sh&lt;/div&gt;</summary>
		<author><name>Ianscottknight</name></author>
	</entry>
	<entry>
		<id>http://wiki.docking.org/index.php?title=Dockopt_(pydock3_script)&amp;diff=15420</id>
		<title>Dockopt (pydock3 script)</title>
		<link rel="alternate" type="text/html" href="http://wiki.docking.org/index.php?title=Dockopt_(pydock3_script)&amp;diff=15420"/>
		<updated>2023-06-05T01:45:13Z</updated>

		<summary type="html">&lt;p&gt;Ianscottknight: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;dockopt&#039;&#039; allows the generation of many different docking configurations, which are then evaluated &amp;amp; analyzed in parallel using a specified job scheduler (e.g. Slurm). If you are a Shoichet Lab user, please see the dedicated section below.&lt;br /&gt;
&lt;br /&gt;
To use DOCK 3.8, you must first license it and install it.&lt;br /&gt;
[[DOCK 3.8:How to install pydock3]]&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;new&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
Prepare &#039;&#039;rec.pdb&#039;&#039; and &#039;&#039;xtal-lig.pdb&#039;&#039; as described in Bender et al., 2021: https://pubmed.ncbi.nlm.nih.gov/34561691/&lt;br /&gt;
Alternatively, download pre-prepared sample files from dudez2022.docking.org.&lt;br /&gt;
&lt;br /&gt;
Be sure that you are in the directory containing the required input files: &lt;br /&gt;
* &#039;&#039;rec.pdb&#039;&#039; or &#039;&#039;rec.crg.pdb&#039;&#039;&lt;br /&gt;
* &#039;&#039;xtal-lig.pdb&#039;&#039;&lt;br /&gt;
* &#039;&#039;positives.tgz&#039;&#039;&lt;br /&gt;
* &#039;&#039;negatives.tgz&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Note the inclusion of &#039;&#039;positives.tgz&#039;&#039; and &#039;&#039;negatives.tgz&#039;&#039;. Each of these is a tarball of .db2 files. Each tarball represents a binary class for the binary classification task that the docking model is trained on. The positive class contains the molecules that you want the docking model to preferentially assign &#039;&#039;favorable&#039;&#039; docking scores to. The negative class contains the molecules that you want the docking model to preferentially assign &#039;&#039;unfavorable&#039;&#039; scores to. The most common strategy is to use a set of known ligands as the positive class and a larger set of property-matched decoys for the negative class (a decoy-to-active ratio of 50:1 is standard), but other strategies are supported. For example, to create a docking model that preferentially assigns favorable scores to agonists over antagonists, a set of agonists can be used as the positive class and a set of antagonists can be used for the negative class.&lt;br /&gt;
&lt;br /&gt;
You must build the molecules yourself (see: https://tldr.docking.org/start/build3d38). Each tarball should contain only DB2 files. For example, if you have a directory &#039;&#039;actives/&#039;&#039; of active molecules to use as the positive class for a DockOpt job, create &#039;&#039;positives.tgz&#039;&#039; as follows:&lt;br /&gt;
&lt;br /&gt;
 cd actives/&lt;br /&gt;
 tar -czf positives.tgz *.db2*&lt;br /&gt;
&lt;br /&gt;
Similarly, for a directory &#039;&#039;decoys/&#039;&#039; of corresponding decoy molecules, one can create &#039;&#039;negatives.tgz&#039;&#039; as follows:&lt;br /&gt;
&lt;br /&gt;
 cd decoys/&lt;br /&gt;
 tar -czf negatives.tgz *.db2*&lt;br /&gt;
&lt;br /&gt;
To create the file structure for your dockopt job, simply type&lt;br /&gt;
&lt;br /&gt;
 pydock3 dockopt - new [--job_dir_name=dockopt_job]&lt;br /&gt;
&lt;br /&gt;
By default, the job directory is named &#039;&#039;dockopt_job&#039;&#039;. To specify a different name, use the &amp;quot;--job_dir_name&amp;quot; flag. E.g.:&lt;br /&gt;
&lt;br /&gt;
 pydock3 dockopt - new --job_dir_name=dockopt_job_2&lt;br /&gt;
&lt;br /&gt;
The job directory contains two sub-directories: &lt;br /&gt;
# &#039;&#039;working&#039;&#039;: input files, intermediate blaster files, sub-directories for individual blastermaster subroutines&lt;br /&gt;
# &#039;&#039;retrodock_jobs&#039;&#039;: individual retrodock jobs for each docking configuration&lt;br /&gt;
&lt;br /&gt;
The key difference between the working directories of &#039;&#039;blastermaster&#039;&#039; and &#039;&#039;dockopt&#039;&#039; is that the working directory of &#039;&#039;dockopt&#039;&#039; may contain multiple variants of the blaster files (suffixed by a number, e.g. &amp;quot;box_1&amp;quot;). These variant files are used to create the different docking configurations specified by the multi-valued entries of &#039;&#039;dockopt_config.yaml&#039;&#039;. They are created efficiently, such that the same variant used in multiple docking configurations is not created more than once. &lt;br /&gt;
&lt;br /&gt;
If your current working directory contains any of the following files, then they will be automatically copied into the working directory within the created job directory. This feature is intended to simplify the process of configuring the dockopt job.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;rec.pdb&#039;&#039;&lt;br /&gt;
* &#039;&#039;rec.crg.pdb&#039;&#039;&lt;br /&gt;
* &#039;&#039;xtal-lig.pdb&#039;&#039;&lt;br /&gt;
* &#039;&#039;reduce_wwPDB_het_dict.txt&#039;&#039;&lt;br /&gt;
* &#039;&#039;filt.params&#039;&#039;&lt;br /&gt;
* &#039;&#039;radii&#039;&#039;&lt;br /&gt;
* &#039;&#039;amb.crg.oxt&#039;&#039;&lt;br /&gt;
* &#039;&#039;vdw.siz&#039;&#039;&lt;br /&gt;
* &#039;&#039;delphi.def&#039;&#039;&lt;br /&gt;
* &#039;&#039;vdw.parms.amb.mindock&#039;&#039;&lt;br /&gt;
* &#039;&#039;prot.table.ambcrg.ambH&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Only the following are required; default or generated versions of the others will be used if they are not detected.&lt;br /&gt;
* &#039;&#039;rec.pdb&#039;&#039; or &#039;&#039;rec.crg.pdb&#039;&#039;.  Either is required, but not both.  If both are present, only &#039;&#039;rec.crg.pdb&#039;&#039; is used.&lt;br /&gt;
* &#039;&#039;xtal-lig.pdb&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
If you would like to use files not present in your current working directory, copy them into your job&#039;s working directory, e.g.:&lt;br /&gt;
 cp &amp;lt;FILE_PATH&amp;gt; &amp;lt;JOB_DIR_NAME&amp;gt;/working/&lt;br /&gt;
&lt;br /&gt;
Finally, configure the &#039;&#039;dockopt_config.yaml&#039;&#039; file in the job directory to your specifications. The parameters in this file govern the behavior of dockopt.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039; The &#039;&#039;dockopt_config.yaml&#039;&#039; file differs from the &#039;&#039;blastermaster_config.yaml&#039;&#039; file in that every parameter of the former may accept either a single value or a &#039;&#039;list of comma-separated values&#039;&#039;, which indicates a pool of values to attempt for that parameter. Multiple such multi-valued parameters may be provided, and all unique resultant docking configurations will be attempted. &lt;br /&gt;
&lt;br /&gt;
Single-valued YAML line format:&lt;br /&gt;
&lt;br /&gt;
 distance_to_surface: 1.0&lt;br /&gt;
&lt;br /&gt;
Multi-valued YAML line format:&lt;br /&gt;
&lt;br /&gt;
 distance_to_surface: [1.0, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9]&lt;br /&gt;
&lt;br /&gt;
== Environmental variables ==&lt;br /&gt;
&lt;br /&gt;
=== TMPDIR ===&lt;br /&gt;
&lt;br /&gt;
Designate where temporary job files should be placed. E.g.:&lt;br /&gt;
&lt;br /&gt;
 export TMPDIR=/scratch&lt;br /&gt;
&lt;br /&gt;
==== Note for UCSF researchers ====&lt;br /&gt;
&lt;br /&gt;
On the Wynton cluster, &#039;&#039;/scratch&#039;&#039; only exists on development nodes (not log nodes). Therefore, we recommend running on development nodes (see: https://wynton.ucsf.edu/hpc/get-started/development-prototyping.html). E.g.:&lt;br /&gt;
&lt;br /&gt;
 ssh dev1&lt;br /&gt;
 export TMPDIR=/scratch&lt;br /&gt;
&lt;br /&gt;
If a log node must be used, then &#039;&#039;/wynton/scratch&#039;&#039; may be used:&lt;br /&gt;
&lt;br /&gt;
 ssh log1&lt;br /&gt;
 export TMPDIR=/wynton/scratch&lt;br /&gt;
&lt;br /&gt;
=== job scheduler environmental variables ===&lt;br /&gt;
&lt;br /&gt;
In order for &#039;&#039;dockopt&#039;&#039; to know which scheduler to use, configure the environmental variables below for the job scheduler available on your cluster.&lt;br /&gt;
&lt;br /&gt;
==== Slurm ====&lt;br /&gt;
&lt;br /&gt;
E.g., on the UCSF Shoichet Lab Gimel cluster (on any node other than &#039;gimel&#039; itself, such as &#039;gimel5&#039;):&lt;br /&gt;
&lt;br /&gt;
 export SBATCH_EXEC=/usr/bin/sbatch&lt;br /&gt;
 export SQUEUE_EXEC=/usr/bin/squeue&lt;br /&gt;
&lt;br /&gt;
==== SGE ====&lt;br /&gt;
&lt;br /&gt;
On most clusters using SGE the following should be correct:&lt;br /&gt;
&lt;br /&gt;
 export QSTAT_EXEC=/opt/sge/bin/lx-amd64/qstat&lt;br /&gt;
 export QSUB_EXEC=/opt/sge/bin/lx-amd64/qsub&lt;br /&gt;
 export SGE_SETTINGS=/opt/sge/default/common/settings.sh&lt;br /&gt;
&lt;br /&gt;
===== Note for UCSF researchers =====&lt;br /&gt;
&lt;br /&gt;
The following is necessary on the UCSF Wynton cluster:&lt;br /&gt;
&lt;br /&gt;
 export QSTAT_EXEC=/opt/sge/bin/lx-amd64/qstat&lt;br /&gt;
 export QSUB_EXEC=/opt/sge/bin/lx-amd64/qsub&lt;br /&gt;
 export SGE_SETTINGS=/opt/sge/wynton/common/settings.sh&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;run&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
Once your job has been configured to your liking, navigate to the job directory and run &#039;&#039;dockopt&#039;&#039;:&lt;br /&gt;
 cd &amp;lt;JOB_DIR_NAME&amp;gt;&lt;br /&gt;
 pydock3 dockopt - run &amp;lt;JOB_SCHEDULER_NAME&amp;gt; [--retrodock_job_timeout_minutes=None] [--retrodock_job_max_reattempts=0] [--extra_submission_cmd_params_str=None] [--export_negatives_mol2=False]&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;JOB_SCHEDULER_NAME&amp;gt; is one of:&lt;br /&gt;
* &#039;&#039;sge&#039;&#039;&lt;br /&gt;
* &#039;&#039;slurm&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This will execute the many dockopt subroutines in sequence. Once this is done, the retrodock jobs for all created docking configurations are run in parallel via the scheduler. The state of the program will be printed to standard output as it runs.&lt;br /&gt;
&lt;br /&gt;
Once the dockopt job is complete, the following files will be generated in the job directory:&lt;br /&gt;
* &#039;&#039;report.html&#039;&#039;: contains (1) a histogram of the performance of all tested docking configurations compared against the performance distribution of a random classifier, showing whether the tested configurations are significantly better than chance (since many configurations are tested, a Bonferroni correction is applied to the significance threshold, dividing p=0.01 by the number of tested configurations); (2) ROC, charge, and energy plots of the top docking configurations, comparing positive-class vs. negative-class molecules; (3) box plots of enrichment for every multi-valued config parameter; and (4) heatmaps of enrichment for every pair of multi-valued config parameters.&lt;br /&gt;
* &#039;&#039;results.csv&#039;&#039;: parameter values, criterion values, and other information about each docking configuration.&lt;br /&gt;
&lt;br /&gt;
In addition, the best-performing retrodock jobs are copied to a dedicated sub-directory, &#039;&#039;best_retrodock_jobs/&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
Within &#039;&#039;best_retrodock_jobs/&#039;&#039;, there are the following files and sub-directories:&lt;br /&gt;
* &#039;&#039;dockfiles/&#039;&#039;: parameter files and INDOCK for the given docking configuration&lt;br /&gt;
* &#039;&#039;output/&#039;&#039;: contains: &lt;br /&gt;
** joblist&lt;br /&gt;
** sub-directories &#039;&#039;1/&#039;&#039; for positives and &#039;&#039;2/&#039;&#039; for negatives (the former containing OUTDOCK and test.mol2 files, the latter containing just OUTDOCK)&lt;br /&gt;
** log files for the retrodock jobs&lt;br /&gt;
* &#039;&#039;roc.png&#039;&#039;: the ROC enrichment curve (log-scaled x-axis) for the given docking configuration&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039; by default, a mol2 file is exported only for positives (&#039;&#039;output/1/&#039;&#039;), not for negatives (&#039;&#039;output/2/&#039;&#039;), in order to prevent disk space issues.&lt;br /&gt;
&lt;br /&gt;
== Note for UCSF Shoichet Lab members ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;pydock3&#039;&#039; is already installed on the following clusters. You can source the provided Python environment scripts to expose the &#039;&#039;pydock3&#039;&#039; executable and declare default environmental variables:&lt;br /&gt;
&lt;br /&gt;
=== Wynton ===&lt;br /&gt;
&lt;br /&gt;
  source /wynton/group/bks/soft/python_envs/env.sh&lt;br /&gt;
&lt;br /&gt;
=== Gimel ===&lt;br /&gt;
&lt;br /&gt;
Only nodes other than &#039;&#039;gimel&#039;&#039; itself are supported, e.g., &#039;&#039;gimel5&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
 ssh gimel5&lt;br /&gt;
 source /nfs/soft/ian/env.sh&lt;/div&gt;</summary>
		<author><name>Ianscottknight</name></author>
	</entry>
	<entry>
		<id>http://wiki.docking.org/index.php?title=Dockopt_(pydock3_script)&amp;diff=15419</id>
		<title>Dockopt (pydock3 script)</title>
		<link rel="alternate" type="text/html" href="http://wiki.docking.org/index.php?title=Dockopt_(pydock3_script)&amp;diff=15419"/>
		<updated>2023-06-05T01:44:02Z</updated>

		<summary type="html">&lt;p&gt;Ianscottknight: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;dockopt&#039;&#039; allows the generation of many different docking configurations, which are then evaluated &amp;amp; analyzed in parallel using a specified job scheduler (e.g. Slurm). If you are a Shoichet Lab user, please see the dedicated section below.&lt;br /&gt;
&lt;br /&gt;
To use DOCK 3.8, you must first license it and install it.&lt;br /&gt;
[[DOCK 3.8:How to install pydock3]]&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;new&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
Prepare &#039;&#039;rec.pdb&#039;&#039; and &#039;&#039;xtal-lig.pdb&#039;&#039; as described in Bender et al., 2021: https://pubmed.ncbi.nlm.nih.gov/34561691/&lt;br /&gt;
Alternatively, download pre-prepared sample files from dudez2022.docking.org.&lt;br /&gt;
&lt;br /&gt;
Be sure that you are in the directory containing the required input files: &lt;br /&gt;
* &#039;&#039;rec.pdb&#039;&#039; or &#039;&#039;rec.crg.pdb&#039;&#039;&lt;br /&gt;
* &#039;&#039;xtal-lig.pdb&#039;&#039;&lt;br /&gt;
* &#039;&#039;positives.tgz&#039;&#039;&lt;br /&gt;
* &#039;&#039;negatives.tgz&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Note the inclusion of &#039;&#039;positives.tgz&#039;&#039; and &#039;&#039;negatives.tgz&#039;&#039;. Each of these is a tarball of .db2 files. Each tarball represents a binary class for the binary classification task that the docking model is trained on. The positive class contains the molecules that you want the docking model to preferentially assign &#039;&#039;favorable&#039;&#039; docking scores to. The negative class contains the molecules that you want the docking model to preferentially assign &#039;&#039;unfavorable&#039;&#039; scores to. The most common strategy is to use a set of known ligands as the positive class and a larger set of property-matched decoys for the negative class (a decoy-to-active ratio of 50:1 is standard), but other strategies are supported. For example, to create a docking model that preferentially assigns favorable scores to agonists over antagonists, a set of agonists can be used as the positive class and a set of antagonists can be used for the negative class.&lt;br /&gt;
&lt;br /&gt;
You must build the molecules yourself (see: https://tldr.docking.org/start/build3d38). Each tarball should contain only DB2 files. For example, if you have a directory &#039;&#039;actives/&#039;&#039; of active molecules to use as the positive class for a DockOpt job, create &#039;&#039;positives.tgz&#039;&#039; as follows:&lt;br /&gt;
&lt;br /&gt;
 cd actives/&lt;br /&gt;
 tar -czf positives.tgz *.db2*&lt;br /&gt;
&lt;br /&gt;
Similarly, for a directory &#039;&#039;decoys/&#039;&#039; of corresponding decoy molecules, one can create &#039;&#039;negatives.tgz&#039;&#039; as follows:&lt;br /&gt;
&lt;br /&gt;
 cd decoys/&lt;br /&gt;
 tar -czf negatives.tgz *.db2*&lt;br /&gt;
&lt;br /&gt;
To create the file structure for your dockopt job, simply type&lt;br /&gt;
&lt;br /&gt;
 pydock3 dockopt - new&lt;br /&gt;
&lt;br /&gt;
By default, the job directory is named &#039;&#039;dockopt_job&#039;&#039;. To specify a different name, use the &amp;quot;--job_dir_name&amp;quot; flag. E.g.:&lt;br /&gt;
&lt;br /&gt;
 pydock3 dockopt - new --job_dir_name=dockopt_job_2&lt;br /&gt;
&lt;br /&gt;
The job directory contains two sub-directories: &lt;br /&gt;
# &#039;&#039;working&#039;&#039;: input files, intermediate blaster files, sub-directories for individual blastermaster subroutines&lt;br /&gt;
# &#039;&#039;retrodock_jobs&#039;&#039;: individual retrodock jobs for each docking configuration&lt;br /&gt;
&lt;br /&gt;
The key difference between the working directories of &#039;&#039;blastermaster&#039;&#039; and &#039;&#039;dockopt&#039;&#039; is that the working directory of &#039;&#039;dockopt&#039;&#039; may contain multiple variants of the blaster files (suffixed by a number, e.g. &amp;quot;box_1&amp;quot;). These variant files are used to create the different docking configurations specified by the multi-valued entries of &#039;&#039;dockopt_config.yaml&#039;&#039;. They are created efficiently, such that the same variant used in multiple docking configurations is not created more than once. &lt;br /&gt;
&lt;br /&gt;
If your current working directory contains any of the following files, then they will be automatically copied into the working directory within the created job directory. This feature is intended to simplify the process of configuring the dockopt job.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;rec.pdb&#039;&#039;&lt;br /&gt;
* &#039;&#039;rec.crg.pdb&#039;&#039;&lt;br /&gt;
* &#039;&#039;xtal-lig.pdb&#039;&#039;&lt;br /&gt;
* &#039;&#039;reduce_wwPDB_het_dict.txt&#039;&#039;&lt;br /&gt;
* &#039;&#039;filt.params&#039;&#039;&lt;br /&gt;
* &#039;&#039;radii&#039;&#039;&lt;br /&gt;
* &#039;&#039;amb.crg.oxt&#039;&#039;&lt;br /&gt;
* &#039;&#039;vdw.siz&#039;&#039;&lt;br /&gt;
* &#039;&#039;delphi.def&#039;&#039;&lt;br /&gt;
* &#039;&#039;vdw.parms.amb.mindock&#039;&#039;&lt;br /&gt;
* &#039;&#039;prot.table.ambcrg.ambH&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Only the following are required; if the others are not detected, default or generated versions of them will be used instead.&lt;br /&gt;
* &#039;&#039;rec.pdb&#039;&#039; or &#039;&#039;rec.crg.pdb&#039;&#039; (either is required, but not both; if both are present, only &#039;&#039;rec.crg.pdb&#039;&#039; is used)&lt;br /&gt;
* &#039;&#039;xtal-lig.pdb&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
If you would like to use files not present in your current working directory, copy them into your job&#039;s working directory, e.g.:&lt;br /&gt;
 cp &amp;lt;FILE_PATH&amp;gt; &amp;lt;JOB_DIR_NAME&amp;gt;/working/&lt;br /&gt;
&lt;br /&gt;
Finally, configure the &#039;&#039;dockopt_config.yaml&#039;&#039; file in the job directory to your specifications. The parameters in this file govern the behavior of dockopt.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039; The &#039;&#039;dockopt_config.yaml&#039;&#039; file differs from the &#039;&#039;blastermaster_config.yaml&#039;&#039; file in that every parameter of the former may accept either a single value or a &#039;&#039;list of comma-separated values&#039;&#039;, which indicates a pool of values to attempt for that parameter. Multiple such multi-valued parameters may be provided, and all unique resultant docking configurations will be attempted. &lt;br /&gt;
&lt;br /&gt;
Single-valued YAML line format:&lt;br /&gt;
&lt;br /&gt;
 distance_to_surface: 1.0&lt;br /&gt;
&lt;br /&gt;
Multi-valued YAML line format:&lt;br /&gt;
&lt;br /&gt;
 distance_to_surface: [1.0, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9]&lt;br /&gt;
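&lt;br /&gt;
For example, if two parameters are multi-valued, every unique combination is attempted. The following sketch (the second parameter name is illustrative, not necessarily a real config key) would yield 2 × 3 = 6 docking configurations:&lt;br /&gt;
&lt;br /&gt;
 distance_to_surface: [1.0, 1.5]&lt;br /&gt;
 thin_spheres_elec_distance_to_ligand: [1.0, 1.5, 2.0]&lt;br /&gt;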
&lt;br /&gt;
== Environmental variables ==&lt;br /&gt;
&lt;br /&gt;
=== TMPDIR ===&lt;br /&gt;
&lt;br /&gt;
Designate where temporary job files should be placed. E.g.:&lt;br /&gt;
&lt;br /&gt;
 export TMPDIR=/scratch&lt;br /&gt;
&lt;br /&gt;
==== Note for UCSF researchers ====&lt;br /&gt;
&lt;br /&gt;
On the Wynton cluster, &#039;&#039;/scratch&#039;&#039; only exists on development nodes (not log nodes). Therefore, we recommend running on development nodes (see: https://wynton.ucsf.edu/hpc/get-started/development-prototyping.html). E.g.:&lt;br /&gt;
&lt;br /&gt;
 ssh dev1&lt;br /&gt;
 export TMPDIR=/scratch&lt;br /&gt;
&lt;br /&gt;
If a log node must be used, then &#039;&#039;/wynton/scratch&#039;&#039; may be used:&lt;br /&gt;
&lt;br /&gt;
 ssh log1&lt;br /&gt;
 export TMPDIR=/wynton/scratch&lt;br /&gt;
&lt;br /&gt;
=== job scheduler environmental variables ===&lt;br /&gt;
&lt;br /&gt;
In order for &#039;&#039;dockopt&#039;&#039; to know which scheduler it should use, please set the following environmental variables according to the job scheduler your cluster uses.&lt;br /&gt;
&lt;br /&gt;
==== Slurm ====&lt;br /&gt;
&lt;br /&gt;
E.g., on the UCSF Shoichet Lab Gimel cluster (on any node other than &#039;gimel&#039; itself, such as &#039;gimel5&#039;):&lt;br /&gt;
&lt;br /&gt;
 export SBATCH_EXEC=/usr/bin/sbatch&lt;br /&gt;
 export SQUEUE_EXEC=/usr/bin/squeue&lt;br /&gt;
&lt;br /&gt;
==== SGE ====&lt;br /&gt;
&lt;br /&gt;
On most clusters using SGE the following should be correct:&lt;br /&gt;
&lt;br /&gt;
 export QSTAT_EXEC=/opt/sge/bin/lx-amd64/qstat&lt;br /&gt;
 export QSUB_EXEC=/opt/sge/bin/lx-amd64/qsub&lt;br /&gt;
 export SGE_SETTINGS=/opt/sge/default/common/settings.sh&lt;br /&gt;
&lt;br /&gt;
===== Note for UCSF researchers =====&lt;br /&gt;
&lt;br /&gt;
The following is necessary on the UCSF Wynton cluster:&lt;br /&gt;
&lt;br /&gt;
 export QSTAT_EXEC=/opt/sge/bin/lx-amd64/qstat&lt;br /&gt;
 export QSUB_EXEC=/opt/sge/bin/lx-amd64/qsub&lt;br /&gt;
 export SGE_SETTINGS=/opt/sge/wynton/common/settings.sh&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;run&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
Once your job has been configured to your liking, navigate to the job directory and run &#039;&#039;dockopt&#039;&#039;:&lt;br /&gt;
 cd &amp;lt;JOB_DIR_NAME&amp;gt;&lt;br /&gt;
 pydock3 dockopt - run &amp;lt;JOB_SCHEDULER_NAME&amp;gt; [--retrodock_job_timeout_minutes=None] [--retrodock_job_max_reattempts=0] [--extra_submission_cmd_params_str=None] [--export_negatives_mol2=False]&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;JOB_SCHEDULER_NAME&amp;gt; is one of:&lt;br /&gt;
* &#039;&#039;sge&#039;&#039;&lt;br /&gt;
* &#039;&#039;slurm&#039;&#039;&lt;br /&gt;
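&lt;br /&gt;
For example, to run on Slurm with a two-hour timeout and one reattempt per retrodock job (the option values here are illustrative):&lt;br /&gt;
&lt;br /&gt;
 pydock3 dockopt - run slurm --retrodock_job_timeout_minutes=120 --retrodock_job_max_reattempts=1&lt;br /&gt;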
&lt;br /&gt;
This will execute the many dockopt subroutines in sequence. Once this is done, the retrodock jobs for all created docking configurations are run in parallel via the scheduler. The state of the program will be printed to standard output as it runs.&lt;br /&gt;
&lt;br /&gt;
Once the dockopt job is complete, the following files will be generated in the job directory:&lt;br /&gt;
* &#039;&#039;report.html&#039;&#039;: contains (1) a histogram comparing the performance of all tested docking configurations against the performance distribution of a random classifier, showing whether the tested configurations are significantly better than what a random classifier could produce (since many configurations are tested, a Bonferroni correction is applied to the significance threshold, dividing p=0.01 by the number of tested configurations); (2) ROC, charge, and energy plots of the top docking configurations, comparing positive-class molecules vs. negative-class molecules; (3) box plots of enrichment for every multi-valued config parameter; and (4) heatmaps of enrichment for every pair of multi-valued config parameters.&lt;br /&gt;
* &#039;&#039;results.csv&#039;&#039;: parameter values, criterion values, and other information about each docking configuration.&lt;br /&gt;
&lt;br /&gt;
In addition, some number of the best retrodock jobs will be copied to their own sub-directory &#039;&#039;best_retrodock_jobs/&#039;&#039;. &lt;br /&gt;
&lt;br /&gt;
Within &#039;&#039;best_retrodock_jobs/&#039;&#039;, there are the following files and sub-directories:&lt;br /&gt;
* &#039;&#039;dockfiles/&#039;&#039;: parameter files and INDOCK for the given docking configuration&lt;br /&gt;
* &#039;&#039;output/&#039;&#039;: contains: &lt;br /&gt;
** joblist&lt;br /&gt;
** sub-directories &#039;&#039;1/&#039;&#039; for positives and &#039;&#039;2/&#039;&#039; for negatives (the former containing OUTDOCK and test.mol2 files, the latter containing just OUTDOCK)&lt;br /&gt;
** log files for the retrodock jobs&lt;br /&gt;
* &#039;&#039;roc.png&#039;&#039;: the ROC enrichment curve (log-scaled x-axis) for the given docking configuration&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039; by default, a mol2 file is exported only for positives (&#039;&#039;output/1/&#039;&#039;), not for negatives (&#039;&#039;output/2/&#039;&#039;), in order to prevent disk space issues.&lt;br /&gt;
&lt;br /&gt;
== Note for UCSF Shoichet Lab members ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;pydock3&#039;&#039; is already installed on the following clusters. You can source the provided Python environment scripts to expose the &#039;&#039;pydock3&#039;&#039; executable and declare default environmental variables:&lt;br /&gt;
&lt;br /&gt;
=== Wynton ===&lt;br /&gt;
&lt;br /&gt;
  source /wynton/group/bks/soft/python_envs/env.sh&lt;br /&gt;
&lt;br /&gt;
=== Gimel ===&lt;br /&gt;
&lt;br /&gt;
Only nodes other than &#039;&#039;gimel&#039;&#039; itself are supported, e.g., &#039;&#039;gimel5&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
 ssh gimel5&lt;br /&gt;
 source /nfs/soft/ian/env.sh&lt;/div&gt;</summary>
		<author><name>Ianscottknight</name></author>
	</entry>
	<entry>
		<id>http://wiki.docking.org/index.php?title=Dockopt_(pydock3_script)&amp;diff=15418</id>
		<title>Dockopt (pydock3 script)</title>
		<link rel="alternate" type="text/html" href="http://wiki.docking.org/index.php?title=Dockopt_(pydock3_script)&amp;diff=15418"/>
		<updated>2023-06-05T01:42:46Z</updated>

		<summary type="html">&lt;p&gt;Ianscottknight: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;dockopt&#039;&#039; allows the generation of many different docking configurations which are then evaluated &amp;amp; analyzed in parallel using a specified job scheduler (e.g. Slurm). If you are a Shoichet Lab user, please see a special section for you, below.&lt;br /&gt;
&lt;br /&gt;
To use DOCK 3.8, you must first license it and install it.&lt;br /&gt;
[[DOCK 3.8:How to install pydock3]]&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;new&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
Prepare &#039;&#039;rec.pdb&#039;&#039; and &#039;&#039;xtal-lig.pdb&#039;&#039; as described in Bender et al., 2021: https://pubmed.ncbi.nlm.nih.gov/34561691/&lt;br /&gt;
Alternatively, download pre-prepared sample files from dudez2022.docking.org.&lt;br /&gt;
&lt;br /&gt;
Be sure that you are in the directory containing the required input files: &lt;br /&gt;
* &#039;&#039;rec.pdb&#039;&#039; &lt;br /&gt;
* &#039;&#039;xtal-lig.pdb&#039;&#039;&lt;br /&gt;
* &#039;&#039;positives.tgz&#039;&#039;&lt;br /&gt;
* &#039;&#039;negatives.tgz&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Note the inclusion of &#039;&#039;positives.tgz&#039;&#039; and &#039;&#039;negatives.tgz&#039;&#039;. Each of these is a tarball of .db2 files. Each tarball represents a binary class for the binary classification task that the docking model is trained on. The positive class contains the molecules that you want the docking model to preferentially assign &#039;&#039;favorable&#039;&#039; docking scores to. The negative class contains the molecules that you want the docking model to preferentially assign &#039;&#039;unfavorable&#039;&#039; scores to. The most common strategy is to use a set of known ligands as the positive class and a larger set of property-matched decoys for the negative class (a decoy-to-active ratio of 50:1 is standard), but other strategies are supported. For example, to create a docking model that preferentially assigns favorable scores to agonists over antagonists, a set of agonists can be used as the positive class and a set of antagonists can be used for the negative class.&lt;br /&gt;
&lt;br /&gt;
You will need to build the molecules yourself (see: https://tldr.docking.org/start/build3d38). Each tarball should contain only DB2 files. For example, if one has a directory &#039;&#039;actives/&#039;&#039; of active molecules to use as the positives for a DockOpt job, &#039;&#039;positives.tgz&#039;&#039; can be created as follows:&lt;br /&gt;
&lt;br /&gt;
 cd actives/&lt;br /&gt;
 tar -czf positives.tgz *.db2*&lt;br /&gt;
&lt;br /&gt;
Similarly, for a directory &#039;&#039;decoys/&#039;&#039; of corresponding decoy molecules, one can create &#039;&#039;negatives.tgz&#039;&#039; as follows:&lt;br /&gt;
&lt;br /&gt;
 cd decoys/&lt;br /&gt;
 tar -czf negatives.tgz *.db2*&lt;br /&gt;
&lt;br /&gt;
To create the file structure for your dockopt job, simply type&lt;br /&gt;
&lt;br /&gt;
 pydock3 dockopt - new&lt;br /&gt;
&lt;br /&gt;
By default, the job directory is named &#039;&#039;dockopt_job&#039;&#039;. To specify a different name, use the &amp;quot;--job_dir_name&amp;quot; flag. E.g.:&lt;br /&gt;
&lt;br /&gt;
 pydock3 dockopt - new --job_dir_name=dockopt_job_2&lt;br /&gt;
&lt;br /&gt;
The job directory contains two sub-directories: &lt;br /&gt;
# &#039;&#039;working&#039;&#039;: input files, intermediate blaster files, sub-directories for individual blastermaster subroutines&lt;br /&gt;
# &#039;&#039;retrodock_jobs&#039;&#039;: individual retrodock jobs for each docking configuration&lt;br /&gt;
&lt;br /&gt;
The key difference between the working directories of &#039;&#039;blastermaster&#039;&#039; and &#039;&#039;dockopt&#039;&#039; is that the working directory of &#039;&#039;dockopt&#039;&#039; may contain multiple variants of the blaster files (suffixed by a number, e.g. &amp;quot;box_1&amp;quot;). These variant files are used to create the different docking configurations specified by the multi-valued entries of &#039;&#039;dockopt_config.yaml&#039;&#039;. They are created efficiently, such that the same variant used in multiple docking configurations is not created more than once. &lt;br /&gt;
&lt;br /&gt;
If your current working directory contains any of the following files, then they will be automatically copied into the working directory within the created job directory. This feature is intended to simplify the process of configuring the dockopt job.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;rec.pdb&#039;&#039;&lt;br /&gt;
* &#039;&#039;xtal-lig.pdb&#039;&#039;&lt;br /&gt;
* &#039;&#039;rec.crg.pdb&#039;&#039;&lt;br /&gt;
* &#039;&#039;reduce_wwPDB_het_dict.txt&#039;&#039;&lt;br /&gt;
* &#039;&#039;filt.params&#039;&#039;&lt;br /&gt;
* &#039;&#039;radii&#039;&#039;&lt;br /&gt;
* &#039;&#039;amb.crg.oxt&#039;&#039;&lt;br /&gt;
* &#039;&#039;vdw.siz&#039;&#039;&lt;br /&gt;
* &#039;&#039;delphi.def&#039;&#039;&lt;br /&gt;
* &#039;&#039;vdw.parms.amb.mindock&#039;&#039;&lt;br /&gt;
* &#039;&#039;prot.table.ambcrg.ambH&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Only the following are required; if the others are not detected, default or generated versions of them will be used instead.&lt;br /&gt;
* &#039;&#039;rec.pdb&#039;&#039; or &#039;&#039;rec.crg.pdb&#039;&#039; (either is required, but not both; if both are present, only &#039;&#039;rec.crg.pdb&#039;&#039; is used)&lt;br /&gt;
* &#039;&#039;xtal-lig.pdb&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
If you would like to use files not present in your current working directory, copy them into your job&#039;s working directory, e.g.:&lt;br /&gt;
 cp &amp;lt;FILE_PATH&amp;gt; &amp;lt;JOB_DIR_NAME&amp;gt;/working/&lt;br /&gt;
&lt;br /&gt;
Finally, configure the &#039;&#039;dockopt_config.yaml&#039;&#039; file in the job directory to your specifications. The parameters in this file govern the behavior of dockopt.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039; The &#039;&#039;dockopt_config.yaml&#039;&#039; file differs from the &#039;&#039;blastermaster_config.yaml&#039;&#039; file in that every parameter of the former may accept either a single value or a &#039;&#039;list of comma-separated values&#039;&#039;, which indicates a pool of values to attempt for that parameter. Multiple such multi-valued parameters may be provided, and all unique resultant docking configurations will be attempted. &lt;br /&gt;
&lt;br /&gt;
Single-valued YAML line format:&lt;br /&gt;
&lt;br /&gt;
 distance_to_surface: 1.0&lt;br /&gt;
&lt;br /&gt;
Multi-valued YAML line format:&lt;br /&gt;
&lt;br /&gt;
 distance_to_surface: [1.0, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9]&lt;br /&gt;
&lt;br /&gt;
== Environmental variables ==&lt;br /&gt;
&lt;br /&gt;
=== TMPDIR ===&lt;br /&gt;
&lt;br /&gt;
Designate where temporary job files should be placed. E.g.:&lt;br /&gt;
&lt;br /&gt;
 export TMPDIR=/scratch&lt;br /&gt;
&lt;br /&gt;
==== Note for UCSF researchers ====&lt;br /&gt;
&lt;br /&gt;
On the Wynton cluster, &#039;&#039;/scratch&#039;&#039; only exists on development nodes (not log nodes). Therefore, we recommend running on development nodes (see: https://wynton.ucsf.edu/hpc/get-started/development-prototyping.html). E.g.:&lt;br /&gt;
&lt;br /&gt;
 ssh dev1&lt;br /&gt;
 export TMPDIR=/scratch&lt;br /&gt;
&lt;br /&gt;
If a log node must be used, then &#039;&#039;/wynton/scratch&#039;&#039; may be used:&lt;br /&gt;
&lt;br /&gt;
 ssh log1&lt;br /&gt;
 export TMPDIR=/wynton/scratch&lt;br /&gt;
&lt;br /&gt;
=== job scheduler environmental variables ===&lt;br /&gt;
&lt;br /&gt;
In order for &#039;&#039;dockopt&#039;&#039; to know which scheduler it should use, please set the following environmental variables according to the job scheduler your cluster uses.&lt;br /&gt;
&lt;br /&gt;
==== Slurm ====&lt;br /&gt;
&lt;br /&gt;
E.g., on the UCSF Shoichet Lab Gimel cluster (on any node other than &#039;gimel&#039; itself, such as &#039;gimel5&#039;):&lt;br /&gt;
&lt;br /&gt;
 export SBATCH_EXEC=/usr/bin/sbatch&lt;br /&gt;
 export SQUEUE_EXEC=/usr/bin/squeue&lt;br /&gt;
&lt;br /&gt;
==== SGE ====&lt;br /&gt;
&lt;br /&gt;
On most clusters using SGE the following should be correct:&lt;br /&gt;
&lt;br /&gt;
 export QSTAT_EXEC=/opt/sge/bin/lx-amd64/qstat&lt;br /&gt;
 export QSUB_EXEC=/opt/sge/bin/lx-amd64/qsub&lt;br /&gt;
 export SGE_SETTINGS=/opt/sge/default/common/settings.sh&lt;br /&gt;
&lt;br /&gt;
===== Note for UCSF researchers =====&lt;br /&gt;
&lt;br /&gt;
The following is necessary on the UCSF Wynton cluster:&lt;br /&gt;
&lt;br /&gt;
 export QSTAT_EXEC=/opt/sge/bin/lx-amd64/qstat&lt;br /&gt;
 export QSUB_EXEC=/opt/sge/bin/lx-amd64/qsub&lt;br /&gt;
 export SGE_SETTINGS=/opt/sge/wynton/common/settings.sh&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;run&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
Once your job has been configured to your liking, navigate to the job directory and run &#039;&#039;dockopt&#039;&#039;:&lt;br /&gt;
 cd &amp;lt;JOB_DIR_NAME&amp;gt;&lt;br /&gt;
 pydock3 dockopt - run &amp;lt;JOB_SCHEDULER_NAME&amp;gt; [--retrodock_job_timeout_minutes=None] [--retrodock_job_max_reattempts=0] [--extra_submission_cmd_params_str=None] [--export_negatives_mol2=False]&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;JOB_SCHEDULER_NAME&amp;gt; is one of:&lt;br /&gt;
* &#039;&#039;sge&#039;&#039;&lt;br /&gt;
* &#039;&#039;slurm&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This will execute the many dockopt subroutines in sequence. Once this is done, the retrodock jobs for all created docking configurations are run in parallel via the scheduler. The state of the program will be printed to standard output as it runs.&lt;br /&gt;
&lt;br /&gt;
Once the dockopt job is complete, the following files will be generated in the job directory:&lt;br /&gt;
* &#039;&#039;report.html&#039;&#039;: contains (1) a histogram comparing the performance of all tested docking configurations against the performance distribution of a random classifier, showing whether the tested configurations are significantly better than what a random classifier could produce (since many configurations are tested, a Bonferroni correction is applied to the significance threshold, dividing p=0.01 by the number of tested configurations); (2) ROC, charge, and energy plots of the top docking configurations, comparing positive-class molecules vs. negative-class molecules; (3) box plots of enrichment for every multi-valued config parameter; and (4) heatmaps of enrichment for every pair of multi-valued config parameters.&lt;br /&gt;
* &#039;&#039;results.csv&#039;&#039;: parameter values, criterion values, and other information about each docking configuration.&lt;br /&gt;
&lt;br /&gt;
In addition, some number of the best retrodock jobs will be copied to their own sub-directory &#039;&#039;best_retrodock_jobs/&#039;&#039;. &lt;br /&gt;
&lt;br /&gt;
Within &#039;&#039;best_retrodock_jobs/&#039;&#039;, there are the following files and sub-directories:&lt;br /&gt;
* &#039;&#039;dockfiles/&#039;&#039;: parameter files and INDOCK for the given docking configuration&lt;br /&gt;
* &#039;&#039;output/&#039;&#039;: contains: &lt;br /&gt;
** joblist&lt;br /&gt;
** sub-directories &#039;&#039;1/&#039;&#039; for positives and &#039;&#039;2/&#039;&#039; for negatives (the former containing OUTDOCK and test.mol2 files, the latter containing just OUTDOCK)&lt;br /&gt;
** log files for the retrodock jobs&lt;br /&gt;
* &#039;&#039;roc.png&#039;&#039;: the ROC enrichment curve (log-scaled x-axis) for the given docking configuration&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039; by default, a mol2 file is exported only for positives (&#039;&#039;output/1/&#039;&#039;), not for negatives (&#039;&#039;output/2/&#039;&#039;), in order to prevent disk space issues.&lt;br /&gt;
&lt;br /&gt;
== Note for UCSF Shoichet Lab members ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;pydock3&#039;&#039; is already installed on the following clusters. You can source the provided Python environment scripts to expose the &#039;&#039;pydock3&#039;&#039; executable and declare default environmental variables:&lt;br /&gt;
&lt;br /&gt;
=== Wynton ===&lt;br /&gt;
&lt;br /&gt;
  source /wynton/group/bks/soft/python_envs/env.sh&lt;br /&gt;
&lt;br /&gt;
=== Gimel ===&lt;br /&gt;
&lt;br /&gt;
Only nodes other than &#039;&#039;gimel&#039;&#039; itself are supported, e.g., &#039;&#039;gimel5&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
 ssh gimel5&lt;br /&gt;
 source /nfs/soft/ian/env.sh&lt;/div&gt;</summary>
		<author><name>Ianscottknight</name></author>
	</entry>
	<entry>
		<id>http://wiki.docking.org/index.php?title=Dockopt_(pydock3_script)&amp;diff=15417</id>
		<title>Dockopt (pydock3 script)</title>
		<link rel="alternate" type="text/html" href="http://wiki.docking.org/index.php?title=Dockopt_(pydock3_script)&amp;diff=15417"/>
		<updated>2023-06-05T01:39:26Z</updated>

		<summary type="html">&lt;p&gt;Ianscottknight: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;dockopt&#039;&#039; allows the generation of many different docking configurations which are then evaluated &amp;amp; analyzed in parallel using a specified job scheduler (e.g. Slurm). If you are a Shoichet Lab user, please see a special section for you, below.&lt;br /&gt;
&lt;br /&gt;
To use DOCK 3.8, you must first license it and install it.&lt;br /&gt;
[[DOCK 3.8:How to install pydock3]]&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;new&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
Prepare &#039;&#039;rec.pdb&#039;&#039; and &#039;&#039;xtal-lig.pdb&#039;&#039; as described in Bender et al., 2021: https://pubmed.ncbi.nlm.nih.gov/34561691/&lt;br /&gt;
Alternatively, download pre-prepared sample files from dudez2022.docking.org.&lt;br /&gt;
&lt;br /&gt;
Be sure that you are in the directory containing the required input files: &lt;br /&gt;
* &#039;&#039;rec.pdb&#039;&#039; &lt;br /&gt;
* &#039;&#039;xtal-lig.pdb&#039;&#039;&lt;br /&gt;
* &#039;&#039;positives.tgz&#039;&#039;&lt;br /&gt;
* &#039;&#039;negatives.tgz&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Note the inclusion of &#039;&#039;positives.tgz&#039;&#039; and &#039;&#039;negatives.tgz&#039;&#039;. Each of these is a tarball of .db2 files. Each tarball represents a binary class for the binary classification task that the docking model is trained on. The positive class contains the molecules that you want the docking model to preferentially assign &#039;&#039;favorable&#039;&#039; docking scores to. The negative class contains the molecules that you want the docking model to preferentially assign &#039;&#039;unfavorable&#039;&#039; scores to. The most common strategy is to use a set of known ligands as the positive class and a larger set of property-matched decoys for the negative class (a decoy-to-active ratio of 50:1 is standard), but other strategies are supported. For example, to create a docking model that preferentially assigns favorable scores to agonists over antagonists, a set of agonists can be used as the positive class and a set of antagonists can be used for the negative class.&lt;br /&gt;
&lt;br /&gt;
You will need to build the molecules yourself (see: https://tldr.docking.org/start/build3d38). Each tarball should contain only DB2 files. For example, if one has a directory &#039;&#039;actives/&#039;&#039; of active molecules to use as the positives for a DockOpt job, &#039;&#039;positives.tgz&#039;&#039; can be created as follows:&lt;br /&gt;
&lt;br /&gt;
 cd actives/&lt;br /&gt;
 tar -czf positives.tgz *.db2*&lt;br /&gt;
&lt;br /&gt;
Similarly, for a directory of corresponding decoy molecules &#039;&#039;decoys/&#039;&#039;, one can create &#039;&#039;negatives.tgz&#039;&#039; as follows:&lt;br /&gt;
&lt;br /&gt;
 cd decoys/&lt;br /&gt;
 tar -czf negatives.tgz *.db2*&lt;br /&gt;
&lt;br /&gt;
To create the file structure for your dockopt job, simply type&lt;br /&gt;
&lt;br /&gt;
 pydock3 dockopt - new&lt;br /&gt;
&lt;br /&gt;
By default, the job directory is named &#039;&#039;dockopt_job&#039;&#039;. To specify a different name, use the &amp;quot;--job_dir_name&amp;quot; flag. E.g.:&lt;br /&gt;
&lt;br /&gt;
 pydock3 dockopt - new --job_dir_name=dockopt_job_2&lt;br /&gt;
&lt;br /&gt;
The job directory contains two sub-directories: &lt;br /&gt;
# &#039;&#039;working&#039;&#039;: input files, intermediate blaster files, sub-directories for individual blastermaster subroutines&lt;br /&gt;
# &#039;&#039;retrodock_jobs&#039;&#039;: individual retrodock jobs for each docking configuration&lt;br /&gt;
&lt;br /&gt;
The key difference between the working directories of &#039;&#039;blastermaster&#039;&#039; and &#039;&#039;dockopt&#039;&#039; is that the working directory of &#039;&#039;dockopt&#039;&#039; may contain multiple variants of the blaster files (suffixed by a number, e.g. &amp;quot;box_1&amp;quot;). These variant files are used to create the different docking configurations specified by the multi-valued entries of &#039;&#039;dockopt_config.yaml&#039;&#039;. They are created efficiently, such that the same variant used in multiple docking configurations is not created more than once. &lt;br /&gt;
&lt;br /&gt;
If your current working directory contains any of the following files, then they will be automatically copied into the working directory within the created job directory. This feature is intended to simplify the process of configuring the dockopt job.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;rec.pdb&#039;&#039;&lt;br /&gt;
* &#039;&#039;xtal-lig.pdb&#039;&#039;&lt;br /&gt;
* &#039;&#039;rec.crg.pdb&#039;&#039;&lt;br /&gt;
* &#039;&#039;reduce_wwPDB_het_dict.txt&#039;&#039;&lt;br /&gt;
* &#039;&#039;filt.params&#039;&#039;&lt;br /&gt;
* &#039;&#039;radii&#039;&#039;&lt;br /&gt;
* &#039;&#039;amb.crg.oxt&#039;&#039;&lt;br /&gt;
* &#039;&#039;vdw.siz&#039;&#039;&lt;br /&gt;
* &#039;&#039;delphi.def&#039;&#039;&lt;br /&gt;
* &#039;&#039;vdw.parms.amb.mindock&#039;&#039;&lt;br /&gt;
* &#039;&#039;prot.table.ambcrg.ambH&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Only the following are required; if the others are not detected, default or generated versions of them will be used instead.&lt;br /&gt;
* &#039;&#039;rec.pdb&#039;&#039; or &#039;&#039;rec.crg.pdb&#039;&#039; (either is required, but not both; if both are present, only &#039;&#039;rec.crg.pdb&#039;&#039; is used)&lt;br /&gt;
* &#039;&#039;xtal-lig.pdb&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
If you would like to use files not present in your current working directory, copy them into your job&#039;s working directory, e.g.:&lt;br /&gt;
 cp &amp;lt;FILE_PATH&amp;gt; &amp;lt;JOB_DIR_NAME&amp;gt;/working/&lt;br /&gt;
&lt;br /&gt;
Finally, configure the &#039;&#039;dockopt_config.yaml&#039;&#039; file in the job directory to your specifications. The parameters in this file govern the behavior of dockopt.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039; The &#039;&#039;dockopt_config.yaml&#039;&#039; file differs from the &#039;&#039;blastermaster_config.yaml&#039;&#039; file in that every parameter of the former may accept either a single value or a &#039;&#039;list of comma-separated values&#039;&#039;, which indicates a pool of values to attempt for that parameter. Multiple such multi-valued parameters may be provided, and all unique resultant docking configurations will be attempted. &lt;br /&gt;
&lt;br /&gt;
Single-valued YAML line format:&lt;br /&gt;
&lt;br /&gt;
 distance_to_surface: 1.0&lt;br /&gt;
&lt;br /&gt;
Multi-valued YAML line format:&lt;br /&gt;
&lt;br /&gt;
 distance_to_surface: [1.0, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9]&lt;br /&gt;
&lt;br /&gt;
== Environmental variables ==&lt;br /&gt;
&lt;br /&gt;
=== TMPDIR ===&lt;br /&gt;
&lt;br /&gt;
Designate where temporary job files should be placed. E.g.:&lt;br /&gt;
&lt;br /&gt;
 export TMPDIR=/scratch&lt;br /&gt;
&lt;br /&gt;
==== Note for UCSF researchers ====&lt;br /&gt;
&lt;br /&gt;
On the Wynton cluster, &#039;&#039;/scratch&#039;&#039; only exists on development nodes (not log nodes). Therefore, we recommend running on development nodes (see: https://wynton.ucsf.edu/hpc/get-started/development-prototyping.html). E.g.:&lt;br /&gt;
&lt;br /&gt;
 ssh dev1&lt;br /&gt;
 export TMPDIR=/scratch&lt;br /&gt;
&lt;br /&gt;
If a log node must be used, then &#039;&#039;/wynton/scratch&#039;&#039; may be used:&lt;br /&gt;
&lt;br /&gt;
 ssh log1&lt;br /&gt;
 export TMPDIR=/wynton/scratch&lt;br /&gt;
&lt;br /&gt;
=== job scheduler environmental variables ===&lt;br /&gt;
&lt;br /&gt;
In order for &#039;&#039;dockopt&#039;&#039; to know which scheduler it should use, please set the following environmental variables according to the job scheduler your cluster uses.&lt;br /&gt;
&lt;br /&gt;
==== Slurm ====&lt;br /&gt;
&lt;br /&gt;
E.g., on the UCSF Shoichet Lab Gimel cluster (on any node other than &#039;gimel&#039; itself, such as &#039;gimel5&#039;):&lt;br /&gt;
&lt;br /&gt;
 export SBATCH_EXEC=/usr/bin/sbatch&lt;br /&gt;
 export SQUEUE_EXEC=/usr/bin/squeue&lt;br /&gt;
&lt;br /&gt;
==== SGE ====&lt;br /&gt;
&lt;br /&gt;
On most clusters using SGE the following should be correct:&lt;br /&gt;
&lt;br /&gt;
 export QSTAT_EXEC=/opt/sge/bin/lx-amd64/qstat&lt;br /&gt;
 export QSUB_EXEC=/opt/sge/bin/lx-amd64/qsub&lt;br /&gt;
 export SGE_SETTINGS=/opt/sge/default/common/settings.sh&lt;br /&gt;
&lt;br /&gt;
===== Note for UCSF researchers =====&lt;br /&gt;
&lt;br /&gt;
The following is necessary on the UCSF Wynton cluster:&lt;br /&gt;
&lt;br /&gt;
 export QSTAT_EXEC=/opt/sge/bin/lx-amd64/qstat&lt;br /&gt;
 export QSUB_EXEC=/opt/sge/bin/lx-amd64/qsub&lt;br /&gt;
 export SGE_SETTINGS=/opt/sge/wynton/common/settings.sh&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;run&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
Once your job has been configured to your liking, navigate to the job directory and run &#039;&#039;dockopt&#039;&#039;:&lt;br /&gt;
 cd &amp;lt;JOB_DIR_NAME&amp;gt;&lt;br /&gt;
 pydock3 dockopt - run &amp;lt;JOB_SCHEDULER_NAME&amp;gt; [--retrodock_job_timeout_minutes=None] [--retrodock_job_max_reattempts=0] [--extra_submission_cmd_params_str=None] [--export_negatives_mol2=False]&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;JOB_SCHEDULER_NAME&amp;gt; is one of:&lt;br /&gt;
* &#039;&#039;sge&#039;&#039;&lt;br /&gt;
* &#039;&#039;slurm&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This will execute the many dockopt subroutines in sequence. Once this is done, the retrodock jobs for all created docking configurations are run in parallel via the scheduler. The state of the program will be printed to standard output as it runs.&lt;br /&gt;
&lt;br /&gt;
Once the dockopt job is complete, the following files will be generated in the job directory:&lt;br /&gt;
* &#039;&#039;report.html&#039;&#039;: contains (1) a histogram comparing the performance of all tested docking configurations against the performance distribution of a random classifier, showing whether the tested configurations are significantly better than what a random classifier could produce (since many configurations are tested, a Bonferroni correction is applied to the significance threshold, dividing p=0.01 by the number of tested configurations); (2) ROC, charge, and energy plots of the top docking configurations, comparing positive-class molecules vs. negative-class molecules; (3) box plots of enrichment for every multi-valued config parameter; and (4) heatmaps of enrichment for every pair of multi-valued config parameters.&lt;br /&gt;
* &#039;&#039;results.csv&#039;&#039;: parameter values, criterion values, and other information about each docking configuration.&lt;br /&gt;
&lt;br /&gt;
In addition, the best-performing retrodock jobs will each be copied to their own sub-directory within &#039;&#039;best_retrodock_jobs/&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
Within &#039;&#039;best_retrodock_jobs/&#039;&#039;, there are the following files and sub-directories:&lt;br /&gt;
* &#039;&#039;dockfiles/&#039;&#039;: parameter files and the INDOCK file for the given docking configuration&lt;br /&gt;
* &#039;&#039;output/&#039;&#039;: contains: &lt;br /&gt;
** joblist&lt;br /&gt;
** sub-directories &#039;&#039;1/&#039;&#039; for positives and &#039;&#039;2/&#039;&#039; for negatives (the former containing OUTDOCK and test.mol2 files, the latter containing just OUTDOCK)&lt;br /&gt;
** log files for the retrodock jobs&lt;br /&gt;
* &#039;&#039;roc.png&#039;&#039;: the ROC enrichment curve (log-scaled x-axis) for the given docking configuration&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039; by default, a mol2 file is exported only for positives (&#039;&#039;output/1/&#039;&#039;), not for negatives (&#039;&#039;output/2/&#039;&#039;), in order to prevent disk space issues.&lt;br /&gt;
&lt;br /&gt;
== Note for UCSF Shoichet Lab members ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;pydock3&#039;&#039; is already installed on the following clusters. You can source the provided Python environment scripts to expose the &#039;&#039;pydock3&#039;&#039; executable and declare default environmental variables:&lt;br /&gt;
&lt;br /&gt;
=== Wynton ===&lt;br /&gt;
&lt;br /&gt;
  source /wynton/group/bks/soft/python_envs/env.sh&lt;br /&gt;
&lt;br /&gt;
=== Gimel ===&lt;br /&gt;
&lt;br /&gt;
Only nodes other than &#039;&#039;gimel&#039;&#039; itself are supported, e.g., &#039;&#039;gimel5&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
 ssh gimel5&lt;br /&gt;
 source /nfs/soft/ian/env.sh&lt;/div&gt;</summary>
		<author><name>Ianscottknight</name></author>
	</entry>
	<entry>
		<id>http://wiki.docking.org/index.php?title=Dockopt_(pydock3_script)&amp;diff=15416</id>
		<title>Dockopt (pydock3 script)</title>
		<link rel="alternate" type="text/html" href="http://wiki.docking.org/index.php?title=Dockopt_(pydock3_script)&amp;diff=15416"/>
		<updated>2023-06-05T01:27:51Z</updated>

		<summary type="html">&lt;p&gt;Ianscottknight: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;dockopt&#039;&#039; allows the generation of many different docking configurations which are then evaluated &amp;amp; analyzed in parallel using a specified job scheduler (e.g. Slurm). If you are a Shoichet Lab user, please see a special section for you, below.&lt;br /&gt;
&lt;br /&gt;
To use DOCK 3.8, you must first license it and install it.&lt;br /&gt;
[[DOCK 3.8:How to install pydock3]]&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;new&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
Prepare &#039;&#039;rec.pdb&#039;&#039; and &#039;&#039;xtal-lig.pdb&#039;&#039; as described in Bender et al., 2021 (https://pubmed.ncbi.nlm.nih.gov/34561691/),&lt;br /&gt;
or download pre-prepared sample files from dudez2022.docking.org.&lt;br /&gt;
&lt;br /&gt;
Be sure that you are in the directory containing the required input files: &lt;br /&gt;
* &#039;&#039;rec.pdb&#039;&#039; &lt;br /&gt;
* &#039;&#039;xtal-lig.pdb&#039;&#039;&lt;br /&gt;
* &#039;&#039;positives.tgz&#039;&#039;&lt;br /&gt;
* &#039;&#039;negatives.tgz&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Note the inclusion of &#039;&#039;positives.tgz&#039;&#039; and &#039;&#039;negatives.tgz&#039;&#039;. Each of these is a tarball of a directory containing .db2 files, so you need to build the molecules yourself (see: https://tldr.docking.org/start/build3d38). Each tarball should contain only DB2 files. For example, if one has a directory of active molecules &#039;&#039;actives/&#039;&#039; to use as the positives for a DockOpt job, &#039;&#039;positives.tgz&#039;&#039; can be created as follows:&lt;br /&gt;
&lt;br /&gt;
 cd actives/&lt;br /&gt;
 tar -czf positives.tgz *.db2*&lt;br /&gt;
&lt;br /&gt;
Similarly, for a directory of corresponding decoy molecules &#039;&#039;decoys/&#039;&#039;, one can create &#039;&#039;negatives.tgz&#039;&#039; as follows:&lt;br /&gt;
&lt;br /&gt;
 cd decoys/&lt;br /&gt;
 tar -czf negatives.tgz *.db2*&lt;br /&gt;
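&lt;br /&gt;
To sanity-check a tarball before launching a job, one can list its contents and confirm that only DB2 files are present (a quick verification sketch):&lt;br /&gt;
&lt;br /&gt;
 tar -tzf positives.tgz | head&lt;br /&gt;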
&lt;br /&gt;
To create the file structure for your dockopt job, simply type&lt;br /&gt;
&lt;br /&gt;
 pydock3 dockopt - new&lt;br /&gt;
&lt;br /&gt;
By default, the job directory is named &#039;&#039;dockopt_job&#039;&#039;. To specify a different name, use the &amp;quot;--job_dir_name&amp;quot; flag. E.g.:&lt;br /&gt;
&lt;br /&gt;
 pydock3 dockopt - new --job_dir_name=dockopt_job_2&lt;br /&gt;
&lt;br /&gt;
The job directory contains two sub-directories: &lt;br /&gt;
# &#039;&#039;working&#039;&#039;: input files, intermediate blaster files, sub-directories for individual blastermaster subroutines&lt;br /&gt;
# &#039;&#039;retrodock_jobs&#039;&#039;: individual retrodock jobs for each docking configuration&lt;br /&gt;
&lt;br /&gt;
The key difference between the working directories of &#039;&#039;blastermaster&#039;&#039; and &#039;&#039;dockopt&#039;&#039; is that the working directory of &#039;&#039;dockopt&#039;&#039; may contain multiple variants of the blaster files (suffixed by a number, e.g. &amp;quot;box_1&amp;quot;). These variant files are used to create the different docking configurations specified by the multi-valued entries of &#039;&#039;dockopt_config.yaml&#039;&#039;. They are created efficiently, such that the same variant used in multiple docking configurations is not created more than once. &lt;br /&gt;
&lt;br /&gt;
If your current working directory contains any of the following files, then they will be automatically copied into the working directory within the created job directory. This feature is intended to simplify the process of configuring the dockopt job.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;rec.pdb&#039;&#039;&lt;br /&gt;
* &#039;&#039;xtal-lig.pdb&#039;&#039;&lt;br /&gt;
* &#039;&#039;rec.crg.pdb&#039;&#039;&lt;br /&gt;
* &#039;&#039;reduce_wwPDB_het_dict.txt&#039;&#039;&lt;br /&gt;
* &#039;&#039;filt.params&#039;&#039;&lt;br /&gt;
* &#039;&#039;radii&#039;&#039;&lt;br /&gt;
* &#039;&#039;amb.crg.oxt&#039;&#039;&lt;br /&gt;
* &#039;&#039;vdw.siz&#039;&#039;&lt;br /&gt;
* &#039;&#039;delphi.def&#039;&#039;&lt;br /&gt;
* &#039;&#039;vdw.parms.amb.mindock&#039;&#039;&lt;br /&gt;
* &#039;&#039;prot.table.ambcrg.ambH&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Only the following are required; default or generated versions of the others will be used if they are not detected.&lt;br /&gt;
* &#039;&#039;rec.pdb&#039;&#039; or &#039;&#039;rec.crg.pdb&#039;&#039;. At least one is required; if both are present, only &#039;&#039;rec.crg.pdb&#039;&#039; is used.&lt;br /&gt;
* &#039;&#039;xtal-lig.pdb&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
If you would like to use files not present in your current working directory, copy them into your job&#039;s working directory, e.g.:&lt;br /&gt;
 cp &amp;lt;FILE_PATH&amp;gt; &amp;lt;JOB_DIR_NAME&amp;gt;/working/&lt;br /&gt;
&lt;br /&gt;
Finally, configure the &#039;&#039;dockopt_config.yaml&#039;&#039; file in the job directory to your specifications. The parameters in this file govern the behavior of dockopt.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039; The &#039;&#039;dockopt_config.yaml&#039;&#039; file differs from the &#039;&#039;blastermaster_config.yaml&#039;&#039; file in that every parameter of the former may accept either a single value or a &#039;&#039;list of comma-separated values&#039;&#039;, which indicates a pool of values to attempt for that parameter. Multiple such multi-valued parameters may be provided, and all unique resultant docking configurations will be attempted. &lt;br /&gt;
&lt;br /&gt;
Single-valued YAML line format:&lt;br /&gt;
&lt;br /&gt;
 distance_to_surface: 1.0&lt;br /&gt;
&lt;br /&gt;
Multi-valued YAML line format:&lt;br /&gt;
&lt;br /&gt;
 distance_to_surface: [1.0, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9]&lt;br /&gt;
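&lt;br /&gt;
As a sketch of how multi-valued parameters combine (the second parameter name here is hypothetical), providing two multi-valued entries such as&lt;br /&gt;
&lt;br /&gt;
 distance_to_surface: [1.0, 1.5]&lt;br /&gt;
 penetration_score_cutoff: [-10, -20]&lt;br /&gt;
&lt;br /&gt;
would yield 2 x 2 = 4 unique docking configurations, one per combination of values.&lt;br /&gt;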
&lt;br /&gt;
== Environmental variables ==&lt;br /&gt;
&lt;br /&gt;
=== TMPDIR ===&lt;br /&gt;
&lt;br /&gt;
Designate where temporary job files should be placed. E.g.:&lt;br /&gt;
&lt;br /&gt;
 export TMPDIR=/scratch&lt;br /&gt;
&lt;br /&gt;
==== Note for UCSF researchers ====&lt;br /&gt;
&lt;br /&gt;
On the Wynton cluster, &#039;&#039;/scratch&#039;&#039; only exists on development nodes (not log nodes). Therefore, we recommend running on development nodes (see: https://wynton.ucsf.edu/hpc/get-started/development-prototyping.html). E.g.:&lt;br /&gt;
&lt;br /&gt;
 ssh dev1&lt;br /&gt;
 export TMPDIR=/scratch&lt;br /&gt;
&lt;br /&gt;
If a log node must be used, then &#039;&#039;/wynton/scratch&#039;&#039; may be used:&lt;br /&gt;
&lt;br /&gt;
 ssh log1&lt;br /&gt;
 export TMPDIR=/wynton/scratch&lt;br /&gt;
&lt;br /&gt;
=== job scheduler environmental variables ===&lt;br /&gt;
&lt;br /&gt;
In order for &#039;&#039;dockopt&#039;&#039; to know which scheduler it should use, configure the following environmental variables according to the job scheduler available to you.&lt;br /&gt;
&lt;br /&gt;
==== Slurm ====&lt;br /&gt;
&lt;br /&gt;
E.g., on the UCSF Shoichet Lab Gimel cluster (on any node other than &#039;gimel&#039; itself, such as &#039;gimel5&#039;):&lt;br /&gt;
&lt;br /&gt;
 export SBATCH_EXEC=/usr/bin/sbatch&lt;br /&gt;
 export SQUEUE_EXEC=/usr/bin/squeue&lt;br /&gt;
&lt;br /&gt;
==== SGE ====&lt;br /&gt;
&lt;br /&gt;
On most clusters using SGE the following should be correct:&lt;br /&gt;
&lt;br /&gt;
 export QSTAT_EXEC=/opt/sge/bin/lx-amd64/qstat&lt;br /&gt;
 export QSUB_EXEC=/opt/sge/bin/lx-amd64/qsub&lt;br /&gt;
 export SGE_SETTINGS=/opt/sge/default/common/settings.sh&lt;br /&gt;
&lt;br /&gt;
===== Note for UCSF researchers =====&lt;br /&gt;
&lt;br /&gt;
The following is necessary on the UCSF Wynton cluster:&lt;br /&gt;
&lt;br /&gt;
 export QSTAT_EXEC=/opt/sge/bin/lx-amd64/qstat&lt;br /&gt;
 export QSUB_EXEC=/opt/sge/bin/lx-amd64/qsub&lt;br /&gt;
 export SGE_SETTINGS=/opt/sge/wynton/common/settings.sh&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;run&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
Once your job has been configured to your liking, navigate to the job directory and run &#039;&#039;dockopt&#039;&#039;:&lt;br /&gt;
 cd &amp;lt;JOB_DIR_NAME&amp;gt;&lt;br /&gt;
 pydock3 dockopt - run &amp;lt;JOB_SCHEDULER_NAME&amp;gt; [--retrodock_job_timeout_minutes=None] [--retrodock_job_max_reattempts=0] [--extra_submission_cmd_params_str=None] [--export_negatives_mol2=False]&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;JOB_SCHEDULER_NAME&amp;gt; is one of:&lt;br /&gt;
* &#039;&#039;sge&#039;&#039;&lt;br /&gt;
* &#039;&#039;slurm&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This executes the dockopt subroutines in sequence. Once they finish, the retrodock jobs for all created docking configurations run in parallel via the scheduler. The program&#039;s state is printed to standard output as it runs.&lt;br /&gt;
&lt;br /&gt;
Once the dockopt job is complete, the following files will be generated in the job directory:&lt;br /&gt;
* &#039;&#039;report.html&#039;&#039;: contains (1) a histogram of the performance of all tested docking configurations compared against a distribution of random-classifier performance, showing whether the tested configurations are significantly better than what a random classifier can produce. Because many configurations are tested, a Bonferroni correction is applied to the significance threshold, dividing p=0.01 by the number of tested configurations; (2) ROC, charge, and energy plots of the top docking configurations, comparing positive-class vs. negative-class molecules; (3) box plots of enrichment for every multi-valued config parameter; and (4) heatmaps of enrichment for every pair of multi-valued config parameters.&lt;br /&gt;
* &#039;&#039;results.csv&#039;&#039;: parameter values, criterion values, and other information about each docking configuration.&lt;br /&gt;
&lt;br /&gt;
In addition, the best-performing retrodock jobs will each be copied to their own sub-directory within &#039;&#039;best_retrodock_jobs/&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
Within &#039;&#039;best_retrodock_jobs/&#039;&#039;, there are the following files and sub-directories:&lt;br /&gt;
* &#039;&#039;dockfiles/&#039;&#039;: parameter files and the INDOCK file for the given docking configuration&lt;br /&gt;
* &#039;&#039;output/&#039;&#039;: contains: &lt;br /&gt;
** joblist&lt;br /&gt;
** sub-directories &#039;&#039;1/&#039;&#039; for positives and &#039;&#039;2/&#039;&#039; for negatives (the former containing OUTDOCK and test.mol2 files, the latter containing just OUTDOCK)&lt;br /&gt;
** log files for the retrodock jobs&lt;br /&gt;
* &#039;&#039;roc.png&#039;&#039;: the ROC enrichment curve (log-scaled x-axis) for the given docking configuration&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039; by default, a mol2 file is exported only for positives (&#039;&#039;output/1/&#039;&#039;), not for negatives (&#039;&#039;output/2/&#039;&#039;), in order to prevent disk space issues.&lt;br /&gt;
&lt;br /&gt;
== Note for UCSF Shoichet Lab members ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;pydock3&#039;&#039; is already installed on the following clusters. You can source the provided Python environment scripts to expose the &#039;&#039;pydock3&#039;&#039; executable and declare default environmental variables:&lt;br /&gt;
&lt;br /&gt;
=== Wynton ===&lt;br /&gt;
&lt;br /&gt;
  source /wynton/group/bks/soft/python_envs/env.sh&lt;br /&gt;
&lt;br /&gt;
=== Gimel ===&lt;br /&gt;
&lt;br /&gt;
Only nodes other than &#039;&#039;gimel&#039;&#039; itself are supported, e.g., &#039;&#039;gimel5&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
 ssh gimel5&lt;br /&gt;
 source /nfs/soft/ian/env.sh&lt;/div&gt;</summary>
		<author><name>Ianscottknight</name></author>
	</entry>
	<entry>
		<id>http://wiki.docking.org/index.php?title=Dockopt_(pydock3_script)&amp;diff=15030</id>
		<title>Dockopt (pydock3 script)</title>
		<link rel="alternate" type="text/html" href="http://wiki.docking.org/index.php?title=Dockopt_(pydock3_script)&amp;diff=15030"/>
		<updated>2022-12-06T04:30:43Z</updated>

		<summary type="html">&lt;p&gt;Ianscottknight: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;dockopt&#039;&#039; allows the generation of many different docking configurations which are then evaluated &amp;amp; analyzed in parallel using a specified job scheduler (e.g. Slurm).&lt;br /&gt;
&lt;br /&gt;
== Note for UCSF Shoichet Lab members ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;pydock3&#039;&#039; is already installed on the following clusters. You can source the provided Python environment scripts to expose the &#039;&#039;pydock3&#039;&#039; executable:&lt;br /&gt;
&lt;br /&gt;
=== Wynton ===&lt;br /&gt;
&lt;br /&gt;
  source /wynton/group/bks/soft/python_envs/python3.8.5.sh&lt;br /&gt;
&lt;br /&gt;
=== Gimel ===&lt;br /&gt;
&lt;br /&gt;
Only nodes other than &#039;&#039;gimel&#039;&#039; itself are supported, e.g., &#039;&#039;gimel5&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
 ssh gimel5&lt;br /&gt;
 source /nfs/soft/ian/python3.8.5.sh&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;init&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
Be sure that you are in the directory containing the required input files: &lt;br /&gt;
* &#039;&#039;rec.pdb&#039;&#039; &lt;br /&gt;
* &#039;&#039;xtal-lig.pdb&#039;&#039;&lt;br /&gt;
* &#039;&#039;actives.tgz&#039;&#039;&lt;br /&gt;
* &#039;&#039;decoys.tgz&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Note the inclusion of &#039;&#039;actives.tgz&#039;&#039; and &#039;&#039;decoys.tgz&#039;&#039;. Each of these is a tarball of a directory containing .db2 files, so you need to build the molecules yourself.&lt;br /&gt;
&lt;br /&gt;
To create the file structure for your dockopt job, simply type&lt;br /&gt;
&lt;br /&gt;
 pydock3 dockopt - init&lt;br /&gt;
&lt;br /&gt;
By default, the job directory is named &#039;&#039;dockopt_job&#039;&#039;. To specify a different name, use the &amp;quot;--job_dir_name&amp;quot; flag. E.g.:&lt;br /&gt;
&lt;br /&gt;
 pydock3 dockopt - init --job_dir_name=dockopt_job_2&lt;br /&gt;
&lt;br /&gt;
The job directory contains two sub-directories: &lt;br /&gt;
# &#039;&#039;working&#039;&#039;: input files, intermediate blaster files, sub-directories for individual blastermaster subroutines&lt;br /&gt;
# &#039;&#039;retrodock_jobs&#039;&#039;: individual retrodock jobs for each docking configuration&lt;br /&gt;
&lt;br /&gt;
The key difference between the working directories of &#039;&#039;blastermaster&#039;&#039; and &#039;&#039;dockopt&#039;&#039; is that the working directory of &#039;&#039;dockopt&#039;&#039; may contain multiple variants of the blaster files (prefixed by a number, e.g. &amp;quot;1_box&amp;quot;). These variant files are used to create the different docking configurations specified by the multi-valued entries of &#039;&#039;dockopt_config.yaml&#039;&#039;. They are created efficiently, such that the same variant used in multiple docking configurations is not created more than once. &lt;br /&gt;
&lt;br /&gt;
If your current working directory contains any of the following files, then they will be automatically copied into the working directory within the created job directory. This feature is intended to simplify the process of configuring the dockopt job.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;rec.pdb&#039;&#039;&lt;br /&gt;
* &#039;&#039;xtal-lig.pdb&#039;&#039;&lt;br /&gt;
* &#039;&#039;rec.crg.pdb&#039;&#039;&lt;br /&gt;
* &#039;&#039;reduce_wwPDB_het_dict.txt&#039;&#039;&lt;br /&gt;
* &#039;&#039;filt.params&#039;&#039;&lt;br /&gt;
* &#039;&#039;radii&#039;&#039;&lt;br /&gt;
* &#039;&#039;amb.crg.oxt&#039;&#039;&lt;br /&gt;
* &#039;&#039;vdw.siz&#039;&#039;&lt;br /&gt;
* &#039;&#039;delphi.def&#039;&#039;&lt;br /&gt;
* &#039;&#039;vdw.parms.amb.mindock&#039;&#039;&lt;br /&gt;
* &#039;&#039;prot.table.ambcrg.ambH&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Only the following are required; default or generated versions of the others will be used if they are not detected.&lt;br /&gt;
* &#039;&#039;rec.pdb&#039;&#039;&lt;br /&gt;
* &#039;&#039;xtal-lig.pdb&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
If you would like to use files not present in your current working directory, copy them into your job&#039;s working directory, e.g.:&lt;br /&gt;
 cp &amp;lt;FILE_PATH&amp;gt; &amp;lt;JOB_DIR_NAME&amp;gt;/working/&lt;br /&gt;
&lt;br /&gt;
Finally, configure the &#039;&#039;dockopt_config.yaml&#039;&#039; file in the job directory to your specifications. The parameters in this file govern the behavior of dockopt.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039; The &#039;&#039;dockopt_config.yaml&#039;&#039; file differs from the &#039;&#039;blastermaster_config.yaml&#039;&#039; file in that every parameter of the former may accept either a single value or a &#039;&#039;list of comma-separated values&#039;&#039;, which indicates a pool of values to attempt for that parameter. Multiple such multi-valued parameters may be provided, and all unique resultant docking configurations will be attempted. &lt;br /&gt;
&lt;br /&gt;
Single-valued YAML line format:&lt;br /&gt;
&lt;br /&gt;
 distance_to_surface: 1.0&lt;br /&gt;
&lt;br /&gt;
Multi-valued YAML line format:&lt;br /&gt;
&lt;br /&gt;
 distance_to_surface: [1.0, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9]&lt;br /&gt;
&lt;br /&gt;
== Environmental variables ==&lt;br /&gt;
&lt;br /&gt;
=== TMPDIR ===&lt;br /&gt;
&lt;br /&gt;
Designate where temporary job files should be placed. E.g.:&lt;br /&gt;
&lt;br /&gt;
 export TMPDIR=/scratch&lt;br /&gt;
&lt;br /&gt;
==== Note for UCSF researchers ====&lt;br /&gt;
&lt;br /&gt;
On the Wynton cluster, &#039;&#039;/scratch&#039;&#039; only exists on dev nodes (not log nodes). However, &#039;&#039;/wynton/scratch&#039;&#039; exists on both log nodes and dev nodes. Therefore, we recommend:&lt;br /&gt;
 export TMPDIR=/wynton/scratch&lt;br /&gt;
&lt;br /&gt;
=== job scheduler environmental variables ===&lt;br /&gt;
&lt;br /&gt;
In order for &#039;&#039;dockopt&#039;&#039; to know which scheduler it should use, configure the following environmental variables according to the job scheduler available to you.&lt;br /&gt;
&lt;br /&gt;
==== Slurm ====&lt;br /&gt;
&lt;br /&gt;
E.g., on the UCSF Shoichet Lab Gimel cluster (on any node other than &#039;gimel&#039; itself, such as &#039;gimel5&#039;):&lt;br /&gt;
&lt;br /&gt;
 export SBATCH_EXEC=/usr/bin/sbatch&lt;br /&gt;
 export SQUEUE_EXEC=/usr/bin/squeue&lt;br /&gt;
&lt;br /&gt;
==== SGE ====&lt;br /&gt;
&lt;br /&gt;
On most clusters using SGE the following should be correct:&lt;br /&gt;
&lt;br /&gt;
 export QSTAT_EXEC=/opt/sge/bin/lx-amd64/qstat&lt;br /&gt;
 export QSUB_EXEC=/opt/sge/bin/lx-amd64/qsub&lt;br /&gt;
 export SGE_SETTINGS=/opt/sge/default/common/settings.sh&lt;br /&gt;
&lt;br /&gt;
===== Note for UCSF researchers =====&lt;br /&gt;
&lt;br /&gt;
The following is necessary on the UCSF Wynton cluster:&lt;br /&gt;
&lt;br /&gt;
 export QSTAT_EXEC=/opt/sge/bin/lx-amd64/qstat&lt;br /&gt;
 export QSUB_EXEC=/opt/sge/bin/lx-amd64/qsub&lt;br /&gt;
 export SGE_SETTINGS=/opt/sge/wynton/common/settings.sh&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;run&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
Once your job has been configured to your liking, navigate to the job directory and run &#039;&#039;dockopt&#039;&#039;:&lt;br /&gt;
 cd &amp;lt;JOB_DIR_NAME&amp;gt;&lt;br /&gt;
 pydock3 dockopt - run &amp;lt;JOB_SCHEDULER_NAME&amp;gt; [--retrodock_job_timeout_minutes=None] [--retrodock_job_max_reattempts=0]&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;JOB_SCHEDULER_NAME&amp;gt; is one of:&lt;br /&gt;
* &#039;&#039;sge&#039;&#039;&lt;br /&gt;
* &#039;&#039;slurm&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This executes the dockopt subroutines in sequence, except for the retrodock jobs for each docking configuration, which run in parallel via the scheduler. The program&#039;s state is printed to standard output as it runs.&lt;br /&gt;
&lt;br /&gt;
Once the dockopt job is complete, the following files will be generated in the job directory:&lt;br /&gt;
* &#039;&#039;dockopt_job_report.pdf&#039;&#039;: contains (1) roc.png of best retrodock job, (2) box plots of enrichment for every multi-valued config parameter, and (3) heatmaps of enrichment for every pair of multi-valued config parameters&lt;br /&gt;
* &#039;&#039;dockopt_job_results.csv&#039;&#039;: enrichment metrics for each docking configuration&lt;br /&gt;
&lt;br /&gt;
In addition, the best retrodock job will be copied to its own sub-directory &#039;&#039;best_retrodock_job/&#039;&#039;. &lt;br /&gt;
&lt;br /&gt;
Within &#039;&#039;best_retrodock_job/&#039;&#039;, there are the following files and sub-directories:&lt;br /&gt;
* &#039;&#039;dockfiles/&#039;&#039;: parameter files and the INDOCK file for the given docking configuration&lt;br /&gt;
* &#039;&#039;output/&#039;&#039;: contains: &lt;br /&gt;
** joblist&lt;br /&gt;
** sub-directories &#039;&#039;1/&#039;&#039; for actives and &#039;&#039;2/&#039;&#039; for decoys (the former containing OUTDOCK and test.mol2 files, the latter containing just OUTDOCK)&lt;br /&gt;
** log files for the retrodock jobs&lt;br /&gt;
* &#039;&#039;roc.png&#039;&#039;: the ROC enrichment curve (log-scaled x-axis) for the given docking configuration&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039; by default, a mol2 file is exported only for actives (&#039;&#039;output/1/&#039;&#039;), not for decoys (&#039;&#039;output/2/&#039;&#039;), in order to prevent disk space issues.&lt;/div&gt;</summary>
		<author><name>Ianscottknight</name></author>
	</entry>
	<entry>
		<id>http://wiki.docking.org/index.php?title=Dockopt_(pydock3_script)&amp;diff=14993</id>
		<title>Dockopt (pydock3 script)</title>
		<link rel="alternate" type="text/html" href="http://wiki.docking.org/index.php?title=Dockopt_(pydock3_script)&amp;diff=14993"/>
		<updated>2022-11-22T03:05:39Z</updated>

		<summary type="html">&lt;p&gt;Ianscottknight: /* Environmental variables */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;dockopt&#039;&#039; allows the generation of many different docking configurations which are then evaluated &amp;amp; analyzed in parallel using a specified job scheduler (e.g. Slurm).&lt;br /&gt;
&lt;br /&gt;
== Note for UCSF Shoichet Lab members ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;pydock3&#039;&#039; is already installed on the following clusters. You can source the provided Python environment scripts to expose the &#039;&#039;pydock3&#039;&#039; executable:&lt;br /&gt;
&lt;br /&gt;
=== Wynton ===&lt;br /&gt;
&lt;br /&gt;
  source /wynton/group/bks/soft/python_envs/python3.8.5.sh&lt;br /&gt;
&lt;br /&gt;
=== Gimel ===&lt;br /&gt;
&lt;br /&gt;
Only nodes other than &#039;&#039;gimel&#039;&#039; itself are supported, e.g., &#039;&#039;gimel5&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
 ssh gimel5&lt;br /&gt;
 source /nfs/soft/ian/python3.8.5.sh&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;init&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
First you need to create the file structure for your dockopt job. To do so, simply type&lt;br /&gt;
&lt;br /&gt;
 pydock3 dockopt - init&lt;br /&gt;
&lt;br /&gt;
By default, the job directory is named &#039;&#039;dockopt_job&#039;&#039;. To specify a different name, use the &amp;quot;--job_dir_name&amp;quot; flag:&lt;br /&gt;
&lt;br /&gt;
 pydock3 dockopt - init [--job_dir_name=&amp;quot;dockopt_job&amp;quot;]&lt;br /&gt;
&lt;br /&gt;
The job directory contains two sub-directories: &lt;br /&gt;
# &#039;&#039;working&#039;&#039;: input files, intermediate blaster files, sub-directories for individual blastermaster subroutines&lt;br /&gt;
# &#039;&#039;retrodock_jobs&#039;&#039;: individual retrodock jobs for each docking configuration&lt;br /&gt;
&lt;br /&gt;
The key difference between the working directories of &#039;&#039;blastermaster&#039;&#039; and &#039;&#039;dockopt&#039;&#039; is that the working directory of &#039;&#039;dockopt&#039;&#039; may contain multiple variants of the blaster files (prefixed by a number, e.g. &amp;quot;1_box&amp;quot;). These variant files are used to create the different docking configurations specified by the multi-valued entries of &#039;&#039;dockopt_config.yaml&#039;&#039;. They are created efficiently, such that the same variant used in multiple docking configurations is not created more than once. &lt;br /&gt;
&lt;br /&gt;
If your current working directory contains any of the following files, then they will be automatically copied into the working directory within the created job directory. This feature is intended to simplify the process of configuring the dockopt job.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;rec.pdb&#039;&#039;&lt;br /&gt;
* &#039;&#039;xtal-lig.pdb&#039;&#039;&lt;br /&gt;
* &#039;&#039;rec.crg.pdb&#039;&#039;&lt;br /&gt;
* &#039;&#039;reduce_wwPDB_het_dict.txt&#039;&#039;&lt;br /&gt;
* &#039;&#039;filt.params&#039;&#039;&lt;br /&gt;
* &#039;&#039;radii&#039;&#039;&lt;br /&gt;
* &#039;&#039;amb.crg.oxt&#039;&#039;&lt;br /&gt;
* &#039;&#039;vdw.siz&#039;&#039;&lt;br /&gt;
* &#039;&#039;delphi.def&#039;&#039;&lt;br /&gt;
* &#039;&#039;vdw.parms.amb.mindock&#039;&#039;&lt;br /&gt;
* &#039;&#039;prot.table.ambcrg.ambH&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Only the following are required; default or generated versions of the others will be used if they are not detected.&lt;br /&gt;
* &#039;&#039;rec.pdb&#039;&#039;&lt;br /&gt;
* &#039;&#039;xtal-lig.pdb&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
If you would like to use files not present in your current working directory, copy them into your job&#039;s working directory, e.g.:&lt;br /&gt;
 cp &amp;lt;FILE_PATH&amp;gt; &amp;lt;JOB_DIR_NAME&amp;gt;/working/&lt;br /&gt;
&lt;br /&gt;
Finally, configure the &#039;&#039;dockopt_config.yaml&#039;&#039; file in the job directory to your specifications. The parameters in this file govern the behavior of dockopt.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039; The &#039;&#039;dockopt_config.yaml&#039;&#039; file differs from the &#039;&#039;blastermaster_config.yaml&#039;&#039; file in that every parameter of the former may accept either a single value or a &#039;&#039;list of comma-separated values&#039;&#039;, which indicates a pool of values to attempt for that parameter. Multiple such multi-valued parameters may be provided, and all unique resultant docking configurations will be attempted. &lt;br /&gt;
&lt;br /&gt;
Single-valued YAML line format:&lt;br /&gt;
&lt;br /&gt;
 distance_to_surface: 1.0&lt;br /&gt;
&lt;br /&gt;
Multi-valued YAML line format:&lt;br /&gt;
&lt;br /&gt;
 distance_to_surface: [1.0, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9]&lt;br /&gt;
&lt;br /&gt;
== Environmental variables ==&lt;br /&gt;
&lt;br /&gt;
=== TMPDIR ===&lt;br /&gt;
&lt;br /&gt;
Designate where temporary job files should be placed. E.g.:&lt;br /&gt;
&lt;br /&gt;
 export TMPDIR=/scratch&lt;br /&gt;
&lt;br /&gt;
==== Note for UCSF researchers ====&lt;br /&gt;
&lt;br /&gt;
On the Wynton cluster, &#039;&#039;/scratch&#039;&#039; only exists on dev nodes (not log nodes). However, &#039;&#039;/wynton/scratch&#039;&#039; exists on both log nodes and dev nodes. Therefore, we recommend:&lt;br /&gt;
 export TMPDIR=/wynton/scratch&lt;br /&gt;
&lt;br /&gt;
=== job scheduler environmental variables ===&lt;br /&gt;
&lt;br /&gt;
In order for &#039;&#039;dockopt&#039;&#039; to know which scheduler it should use, configure the following environmental variables according to the job scheduler available to you.&lt;br /&gt;
&lt;br /&gt;
==== Slurm ====&lt;br /&gt;
&lt;br /&gt;
E.g., on the UCSF Shoichet Lab Gimel cluster (on any node other than &#039;gimel&#039; itself, such as &#039;gimel5&#039;):&lt;br /&gt;
&lt;br /&gt;
 export SBATCH_EXEC=/usr/bin/sbatch&lt;br /&gt;
 export SQUEUE_EXEC=/usr/bin/squeue&lt;br /&gt;
&lt;br /&gt;
==== SGE ====&lt;br /&gt;
&lt;br /&gt;
On most clusters using SGE the following should be correct:&lt;br /&gt;
&lt;br /&gt;
 export QSTAT_EXEC=/opt/sge/bin/lx-amd64/qstat&lt;br /&gt;
 export QSUB_EXEC=/opt/sge/bin/lx-amd64/qsub&lt;br /&gt;
 export SGE_SETTINGS=/opt/sge/default/common/settings.sh&lt;br /&gt;
&lt;br /&gt;
===== Note for UCSF researchers =====&lt;br /&gt;
&lt;br /&gt;
The following is necessary on the UCSF Wynton cluster:&lt;br /&gt;
&lt;br /&gt;
 export QSTAT_EXEC=/opt/sge/bin/lx-amd64/qstat&lt;br /&gt;
 export QSUB_EXEC=/opt/sge/bin/lx-amd64/qsub&lt;br /&gt;
 export SGE_SETTINGS=/opt/sge/wynton/common/settings.sh&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;run&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
Once your job has been configured to your liking, navigate to the job directory and run &#039;&#039;dockopt&#039;&#039;:&lt;br /&gt;
 cd &amp;lt;JOB_DIR_NAME&amp;gt;&lt;br /&gt;
 pydock3 dockopt - run &amp;lt;JOB_SCHEDULER_NAME&amp;gt; [--retrodock_job_timeout_minutes=None] [--retrodock_job_max_reattempts=0]&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;JOB_SCHEDULER_NAME&amp;gt; is one of:&lt;br /&gt;
* &#039;&#039;sge&#039;&#039;&lt;br /&gt;
* &#039;&#039;slurm&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This executes the dockopt subroutines in sequence, except for the retrodock jobs for each docking configuration, which run in parallel via the scheduler. The program&#039;s state is printed to standard output as it runs.&lt;br /&gt;
&lt;br /&gt;
Once the dockopt job is complete, the following files will be generated in the job directory:&lt;br /&gt;
* &#039;&#039;dockopt_job_report.pdf&#039;&#039;: contains (1) roc.png of best retrodock job, (2) box plots of enrichment for every multi-valued config parameter, and (3) heatmaps of enrichment for every pair of multi-valued config parameters&lt;br /&gt;
* &#039;&#039;dockopt_job_results.csv&#039;&#039;: enrichment metrics for each docking configuration&lt;br /&gt;
&lt;br /&gt;
In addition, the best retrodock job will be copied to its own sub-directory &#039;&#039;best_retrodock_job/&#039;&#039;. &lt;br /&gt;
&lt;br /&gt;
Within best_retrodock_job, there are the following files and sub-directories:&lt;br /&gt;
* &#039;&#039;dockfiles/&#039;&#039;: parameter files and INDOCK for the given docking configuration&lt;br /&gt;
* &#039;&#039;output/&#039;&#039;: contains: &lt;br /&gt;
** joblist&lt;br /&gt;
** sub-directories &#039;&#039;1/&#039;&#039; for actives and &#039;&#039;2/&#039;&#039; for decoys (the former containing OUTDOCK and test.mol2 files, the latter containing just OUTDOCK)&lt;br /&gt;
** log files for the retrodock jobs&lt;br /&gt;
* &#039;&#039;roc.png&#039;&#039;: the ROC enrichment curve (log-scaled x-axis) for the given docking configuration&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039; by default, a mol2 file is exported only for actives (&#039;&#039;output/1/&#039;&#039;), not for decoys (&#039;&#039;output/2/&#039;&#039;), in order to prevent disk space issues.&lt;/div&gt;</summary>
		<author><name>Ianscottknight</name></author>
	</entry>
	<entry>
		<id>http://wiki.docking.org/index.php?title=Dockopt_(pydock3_script)&amp;diff=14992</id>
		<title>Dockopt (pydock3 script)</title>
		<link rel="alternate" type="text/html" href="http://wiki.docking.org/index.php?title=Dockopt_(pydock3_script)&amp;diff=14992"/>
		<updated>2022-11-22T03:01:45Z</updated>

		<summary type="html">&lt;p&gt;Ianscottknight: /* Environmental variables */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;dockopt&#039;&#039; allows the generation of many different docking configurations, which are then evaluated and analyzed in parallel using a specified job scheduler (e.g., Slurm).&lt;br /&gt;
&lt;br /&gt;
== Note for UCSF Shoichet Lab members ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;pydock3&#039;&#039; is already installed on the following clusters. You can source the provided Python environment scripts to expose the &#039;&#039;pydock3&#039;&#039; executable:&lt;br /&gt;
&lt;br /&gt;
=== Wynton ===&lt;br /&gt;
&lt;br /&gt;
  source /wynton/group/bks/soft/python_envs/python3.8.5.sh&lt;br /&gt;
&lt;br /&gt;
=== Gimel ===&lt;br /&gt;
&lt;br /&gt;
Only nodes other than &#039;&#039;gimel&#039;&#039; itself are supported, e.g., &#039;&#039;gimel5&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
 ssh gimel5&lt;br /&gt;
 source /nfs/soft/ian/python3.8.5.sh&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;init&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
First you need to create the file structure for your dockopt job. To do so, simply type&lt;br /&gt;
&lt;br /&gt;
 pydock3 dockopt - init&lt;br /&gt;
&lt;br /&gt;
By default, the job directory is named &#039;&#039;dockopt_job&#039;&#039;. To specify a different name, type&lt;br /&gt;
&lt;br /&gt;
 pydock3 dockopt - init [--job_dir_name=&amp;quot;dockopt_job&amp;quot;]&lt;br /&gt;
&lt;br /&gt;
The job directory contains two sub-directories: &lt;br /&gt;
# &#039;&#039;working&#039;&#039;: input files, intermediate blaster files, sub-directories for individual blastermaster subroutines&lt;br /&gt;
# &#039;&#039;retrodock_jobs&#039;&#039;: individual retrodock jobs for each docking configuration&lt;br /&gt;
&lt;br /&gt;
The key difference between the working directories of &#039;&#039;blastermaster&#039;&#039; and &#039;&#039;dockopt&#039;&#039; is that the working directory of &#039;&#039;dockopt&#039;&#039; may contain multiple variants of the blaster files (prefixed by a number, e.g. &amp;quot;1_box&amp;quot;). These variant files are used to create the different docking configurations specified by the multi-valued entries of &#039;&#039;dockopt_config.yaml&#039;&#039;. They are created efficiently, such that the same variant used in multiple docking configurations is not created more than once. &lt;br /&gt;
&lt;br /&gt;
If your current working directory contains any of the following files, then they will be automatically copied into the working directory within the created job directory. This feature is intended to simplify the process of configuring the dockopt job.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;rec.pdb&#039;&#039;&lt;br /&gt;
* &#039;&#039;xtal-lig.pdb&#039;&#039;&lt;br /&gt;
* &#039;&#039;rec.crg.pdb&#039;&#039;&lt;br /&gt;
* &#039;&#039;reduce_wwPDB_het_dict.txt&#039;&#039;&lt;br /&gt;
* &#039;&#039;filt.params&#039;&#039;&lt;br /&gt;
* &#039;&#039;radii&#039;&#039;&lt;br /&gt;
* &#039;&#039;amb.crg.oxt&#039;&#039;&lt;br /&gt;
* &#039;&#039;vdw.siz&#039;&#039;&lt;br /&gt;
* &#039;&#039;delphi.def&#039;&#039;&lt;br /&gt;
* &#039;&#039;vdw.parms.amb.mindock&#039;&#039;&lt;br /&gt;
* &#039;&#039;prot.table.ambcrg.ambH&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Only the following files are required; default or generated versions of the others will be used if they are not detected.&lt;br /&gt;
* &#039;&#039;rec.pdb&#039;&#039;&lt;br /&gt;
* &#039;&#039;xtal-lig.pdb&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
If you would like to use files not present in your current working directory, copy them into your job&#039;s working directory, e.g.:&lt;br /&gt;
 cp &amp;lt;FILE_PATH&amp;gt; &amp;lt;JOB_DIR_NAME&amp;gt;/working/&lt;br /&gt;
&lt;br /&gt;
Finally, configure the &#039;&#039;dockopt_config.yaml&#039;&#039; file in the job directory to your specifications. The parameters in this file govern the behavior of dockopt.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039; The &#039;&#039;dockopt_config.yaml&#039;&#039; file differs from the &#039;&#039;blastermaster_config.yaml&#039;&#039; file in that every parameter of the former may accept either a single value or a &#039;&#039;list of comma-separated values&#039;&#039;, which indicates a pool of values to attempt for that parameter. Multiple such multi-valued parameters may be provided, and all unique resultant docking configurations will be attempted. &lt;br /&gt;
&lt;br /&gt;
Single-valued YAML line format:&lt;br /&gt;
&lt;br /&gt;
 distance_to_surface: 1.0&lt;br /&gt;
&lt;br /&gt;
Multi-valued YAML line format:&lt;br /&gt;
&lt;br /&gt;
 distance_to_surface: [1.0, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9]&lt;br /&gt;
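The expansion of single- and multi-valued config entries into docking configurations can be sketched as follows. This is a minimal illustration of the Cartesian-product idea, not pydock3&#039;s actual implementation, and the parameter name other than distance_to_surface is hypothetical:

```python
# Sketch: how multi-valued dockopt_config.yaml entries expand into unique
# docking configurations (illustrative only; pydock3 internals may differ).
from itertools import product

config = {
    "distance_to_surface": [1.0, 1.1, 1.2],  # multi-valued: a pool of values to try
    "thin_sphere_radius": 1.5,               # single-valued (hypothetical parameter name)
}

# Normalize single values to one-element lists, then take the Cartesian product
# over all parameter pools to enumerate every unique docking configuration.
pools = {k: v if isinstance(v, list) else [v] for k, v in config.items()}
keys = list(pools)
configurations = [dict(zip(keys, combo)) for combo in product(*(pools[k] for k in keys))]

print(len(configurations))  # 3 unique docking configurations
```

Each resulting configuration is built from its own variant blaster files, and a variant shared by several configurations is generated only once.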
&lt;br /&gt;
== Environmental variables ==&lt;br /&gt;
&lt;br /&gt;
Designate where temporary job files should be placed. E.g.:&lt;br /&gt;
&lt;br /&gt;
 export TMPDIR=/scratch&lt;br /&gt;
&lt;br /&gt;
=== Note for UCSF researchers ===&lt;br /&gt;
&lt;br /&gt;
On the Wynton cluster, &#039;&#039;/scratch&#039;&#039; only exists on development nodes (not login nodes). However, &#039;&#039;/wynton/scratch&#039;&#039; exists on both login and development nodes. Therefore, we recommend:&lt;br /&gt;
 export TMPDIR=/wynton/scratch&lt;br /&gt;
&lt;br /&gt;
In order for &#039;&#039;dockopt&#039;&#039; to know which scheduler to use, configure the following environmental variables according to the job scheduler available on your cluster.&lt;br /&gt;
&lt;br /&gt;
=== Slurm ===&lt;br /&gt;
&lt;br /&gt;
E.g., on the UCSF Shoichet Lab Gimel cluster (on any node other than &#039;gimel&#039; itself, such as &#039;gimel5&#039;):&lt;br /&gt;
&lt;br /&gt;
 export SBATCH_EXEC=/usr/bin/sbatch&lt;br /&gt;
 export SQUEUE_EXEC=/usr/bin/squeue&lt;br /&gt;
&lt;br /&gt;
=== SGE ===&lt;br /&gt;
&lt;br /&gt;
E.g., on the UCSF Wynton cluster:&lt;br /&gt;
&lt;br /&gt;
 export QSTAT_EXEC=/opt/sge/bin/lx-amd64/qstat&lt;br /&gt;
 export QSUB_EXEC=/opt/sge/bin/lx-amd64/qsub&lt;br /&gt;
&lt;br /&gt;
The following is necessary on the UCSF Wynton cluster:&lt;br /&gt;
&lt;br /&gt;
 export SGE_SETTINGS=/opt/sge/wynton/common/settings.sh&lt;br /&gt;
&lt;br /&gt;
On most clusters using SGE, this will probably be:&lt;br /&gt;
 &lt;br /&gt;
 export SGE_SETTINGS=/opt/sge/default/common/settings.sh&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;run&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
Once your job has been configured to your liking, navigate to the job directory and run &#039;&#039;dockopt&#039;&#039;:&lt;br /&gt;
 cd &amp;lt;JOB_DIR_NAME&amp;gt;&lt;br /&gt;
 pydock3 dockopt - run &amp;lt;JOB_SCHEDULER_NAME&amp;gt; [--retrodock_job_timeout_minutes=None] [--retrodock_job_max_reattempts=0]&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;JOB_SCHEDULER_NAME&amp;gt; is one of:&lt;br /&gt;
* &#039;&#039;sge&#039;&#039;&lt;br /&gt;
* &#039;&#039;slurm&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This will execute the dockopt subroutines in sequence, except for the retrodock jobs for each docking configuration, which run in parallel via the scheduler. The state of the program will be printed to standard output as it runs.&lt;br /&gt;
&lt;br /&gt;
Once the dockopt job is complete, the following files will be generated in the job directory:&lt;br /&gt;
* &#039;&#039;dockopt_job_report.pdf&#039;&#039;: contains (1) the roc.png of the best retrodock job, (2) box plots of enrichment for every multi-valued config parameter, and (3) heatmaps of enrichment for every pair of multi-valued config parameters&lt;br /&gt;
* &#039;&#039;dockopt_job_results.csv&#039;&#039;: enrichment metrics for each docking configuration&lt;br /&gt;
&lt;br /&gt;
In addition, the best retrodock job will be copied to its own sub-directory &#039;&#039;best_retrodock_job/&#039;&#039;. &lt;br /&gt;
&lt;br /&gt;
Within best_retrodock_job, there are the following files and sub-directories:&lt;br /&gt;
* &#039;&#039;dockfiles/&#039;&#039;: parameter files and INDOCK for the given docking configuration&lt;br /&gt;
* &#039;&#039;output/&#039;&#039;: contains: &lt;br /&gt;
** joblist&lt;br /&gt;
** sub-directories &#039;&#039;1/&#039;&#039; for actives and &#039;&#039;2/&#039;&#039; for decoys (the former containing OUTDOCK and test.mol2 files, the latter containing just OUTDOCK)&lt;br /&gt;
** log files for the retrodock jobs&lt;br /&gt;
* &#039;&#039;roc.png&#039;&#039;: the ROC enrichment curve (log-scaled x-axis) for the given docking configuration&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039; by default, a mol2 file is exported only for actives (&#039;&#039;output/1/&#039;&#039;), not for decoys (&#039;&#039;output/2/&#039;&#039;), in order to prevent disk space issues.&lt;/div&gt;</summary>
		<author><name>Ianscottknight</name></author>
	</entry>
	<entry>
		<id>http://wiki.docking.org/index.php?title=DOCK3.8:Pydock3&amp;diff=14985</id>
		<title>DOCK3.8:Pydock3</title>
		<link rel="alternate" type="text/html" href="http://wiki.docking.org/index.php?title=DOCK3.8:Pydock3&amp;diff=14985"/>
		<updated>2022-11-09T06:54:19Z</updated>

		<summary type="html">&lt;p&gt;Ianscottknight: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;pydock3&#039;&#039; is a Python package wrapping the [[DOCK|DOCK Fortran program]] that provides tools to help standardize and automate the computational methods employed in molecular docking. It is a natural successor to DOCK Blaster, originally published in 2009, and blastermaster.py, part of the [[DOCK 3.7]] release in 2012. &lt;br /&gt;
&lt;br /&gt;
Scripts included in &#039;&#039;pydock3&#039;&#039;:&lt;br /&gt;
* &#039;&#039;dockopt&#039;&#039;: generate many different docking configurations, perform retrospective docking on them in parallel using a specified job scheduler (e.g. Slurm), and analyze the results. &lt;br /&gt;
* &#039;&#039;blastermaster&#039;&#039;: generate a specific docking configuration for a given receptor and ligand, intended for use by experts who wish to tune the docking model themselves. This is a direct successor to blastermaster.py from DOCK 3.7.&lt;br /&gt;
&lt;br /&gt;
A [[docking configuration|&#039;&#039;&#039;docking configuration&#039;&#039;&#039;]] is a unique set of (1) DOCK parameter files (e.g., &#039;&#039;matching_spheres.sph&#039;&#039;), (2) an INDOCK file, and (3) a DOCK executable.&lt;br /&gt;
&lt;br /&gt;
= Installation =&lt;br /&gt;
&lt;br /&gt;
See: [[DOCK 3.8:How to install pydock3]].&lt;br /&gt;
&lt;br /&gt;
= Instructions =&lt;br /&gt;
&lt;br /&gt;
== Note for UCSF Shoichet Lab members ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;pydock3&#039;&#039; is already installed on the following clusters. You can source the provided Python environment scripts to expose the &#039;&#039;pydock3&#039;&#039; executable:&lt;br /&gt;
&lt;br /&gt;
=== Wynton ===&lt;br /&gt;
&lt;br /&gt;
  source /wynton/group/bks/soft/python_envs/python3.8.5.sh&lt;br /&gt;
&lt;br /&gt;
=== Gimel ===&lt;br /&gt;
&lt;br /&gt;
Only nodes other than &#039;&#039;gimel&#039;&#039; itself are supported, e.g., &#039;&#039;gimel5&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
 ssh gimel5&lt;br /&gt;
 source /nfs/soft/ian/python3.8.5.sh&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;blastermaster&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
See: [[blastermaster (pydock3 script)]].&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;dockopt&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
See: [[dockopt (pydock3 script)]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:DOCK 3.8]]&lt;br /&gt;
[[Category:DOCK Blaster]]&lt;/div&gt;</summary>
		<author><name>Ianscottknight</name></author>
	</entry>
	<entry>
		<id>http://wiki.docking.org/index.php?title=Dockopt_(pydock3_script)&amp;diff=14948</id>
		<title>Dockopt (pydock3 script)</title>
		<link rel="alternate" type="text/html" href="http://wiki.docking.org/index.php?title=Dockopt_(pydock3_script)&amp;diff=14948"/>
		<updated>2022-10-28T07:17:17Z</updated>

		<summary type="html">&lt;p&gt;Ianscottknight: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;dockopt&#039;&#039; allows the generation of many different docking configurations, which are then evaluated and analyzed in parallel using a specified job scheduler (e.g., Slurm).&lt;br /&gt;
&lt;br /&gt;
== Note for UCSF Shoichet Lab members ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;pydock3&#039;&#039; is already installed on the following clusters. You can source the provided Python environment scripts to expose the &#039;&#039;pydock3&#039;&#039; executable:&lt;br /&gt;
&lt;br /&gt;
=== Wynton ===&lt;br /&gt;
&lt;br /&gt;
  source /wynton/group/bks/soft/python_envs/python3.8.5.sh&lt;br /&gt;
&lt;br /&gt;
=== Gimel ===&lt;br /&gt;
&lt;br /&gt;
Only nodes other than &#039;&#039;gimel&#039;&#039; itself are supported, e.g., &#039;&#039;gimel5&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
 ssh gimel5&lt;br /&gt;
 source /nfs/soft/ian/python3.8.5.sh&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;init&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
First you need to create the file structure for your dockopt job. To do so, simply type&lt;br /&gt;
&lt;br /&gt;
 pydock3 dockopt - init&lt;br /&gt;
&lt;br /&gt;
By default, the job directory is named &#039;&#039;dockopt_job&#039;&#039;. To specify a different name, type&lt;br /&gt;
&lt;br /&gt;
 pydock3 dockopt - init [--job_dir_name=&amp;quot;dockopt_job&amp;quot;]&lt;br /&gt;
&lt;br /&gt;
The job directory contains two sub-directories: &lt;br /&gt;
# &#039;&#039;working&#039;&#039;: input files, intermediate blaster files, sub-directories for individual blastermaster subroutines&lt;br /&gt;
# &#039;&#039;retrodock_jobs&#039;&#039;: individual retrodock jobs for each docking configuration&lt;br /&gt;
&lt;br /&gt;
The key difference between the working directories of &#039;&#039;blastermaster&#039;&#039; and &#039;&#039;dockopt&#039;&#039; is that the working directory of &#039;&#039;dockopt&#039;&#039; may contain multiple variants of the blaster files (prefixed by a number, e.g. &amp;quot;1_box&amp;quot;). These variant files are used to create the different docking configurations specified by the multi-valued entries of &#039;&#039;dockopt_config.yaml&#039;&#039;. They are created efficiently, such that the same variant used in multiple docking configurations is not created more than once. &lt;br /&gt;
&lt;br /&gt;
If your current working directory contains any of the following files, then they will be automatically copied into the working directory within the created job directory. This feature is intended to simplify the process of configuring the dockopt job.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;rec.pdb&#039;&#039;&lt;br /&gt;
* &#039;&#039;xtal-lig.pdb&#039;&#039;&lt;br /&gt;
* &#039;&#039;rec.crg.pdb&#039;&#039;&lt;br /&gt;
* &#039;&#039;reduce_wwPDB_het_dict.txt&#039;&#039;&lt;br /&gt;
* &#039;&#039;filt.params&#039;&#039;&lt;br /&gt;
* &#039;&#039;radii&#039;&#039;&lt;br /&gt;
* &#039;&#039;amb.crg.oxt&#039;&#039;&lt;br /&gt;
* &#039;&#039;vdw.siz&#039;&#039;&lt;br /&gt;
* &#039;&#039;delphi.def&#039;&#039;&lt;br /&gt;
* &#039;&#039;vdw.parms.amb.mindock&#039;&#039;&lt;br /&gt;
* &#039;&#039;prot.table.ambcrg.ambH&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Only the following files are required; default or generated versions of the others will be used if they are not detected.&lt;br /&gt;
* &#039;&#039;rec.pdb&#039;&#039;&lt;br /&gt;
* &#039;&#039;xtal-lig.pdb&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
If you would like to use files not present in your current working directory, copy them into your job&#039;s working directory, e.g.:&lt;br /&gt;
 cp &amp;lt;FILE_PATH&amp;gt; &amp;lt;JOB_DIR_NAME&amp;gt;/working/&lt;br /&gt;
&lt;br /&gt;
Finally, configure the &#039;&#039;dockopt_config.yaml&#039;&#039; file in the job directory to your specifications. The parameters in this file govern the behavior of dockopt.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039; The &#039;&#039;dockopt_config.yaml&#039;&#039; file differs from the &#039;&#039;blastermaster_config.yaml&#039;&#039; file in that every parameter of the former may accept either a single value or a &#039;&#039;list of comma-separated values&#039;&#039;, which indicates a pool of values to attempt for that parameter. Multiple such multi-valued parameters may be provided, and all unique resultant docking configurations will be attempted. &lt;br /&gt;
&lt;br /&gt;
Single-valued YAML line format:&lt;br /&gt;
&lt;br /&gt;
 distance_to_surface: 1.0&lt;br /&gt;
&lt;br /&gt;
Multi-valued YAML line format:&lt;br /&gt;
&lt;br /&gt;
 distance_to_surface: [1.0, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9]&lt;br /&gt;
&lt;br /&gt;
== Environmental variables ==&lt;br /&gt;
&lt;br /&gt;
Designate where temporary job files should be placed. E.g.:&lt;br /&gt;
&lt;br /&gt;
 export TMPDIR=/scratch&lt;br /&gt;
&lt;br /&gt;
In order for &#039;&#039;dockopt&#039;&#039; to know which scheduler to use, configure the following environmental variables according to the job scheduler available on your cluster.&lt;br /&gt;
&lt;br /&gt;
=== Slurm ===&lt;br /&gt;
&lt;br /&gt;
E.g., on the UCSF Shoichet Lab Gimel cluster (on any node other than &#039;gimel&#039; itself, such as &#039;gimel5&#039;):&lt;br /&gt;
&lt;br /&gt;
 export SBATCH_EXEC=/usr/bin/sbatch&lt;br /&gt;
 export SQUEUE_EXEC=/usr/bin/squeue&lt;br /&gt;
&lt;br /&gt;
=== SGE ===&lt;br /&gt;
&lt;br /&gt;
E.g., on the UCSF Wynton cluster:&lt;br /&gt;
&lt;br /&gt;
 export QSTAT_EXEC=/opt/sge/bin/lx-amd64/qstat&lt;br /&gt;
 export QSUB_EXEC=/opt/sge/bin/lx-amd64/qsub&lt;br /&gt;
&lt;br /&gt;
The following is necessary on the UCSF Wynton cluster:&lt;br /&gt;
&lt;br /&gt;
 export SGE_SETTINGS=/opt/sge/wynton/common/settings.sh&lt;br /&gt;
&lt;br /&gt;
On most clusters using SGE, this will probably be:&lt;br /&gt;
 &lt;br /&gt;
 export SGE_SETTINGS=/opt/sge/default/common/settings.sh&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;run&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
Once your job has been configured to your liking, navigate to the job directory and run &#039;&#039;dockopt&#039;&#039;:&lt;br /&gt;
 cd &amp;lt;JOB_DIR_NAME&amp;gt;&lt;br /&gt;
 pydock3 dockopt - run &amp;lt;JOB_SCHEDULER_NAME&amp;gt; [--retrodock_job_timeout_minutes=None] [--retrodock_job_max_reattempts=0]&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;JOB_SCHEDULER_NAME&amp;gt; is one of:&lt;br /&gt;
* &#039;&#039;sge&#039;&#039;&lt;br /&gt;
* &#039;&#039;slurm&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This will execute the dockopt subroutines in sequence, except for the retrodock jobs for each docking configuration, which run in parallel via the scheduler. The state of the program will be printed to standard output as it runs.&lt;br /&gt;
&lt;br /&gt;
Once the dockopt job is complete, the following files will be generated in the job directory:&lt;br /&gt;
* &#039;&#039;dockopt_job_report.pdf&#039;&#039;: contains (1) the roc.png of the best retrodock job, (2) box plots of enrichment for every multi-valued config parameter, and (3) heatmaps of enrichment for every pair of multi-valued config parameters&lt;br /&gt;
* &#039;&#039;dockopt_job_results.csv&#039;&#039;: enrichment metrics for each docking configuration&lt;br /&gt;
&lt;br /&gt;
In addition, the best retrodock job will be copied to its own sub-directory &#039;&#039;best_retrodock_job/&#039;&#039;. &lt;br /&gt;
&lt;br /&gt;
Within best_retrodock_job, there are the following files and sub-directories:&lt;br /&gt;
* &#039;&#039;dockfiles/&#039;&#039;: parameter files and INDOCK for the given docking configuration&lt;br /&gt;
* &#039;&#039;output/&#039;&#039;: contains: &lt;br /&gt;
** joblist&lt;br /&gt;
** sub-directories &#039;&#039;1/&#039;&#039; for actives and &#039;&#039;2/&#039;&#039; for decoys (the former containing OUTDOCK and test.mol2 files, the latter containing just OUTDOCK)&lt;br /&gt;
** log files for the retrodock jobs&lt;br /&gt;
* &#039;&#039;roc.png&#039;&#039;: the ROC enrichment curve (log-scaled x-axis) for the given docking configuration&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039; by default, a mol2 file is exported only for actives (&#039;&#039;output/1/&#039;&#039;), not for decoys (&#039;&#039;output/2/&#039;&#039;), in order to prevent disk space issues.&lt;/div&gt;</summary>
		<author><name>Ianscottknight</name></author>
	</entry>
	<entry>
		<id>http://wiki.docking.org/index.php?title=Dockopt_(pydock3_script)&amp;diff=14935</id>
		<title>Dockopt (pydock3 script)</title>
		<link rel="alternate" type="text/html" href="http://wiki.docking.org/index.php?title=Dockopt_(pydock3_script)&amp;diff=14935"/>
		<updated>2022-10-19T23:45:49Z</updated>

		<summary type="html">&lt;p&gt;Ianscottknight: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;dockopt&#039;&#039; allows the generation of many different docking configurations, which are then evaluated and analyzed in parallel using a specified job scheduler (e.g., Slurm).&lt;br /&gt;
&lt;br /&gt;
== Note for UCSF Shoichet Lab members ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;pydock3&#039;&#039; is already installed on the following clusters. You can source the provided Python environment scripts to expose the &#039;&#039;pydock3&#039;&#039; executable:&lt;br /&gt;
&lt;br /&gt;
=== Wynton ===&lt;br /&gt;
&lt;br /&gt;
  source /wynton/group/bks/soft/python_envs/python3.8.5.sh&lt;br /&gt;
&lt;br /&gt;
=== Gimel ===&lt;br /&gt;
&lt;br /&gt;
Only nodes other than &#039;&#039;gimel&#039;&#039; itself are supported, e.g., &#039;&#039;gimel5&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
 ssh gimel5&lt;br /&gt;
 source /nfs/soft/ian/python3.8.5.sh&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;init&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
First you need to create the file structure for your dockopt job. To do so, simply type&lt;br /&gt;
&lt;br /&gt;
 pydock3 dockopt - init&lt;br /&gt;
&lt;br /&gt;
By default, the job directory is named &#039;&#039;dockopt_job&#039;&#039;. To specify a different name, type&lt;br /&gt;
&lt;br /&gt;
 pydock3 dockopt - init &amp;lt;JOB_DIR_NAME&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The job directory contains two sub-directories: &lt;br /&gt;
# &#039;&#039;working&#039;&#039;: input files, intermediate blaster files, sub-directories for individual blastermaster subroutines&lt;br /&gt;
# &#039;&#039;retrodock_jobs&#039;&#039;: individual retrodock jobs for each docking configuration&lt;br /&gt;
&lt;br /&gt;
The key difference between the working directories of &#039;&#039;blastermaster&#039;&#039; and &#039;&#039;dockopt&#039;&#039; is that the working directory of &#039;&#039;dockopt&#039;&#039; may contain multiple variants of the blaster files (prefixed by a number, e.g. &amp;quot;1_box&amp;quot;). These variant files are used to create the different docking configurations specified by the multi-valued entries of &#039;&#039;dockopt_config.yaml&#039;&#039;. They are created efficiently, such that the same variant used in multiple docking configurations is not created more than once. &lt;br /&gt;
&lt;br /&gt;
If your current working directory contains any of the following files, then they will be automatically copied into the working directory within the created job directory. This feature is intended to simplify the process of configuring the dockopt job.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;rec.pdb&#039;&#039;&lt;br /&gt;
* &#039;&#039;xtal-lig.pdb&#039;&#039;&lt;br /&gt;
* &#039;&#039;rec.crg.pdb&#039;&#039;&lt;br /&gt;
* &#039;&#039;reduce_wwPDB_het_dict.txt&#039;&#039;&lt;br /&gt;
* &#039;&#039;filt.params&#039;&#039;&lt;br /&gt;
* &#039;&#039;radii&#039;&#039;&lt;br /&gt;
* &#039;&#039;amb.crg.oxt&#039;&#039;&lt;br /&gt;
* &#039;&#039;vdw.siz&#039;&#039;&lt;br /&gt;
* &#039;&#039;delphi.def&#039;&#039;&lt;br /&gt;
* &#039;&#039;vdw.parms.amb.mindock&#039;&#039;&lt;br /&gt;
* &#039;&#039;prot.table.ambcrg.ambH&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Only the following files are required; default or generated versions of the others will be used if they are not detected.&lt;br /&gt;
* &#039;&#039;rec.pdb&#039;&#039;&lt;br /&gt;
* &#039;&#039;xtal-lig.pdb&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
If you would like to use files not present in your current working directory, copy them into your job&#039;s working directory, e.g.:&lt;br /&gt;
 cp &amp;lt;FILE_PATH&amp;gt; &amp;lt;JOB_DIR_NAME&amp;gt;/working/&lt;br /&gt;
&lt;br /&gt;
Finally, configure the &#039;&#039;dockopt_config.yaml&#039;&#039; file in the job directory to your specifications. The parameters in this file govern the behavior of dockopt.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039; The &#039;&#039;dockopt_config.yaml&#039;&#039; file differs from the &#039;&#039;blastermaster_config.yaml&#039;&#039; file in that every parameter of the former may accept either a single value or a &#039;&#039;list of comma-separated values&#039;&#039;, which indicates a pool of values to attempt for that parameter. Multiple such multi-valued parameters may be provided, and all unique resultant docking configurations will be attempted. &lt;br /&gt;
&lt;br /&gt;
Single-valued YAML line format:&lt;br /&gt;
&lt;br /&gt;
 distance_to_surface: 1.0&lt;br /&gt;
&lt;br /&gt;
Multi-valued YAML line format:&lt;br /&gt;
&lt;br /&gt;
 distance_to_surface: [1.0, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9]&lt;br /&gt;
&lt;br /&gt;
== Environmental variables ==&lt;br /&gt;
&lt;br /&gt;
Designate where temporary job files should be placed. E.g.:&lt;br /&gt;
&lt;br /&gt;
 export TMPDIR=/scratch&lt;br /&gt;
&lt;br /&gt;
In order for &#039;&#039;dockopt&#039;&#039; to know which scheduler to use, configure the following environmental variables according to the job scheduler available on your cluster.&lt;br /&gt;
&lt;br /&gt;
=== Slurm ===&lt;br /&gt;
&lt;br /&gt;
E.g., on the UCSF Shoichet Lab Gimel cluster (on any node other than &#039;gimel&#039; itself, such as &#039;gimel5&#039;):&lt;br /&gt;
&lt;br /&gt;
 export SBATCH_EXEC=/usr/bin/sbatch&lt;br /&gt;
 export SQUEUE_EXEC=/usr/bin/squeue&lt;br /&gt;
&lt;br /&gt;
=== SGE ===&lt;br /&gt;
&lt;br /&gt;
E.g., on the UCSF Wynton cluster:&lt;br /&gt;
&lt;br /&gt;
 export QSTAT_EXEC=/opt/sge/bin/lx-amd64/qstat&lt;br /&gt;
 export QSUB_EXEC=/opt/sge/bin/lx-amd64/qsub&lt;br /&gt;
&lt;br /&gt;
The following is necessary on the UCSF Wynton cluster:&lt;br /&gt;
&lt;br /&gt;
 export SGE_SETTINGS=/opt/sge/wynton/common/settings.sh&lt;br /&gt;
&lt;br /&gt;
On most clusters using SGE, this will probably be:&lt;br /&gt;
 &lt;br /&gt;
 export SGE_SETTINGS=/opt/sge/default/common/settings.sh&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;run&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
Once your job has been configured to your liking, navigate to the job directory and run &#039;&#039;dockopt&#039;&#039;:&lt;br /&gt;
 cd &amp;lt;JOB_DIR_NAME&amp;gt;&lt;br /&gt;
 pydock3 dockopt - run &amp;lt;JOB_SCHEDULER_NAME&amp;gt; [--retrodock_job_timeout_minutes=None] [--retrodock_job_max_reattempts=0]&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;JOB_SCHEDULER_NAME&amp;gt; is one of:&lt;br /&gt;
* &#039;&#039;sge&#039;&#039;&lt;br /&gt;
* &#039;&#039;slurm&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This will execute the dockopt subroutines in sequence, except for the retrodock jobs for each docking configuration, which run in parallel via the scheduler. The state of the program will be printed to standard output as it runs.&lt;br /&gt;
&lt;br /&gt;
Once the dockopt job is complete, the following files will be generated in the job directory:&lt;br /&gt;
* &#039;&#039;dockopt_job_report.pdf&#039;&#039;: contains (1) the roc.png of the best retrodock job, (2) box plots of enrichment for every multi-valued config parameter, and (3) heatmaps of enrichment for every pair of multi-valued config parameters&lt;br /&gt;
* &#039;&#039;dockopt_job_results.csv&#039;&#039;: enrichment metrics for each docking configuration&lt;br /&gt;
&lt;br /&gt;
In addition, the best retrodock job will be copied to its own sub-directory &#039;&#039;best_retrodock_job/&#039;&#039;. &lt;br /&gt;
&lt;br /&gt;
Within best_retrodock_job, there are the following files and sub-directories:&lt;br /&gt;
* &#039;&#039;dockfiles/&#039;&#039;: parameter files and INDOCK for the given docking configuration&lt;br /&gt;
* &#039;&#039;output/&#039;&#039;: contains: &lt;br /&gt;
** joblist&lt;br /&gt;
** sub-directories &#039;&#039;1/&#039;&#039; for actives and &#039;&#039;2/&#039;&#039; for decoys (the former containing OUTDOCK and test.mol2 files, the latter containing just OUTDOCK)&lt;br /&gt;
** log files for the retrodock jobs&lt;br /&gt;
* &#039;&#039;roc.png&#039;&#039;: the ROC enrichment curve (log-scaled x-axis) for the given docking configuration&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039; by default, a mol2 file is exported only for actives (&#039;&#039;output/1/&#039;&#039;), not for decoys (&#039;&#039;output/2/&#039;&#039;), in order to prevent disk space issues.&lt;/div&gt;</summary>
		<author><name>Ianscottknight</name></author>
	</entry>
	<entry>
		<id>http://wiki.docking.org/index.php?title=Blastermaster_(pydock3_script)&amp;diff=14910</id>
		<title>Blastermaster (pydock3 script)</title>
		<link rel="alternate" type="text/html" href="http://wiki.docking.org/index.php?title=Blastermaster_(pydock3_script)&amp;diff=14910"/>
		<updated>2022-10-11T23:40:43Z</updated>

		<summary type="html">&lt;p&gt;Ianscottknight: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;blastermaster&#039;&#039; allows the generation of a specific docking configuration for a given receptor and ligand.&lt;br /&gt;
&lt;br /&gt;
== Note for UCSF Shoichet Lab members ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;pydock3&#039;&#039; is already installed on the following clusters. You can source the provided Python environment scripts to expose the &#039;&#039;pydock3&#039;&#039; executable:&lt;br /&gt;
&lt;br /&gt;
=== Wynton ===&lt;br /&gt;
&lt;br /&gt;
  source /wynton/group/bks/soft/python_envs/python3.8.5.sh&lt;br /&gt;
&lt;br /&gt;
=== Gimel ===&lt;br /&gt;
&lt;br /&gt;
Only nodes other than &#039;&#039;gimel&#039;&#039; itself are supported, e.g., &#039;&#039;gimel5&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
 ssh gimel5&lt;br /&gt;
 source /nfs/soft/ian/python3.8.5.sh&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;init&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
First you need to create the file structure for your blastermaster job. To do so, simply type&lt;br /&gt;
&lt;br /&gt;
 pydock3 blastermaster - init&lt;br /&gt;
&lt;br /&gt;
By default, the job directory is named &#039;&#039;blastermaster_job&#039;&#039;. To specify a different name, type&lt;br /&gt;
&lt;br /&gt;
 pydock3 blastermaster - init &amp;lt;JOB_DIR_NAME&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The job directory contains two sub-directories: &lt;br /&gt;
# &#039;&#039;working&#039;&#039;: input files, intermediate blaster files, sub-directories for individual blastermaster subroutines&lt;br /&gt;
# &#039;&#039;dockfiles&#039;&#039;: output files (DOCK parameter files &amp;amp; INDOCK)&lt;br /&gt;
&lt;br /&gt;
If your current working directory contains any of the following files, then they will be automatically copied into the working directory within the created job directory. This feature is intended to simplify the process of configuring the blastermaster job.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;rec.pdb&#039;&#039;&lt;br /&gt;
* &#039;&#039;xtal-lig.pdb&#039;&#039;&lt;br /&gt;
* &#039;&#039;rec.crg.pdb&#039;&#039;&lt;br /&gt;
* &#039;&#039;reduce_wwPDB_het_dict.txt&#039;&#039;&lt;br /&gt;
* &#039;&#039;filt.params&#039;&#039;&lt;br /&gt;
* &#039;&#039;radii&#039;&#039;&lt;br /&gt;
* &#039;&#039;amb.crg.oxt&#039;&#039;&lt;br /&gt;
* &#039;&#039;vdw.siz&#039;&#039;&lt;br /&gt;
* &#039;&#039;delphi.def&#039;&#039;&lt;br /&gt;
* &#039;&#039;vdw.parms.amb.mindock&#039;&#039;&lt;br /&gt;
* &#039;&#039;prot.table.ambcrg.ambH&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Only the following files are required; if the others are not detected, default or generated versions will be used instead.&lt;br /&gt;
* &#039;&#039;rec.pdb&#039;&#039;&lt;br /&gt;
* &#039;&#039;xtal-lig.pdb&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
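As a pre-flight step, the presence of these two required inputs can be checked from the shell before running init. The loop below is an illustrative helper, not part of pydock3; only the file names follow the list above:

```shell
# Check that the two required blastermaster inputs are present
# in the current working directory (illustrative helper, not pydock3).
missing=0
for f in rec.pdb xtal-lig.pdb; do
  if [ -f "$f" ]; then
    echo "found: $f"
  else
    echo "missing required input: $f"
    missing=$((missing + 1))
  fi
done
echo "$missing required file(s) missing"
```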
If you would like to use files not present in your current working directory, then copy them into your job&#039;s working directory, e.g.:&lt;br /&gt;
 cp &amp;lt;FILE_PATH&amp;gt; &amp;lt;JOB_DIR_NAME&amp;gt;/working/&lt;br /&gt;
&lt;br /&gt;
Finally, configure the &#039;&#039;blastermaster_config.yaml&#039;&#039; file in the job directory to your specifications. The parameters in this file govern the behavior of blastermaster.&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;run&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
Once your job has been configured to your liking, navigate to the job directory and run blastermaster:&lt;br /&gt;
 cd &amp;lt;JOB_DIR_NAME&amp;gt;&lt;br /&gt;
 pydock3 blastermaster - run&lt;br /&gt;
&lt;br /&gt;
This will execute the many blastermaster subroutines in sequence. The state of the program will be printed to standard output as it runs.&lt;/div&gt;</summary>
		<author><name>Ianscottknight</name></author>
	</entry>
	<entry>
		<id>http://wiki.docking.org/index.php?title=Dockopt_(pydock3_script)&amp;diff=14909</id>
		<title>Dockopt (pydock3 script)</title>
		<link rel="alternate" type="text/html" href="http://wiki.docking.org/index.php?title=Dockopt_(pydock3_script)&amp;diff=14909"/>
		<updated>2022-10-11T23:40:04Z</updated>

		<summary type="html">&lt;p&gt;Ianscottknight: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;dockopt&#039;&#039; allows the generation of many different docking configurations which are then evaluated &amp;amp; analyzed in parallel using a specified job scheduler (e.g. Slurm).&lt;br /&gt;
&lt;br /&gt;
== Note for UCSF Shoichet Lab members ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;pydock3&#039;&#039; is already installed on the following clusters. You can source the provided Python environment scripts to expose the &#039;&#039;pydock3&#039;&#039; executable:&lt;br /&gt;
&lt;br /&gt;
=== Wynton ===&lt;br /&gt;
&lt;br /&gt;
  source /wynton/group/bks/soft/python_envs/python3.8.5.sh&lt;br /&gt;
&lt;br /&gt;
=== Gimel ===&lt;br /&gt;
&lt;br /&gt;
Only nodes other than &#039;&#039;gimel&#039;&#039; itself are supported, e.g., &#039;&#039;gimel5&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
 ssh gimel5&lt;br /&gt;
 source /nfs/soft/ian/python3.8.5.sh&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;init&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
First you need to create the file structure for your dockopt job. To do so, simply type&lt;br /&gt;
&lt;br /&gt;
 pydock3 dockopt - init&lt;br /&gt;
&lt;br /&gt;
By default, the job directory is named &#039;&#039;dockopt_job&#039;&#039;. To specify a different name, type&lt;br /&gt;
&lt;br /&gt;
 pydock3 dockopt - init &amp;lt;JOB_DIR_NAME&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The job directory contains two sub-directories: &lt;br /&gt;
# &#039;&#039;working&#039;&#039;: input files, intermediate blaster files, sub-directories for individual blastermaster subroutines&lt;br /&gt;
# &#039;&#039;retrodock_jobs&#039;&#039;: individual retrodock jobs for each docking configuration&lt;br /&gt;
&lt;br /&gt;
The key difference between the working directories of &#039;&#039;blastermaster&#039;&#039; and &#039;&#039;dockopt&#039;&#039; is that the working directory of &#039;&#039;dockopt&#039;&#039; may contain multiple variants of the blaster files (prefixed by a number, e.g. &amp;quot;1_box&amp;quot;). These variant files are used to create the different docking configurations specified by the multi-valued entries of &#039;&#039;dockopt_config.yaml&#039;&#039;. They are created efficiently, such that the same variant used in multiple docking configurations is not created more than once. &lt;br /&gt;
&lt;br /&gt;
If your current working directory contains any of the following files, then they will be automatically copied into the working directory within the created job directory. This feature is intended to simplify the process of configuring the dockopt job.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;rec.pdb&#039;&#039;&lt;br /&gt;
* &#039;&#039;xtal-lig.pdb&#039;&#039;&lt;br /&gt;
* &#039;&#039;rec.crg.pdb&#039;&#039;&lt;br /&gt;
* &#039;&#039;reduce_wwPDB_het_dict.txt&#039;&#039;&lt;br /&gt;
* &#039;&#039;filt.params&#039;&#039;&lt;br /&gt;
* &#039;&#039;radii&#039;&#039;&lt;br /&gt;
* &#039;&#039;amb.crg.oxt&#039;&#039;&lt;br /&gt;
* &#039;&#039;vdw.siz&#039;&#039;&lt;br /&gt;
* &#039;&#039;delphi.def&#039;&#039;&lt;br /&gt;
* &#039;&#039;vdw.parms.amb.mindock&#039;&#039;&lt;br /&gt;
* &#039;&#039;prot.table.ambcrg.ambH&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Only the following files are required; if the others are not detected, default or generated versions will be used instead.&lt;br /&gt;
* &#039;&#039;rec.pdb&#039;&#039;&lt;br /&gt;
* &#039;&#039;xtal-lig.pdb&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
If you would like to use files not present in your current working directory, copy them into your job&#039;s working directory, e.g.:&lt;br /&gt;
 cp &amp;lt;FILE_PATH&amp;gt; &amp;lt;JOB_DIR_NAME&amp;gt;/working/&lt;br /&gt;
&lt;br /&gt;
Finally, configure the &#039;&#039;dockopt_config.yaml&#039;&#039; file in the job directory to your specifications. The parameters in this file govern the behavior of dockopt.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039; The &#039;&#039;dockopt_config.yaml&#039;&#039; file differs from the &#039;&#039;blastermaster_config.yaml&#039;&#039; file in that every parameter of the former may accept either a single value or a &#039;&#039;list of comma-separated values&#039;&#039;, which indicates a pool of values to attempt for that parameter. Multiple such multi-valued parameters may be provided, and all unique resultant docking configurations will be attempted. &lt;br /&gt;
&lt;br /&gt;
Single-valued YAML line format:&lt;br /&gt;
&lt;br /&gt;
 distance_to_surface: 1.0&lt;br /&gt;
&lt;br /&gt;
Multi-valued YAML line format:&lt;br /&gt;
&lt;br /&gt;
 distance_to_surface: [1.0, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9]&lt;br /&gt;
&lt;br /&gt;
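The description above implies that dockopt attempts every unique combination of the multi-valued parameters, so the number of docking configurations grows as the product of the pool sizes. A small illustrative shell sketch of that counting (the second parameter name and all values here are hypothetical, not taken from a real dockopt_config.yaml):

```shell
# Count the configurations produced by two hypothetical multi-valued
# parameters: 3 values times 2 values gives 6 docking configurations.
count=0
for distance_to_surface in 1.0 1.1 1.2; do
  for other_param in 0.0 0.5; do
    count=$((count + 1))
  done
done
echo "$count docking configurations"
```

With the ten distance_to_surface values shown above and one other parameter taking two values, twenty configurations would result, so large value pools multiply quickly.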
== Environmental variables ==&lt;br /&gt;
&lt;br /&gt;
Designate where temporary job files should be placed. E.g.:&lt;br /&gt;
&lt;br /&gt;
 export TEMP_STORAGE_PATH=/scratch&lt;br /&gt;
&lt;br /&gt;
So that &#039;&#039;dockopt&#039;&#039; knows which scheduler to use, configure the environmental variables below for the job scheduler available on your cluster.&lt;br /&gt;
&lt;br /&gt;
=== Slurm ===&lt;br /&gt;
&lt;br /&gt;
E.g., on the UCSF Shoichet Lab Gimel cluster (on any node other than &#039;gimel&#039; itself, such as &#039;gimel5&#039;):&lt;br /&gt;
&lt;br /&gt;
 export SBATCH_EXEC=/usr/bin/sbatch&lt;br /&gt;
 export SQUEUE_EXEC=/usr/bin/squeue&lt;br /&gt;
&lt;br /&gt;
=== SGE ===&lt;br /&gt;
&lt;br /&gt;
E.g., on the UCSF Wynton cluster:&lt;br /&gt;
&lt;br /&gt;
 export QSTAT_EXEC=/opt/sge/bin/lx-amd64/qstat&lt;br /&gt;
 export QSUB_EXEC=/opt/sge/bin/lx-amd64/qsub&lt;br /&gt;
&lt;br /&gt;
The following is necessary on the UCSF Wynton cluster:&lt;br /&gt;
&lt;br /&gt;
 export SGE_SETTINGS=/opt/sge/wynton/common/settings.sh&lt;br /&gt;
&lt;br /&gt;
On most clusters using SGE, this will probably be:&lt;br /&gt;
 &lt;br /&gt;
 export SGE_SETTINGS=/opt/sge/default/common/settings.sh&lt;br /&gt;
&lt;br /&gt;
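Before running dockopt it can be worth confirming that the variables for your scheduler are actually set in the current shell. An illustrative check for the Slurm case, reusing the example paths above:

```shell
# Export the Slurm scheduler variables and verify they are non-empty.
export SBATCH_EXEC=/usr/bin/sbatch
export SQUEUE_EXEC=/usr/bin/squeue
if [ -n "$SBATCH_EXEC" ]; then
  if [ -n "$SQUEUE_EXEC" ]; then
    echo "slurm scheduler variables are set"
  fi
fi
```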
== &#039;&#039;run&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
Once your job has been configured to your liking, navigate to the job directory and run &#039;&#039;dockopt&#039;&#039;:&lt;br /&gt;
 cd &amp;lt;JOB_DIR_NAME&amp;gt;&lt;br /&gt;
 pydock3 dockopt - run &amp;lt;JOB_SCHEDULER_NAME&amp;gt; [--retrodock_job_timeout_minutes=None] [--retrodock_job_max_reattempts=0]&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;JOB_SCHEDULER_NAME&amp;gt; is one of:&lt;br /&gt;
* &#039;&#039;sge&#039;&#039;&lt;br /&gt;
* &#039;&#039;slurm&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This will execute the many dockopt subroutines in sequence, except for the retrodock jobs run on each docking configuration, which are run in parallel via the scheduler. The state of the program will be printed to standard output as it runs.&lt;br /&gt;
&lt;br /&gt;
Once the dockopt job is complete, the following files will be generated in the job directory:&lt;br /&gt;
* &#039;&#039;dockopt_job_report.pdf&#039;&#039;: contains (1) roc.png of best retrodock job, (2) box plots of enrichment for every multi-valued config parameter, and (3) heatmaps of enrichment for every pair of multi-valued config parameters&lt;br /&gt;
* &#039;&#039;dockopt_job_results.csv&#039;&#039;: enrichment metrics for each docking configuration&lt;br /&gt;
&lt;br /&gt;
In addition, the best retrodock job will be copied to its own sub-directory &#039;&#039;best_retrodock_job/&#039;&#039;. &lt;br /&gt;
&lt;br /&gt;
Within best_retrodock_job, there are the following files and sub-directories:&lt;br /&gt;
* &#039;&#039;dockfiles/&#039;&#039;: parameters files and INDOCK for given docking configuration&lt;br /&gt;
* &#039;&#039;output/&#039;&#039;: contains: &lt;br /&gt;
** joblist&lt;br /&gt;
** sub-directories &#039;&#039;1/&#039;&#039; for actives and &#039;&#039;2/&#039;&#039; for decoys (the former containing OUTDOCK and test.mol2 files, the latter containing just OUTDOCK)&lt;br /&gt;
** log files for the retrodock jobs&lt;br /&gt;
* &#039;&#039;roc.png&#039;&#039;: the ROC enrichment curve (log-scaled x-axis) for given docking configuration&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039; by default, a mol2 file is exported only for actives (&#039;&#039;output/1/&#039;&#039;), not for decoys (&#039;&#039;output/2/&#039;&#039;), in order to prevent disk space issues.&lt;/div&gt;</summary>
		<author><name>Ianscottknight</name></author>
	</entry>
	<entry>
		<id>http://wiki.docking.org/index.php?title=How_to_compile_DOCK&amp;diff=14885</id>
		<title>How to compile DOCK</title>
		<link rel="alternate" type="text/html" href="http://wiki.docking.org/index.php?title=How_to_compile_DOCK&amp;diff=14885"/>
		<updated>2022-10-07T16:55:25Z</updated>

		<summary type="html">&lt;p&gt;Ianscottknight: /* gfortran instructions */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== compiling dock3.8 ==&lt;br /&gt;
&lt;br /&gt;
Note: not every machine will have the prerequisite development libraries installed to compile the DOCK Fortran code. It is best to log into our &amp;quot;psi&amp;quot; machine, which is known to have them. Also note that git will not work from psi, so the code you would like to compile must be accessible via NFS.&lt;br /&gt;
&lt;br /&gt;
  ssh $USER@psi&lt;br /&gt;
&lt;br /&gt;
Wynton development nodes are also a good place to compile with gfortran. For PGF you will always need to log into our psi machine.&lt;br /&gt;
&lt;br /&gt;
=== gfortran instructions ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039; in search.f and search_multi_cluster.f you need to (1) uncomment the lines that are commented with &amp;quot;for gfortran&amp;quot; and (2) comment out the lines that are commented with &amp;quot;for portland group&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
gfortran is a widely available compiler that is preinstalled on all lab machines. In order to compile with gfortran, start from the base directory of DOCK and follow these instructions:&lt;br /&gt;
&lt;br /&gt;
  cd dock3/src/libfgz&lt;br /&gt;
  COMPILER=gfortran make&lt;br /&gt;
  cd ../i386&lt;br /&gt;
  COMPILER=gfortran make&lt;br /&gt;
&lt;br /&gt;
You should now have a number of .o object files as well as a dock64 executable in the i386 directory. To compile a debug build, add DEBUG=1 before make; to compile for a different architecture, prefix make with SIZE=[32|64].&lt;br /&gt;
&lt;br /&gt;
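COMPILER, DEBUG, and SIZE are read from the environment, so prefixing them to make applies them to that single invocation only. The sketch below demonstrates just the shell prefix mechanism, with echo standing in for the real build:

```shell
# Variables prefixed to a command are passed in that command's
# environment without persisting in the current shell.
COMPILER=gfortran DEBUG=1 sh -c 'echo "would build with COMPILER=$COMPILER DEBUG=$DEBUG"'
echo "afterwards, COMPILER=${COMPILER:-unset} in the parent shell"
```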
If you mistakenly compile something (wrong compiler or the like) you can clean up the files produced from compilation with &amp;quot;make clean&amp;quot;. Don&#039;t worry about the error messages you may get from this, something is a little fishy with our makefile.&lt;br /&gt;
&lt;br /&gt;
It may be necessary to run make twice due to an odd error with the version.f file.&lt;br /&gt;
&lt;br /&gt;
=== pgf instructions ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039; in search.f and search_multi_cluster.f you need to (1) comment out the lines that are commented with &amp;quot;for gfortran&amp;quot; and (2) uncomment the lines that are commented with &amp;quot;for portland group&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
PGF or the Portland Group compiler is the premium option for fortran compilation. In order to compile with pgf you must log onto the psi machine, where the license and installation is located.&lt;br /&gt;
&lt;br /&gt;
  ssh $USER@psi&lt;br /&gt;
&lt;br /&gt;
Set up the compilation environment through the following script:&lt;br /&gt;
&lt;br /&gt;
  bash # if you are not already on bash&lt;br /&gt;
  export PATH=/nfs/soft/pgi/current/linux86-64/12.10/bin:$PATH&lt;br /&gt;
  source /nfs/home/tbalius/zzz.virtualenvs/virtualenv-1.9.1/myVEonGimel/bin/activate&lt;br /&gt;
  source /nfs/soft/pgi/env.sh&lt;br /&gt;
&lt;br /&gt;
The instructions to compile are identical to those for gfortran, except that you need to set COMPILER=pgf, e.g.:&lt;br /&gt;
&lt;br /&gt;
  cd dock3/src/libfgz&lt;br /&gt;
  COMPILER=pgf make&lt;br /&gt;
  cd ../i386&lt;br /&gt;
  COMPILER=pgf make&lt;br /&gt;
&lt;br /&gt;
You can add DEBUG, SIZE etc. options just like gfortran.&lt;br /&gt;
&lt;br /&gt;
Sometimes there is an odd error with the licensing server that causes compilation to fail. I don&#039;t remember exactly how I fixed this in the past, but I believe it had to do with restarting the license service via systemctl.&lt;br /&gt;
&lt;br /&gt;
It may be necessary to run make twice due to an odd error with the version.f file.&lt;br /&gt;
&lt;br /&gt;
== compiling dock3.7 ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
  1. log into psi (portland fortran compiler is on psi)&lt;br /&gt;
  2. export PATH=/nfs/soft/pgi/current/linux86-64/12.10/bin:$PATH	&lt;br /&gt;
  3. source Trent&#039;s virtual environment (source /nfs/home/tbalius/zzz.virtualenvs/virtualenv-1.9.1/myVEonGimel/bin/activate)&lt;br /&gt;
  4. source /nfs/soft/pgi/env.csh&lt;br /&gt;
&lt;br /&gt;
== DOCK 3.7 ==&lt;br /&gt;
&lt;br /&gt;
First you need the source code:&lt;br /&gt;
* lab members and other developers may check out dock from [[Github]]&lt;br /&gt;
* others can request the distribution here [http://dock.compbio.ucsf.edu/Online_Licensing/dock_license_application.html]&lt;br /&gt;
=== compiling dock === &lt;br /&gt;
&lt;br /&gt;
To compile DOCK go to path/DOCK/src/i386&lt;br /&gt;
 make&lt;br /&gt;
 make DEBUG=1&lt;br /&gt;
 make SIZE=32 [64]&lt;br /&gt;
 make clean&lt;br /&gt;
&lt;br /&gt;
Note:&lt;br /&gt;
*To compile with portland group compilers&lt;br /&gt;
** go to a machine where a pgf license is installed (sgehead1 or psi).&lt;br /&gt;
&lt;br /&gt;
*To add a file for compilation, add the xxxx.f file to path/DOCK/src/ and add xxxx.o to i386/Makefile object list.&lt;br /&gt;
=== debugging dock ===&lt;br /&gt;
Debugging:&lt;br /&gt;
&lt;br /&gt;
  pgdbg  path/DOCK/src/i386/dock_prof64&lt;br /&gt;
&lt;br /&gt;
=== profiling dock ===&lt;br /&gt;
Profiling:&lt;br /&gt;
 make pgprofile&lt;br /&gt;
 less pgprof.out&lt;br /&gt;
 pgprof - opens a GUI profiler that interprets pgprof.out !&lt;br /&gt;
&lt;br /&gt;
you may also do the following:&lt;br /&gt;
 make dock_prof64&lt;br /&gt;
 cd /dir/test &lt;br /&gt;
 /dockdir/dock_prof64 INDOCK&lt;br /&gt;
 less pgprof.out&lt;br /&gt;
&lt;br /&gt;
 make gmon.out&lt;br /&gt;
 less gmon.out&lt;br /&gt;
&lt;br /&gt;
 make gprofile&lt;br /&gt;
 less gprofile.txt&lt;br /&gt;
&lt;br /&gt;
== DOCK 3.5.54 ==&lt;br /&gt;
This is for the Shoichet Lab local version of DOCK 3.5.54 trunk. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Checking out the source files&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Commands:&lt;br /&gt;
 csh&lt;br /&gt;
 mkdir /where/to/put&lt;br /&gt;
 cd /where/to/put&lt;br /&gt;
 svn checkout file:///raid4/svn/dock&lt;br /&gt;
 svn checkout file:///raid4/svn/libfgz&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Compiling the program on our cluster&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
First, you need to set the path to the PGF compiler by adding these lines to your .login file (at the end):&lt;br /&gt;
&lt;br /&gt;
 setenv DOCK_BASE ~xyz/dockenv&lt;br /&gt;
 echo DOCK_BASE set to $DOCK_BASE.&lt;br /&gt;
 source $DOCK_BASE/etc/login&lt;br /&gt;
&lt;br /&gt;
When you log in to sgehead now, you should see the &amp;quot;Enabling pgf compiler&amp;quot; message.&lt;br /&gt;
&lt;br /&gt;
Commands:&lt;br /&gt;
 ssh sgehead&lt;br /&gt;
 cd /where/to/put/libfgz/trunk&lt;br /&gt;
 make&lt;br /&gt;
&lt;br /&gt;
Since we still have some 32bit computers, you&#039;ll also want to do&lt;br /&gt;
 make SIZE=32&lt;br /&gt;
before leaving the libfgz branch and going to DOCK:&lt;br /&gt;
&lt;br /&gt;
 cd ../../dock/trunk/i386&lt;br /&gt;
 make&lt;br /&gt;
&lt;br /&gt;
This makes the 64 bit version. Some options:&lt;br /&gt;
&lt;br /&gt;
 make SIZE=32&lt;br /&gt;
&lt;br /&gt;
Makes the 32bit version, useful for running on the cluster since some machines are older.&lt;br /&gt;
&lt;br /&gt;
 make DEBUG=1 &lt;br /&gt;
&lt;br /&gt;
Makes a debug version that will report line numbers of errors and is usable with pgdbg (the Portland Group Debugger), which is useful when writing code but is 10x (or more) slower.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Compiling the program on the shared QB3 cluster&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
On one of the compilation nodes on the shared QB3 cluster (optint1 or optint2):&lt;br /&gt;
&lt;br /&gt;
 ssh optint2&lt;br /&gt;
 cd /where/to/put/libfgz/trunk&lt;br /&gt;
 cp Makefile Makefile.old&lt;br /&gt;
 modify Makefile:&lt;br /&gt;
  uncomment the following:&lt;br /&gt;
  FC = ifort -O3&lt;br /&gt;
  CC = icc -O3&lt;br /&gt;
 make&lt;br /&gt;
 cd ../../dock/trunk/i386&lt;br /&gt;
 cp Makefile Makefile.old&lt;br /&gt;
 modify Makefile&lt;br /&gt;
  uncomment the following:&lt;br /&gt;
  F77 = ifort&lt;br /&gt;
  FFLAGS = -O3 -convert big_endian&lt;br /&gt;
 make dock&lt;br /&gt;
&lt;br /&gt;
[[Category:Tutorials]]&lt;br /&gt;
[[Category:DOCK]]&lt;/div&gt;</summary>
		<author><name>Ianscottknight</name></author>
	</entry>
	<entry>
		<id>http://wiki.docking.org/index.php?title=How_to_compile_DOCK&amp;diff=14884</id>
		<title>How to compile DOCK</title>
		<link rel="alternate" type="text/html" href="http://wiki.docking.org/index.php?title=How_to_compile_DOCK&amp;diff=14884"/>
		<updated>2022-10-07T16:55:11Z</updated>

		<summary type="html">&lt;p&gt;Ianscottknight: /* pgf instructions */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== compiling dock3.8 ==&lt;br /&gt;
&lt;br /&gt;
Note: not every machine will have the prerequisite development libraries installed to compile the DOCK Fortran code. It is best to log into our &amp;quot;psi&amp;quot; machine, which is known to have them. Also note that git will not work from psi, so the code you would like to compile must be accessible via NFS.&lt;br /&gt;
&lt;br /&gt;
  ssh $USER@psi&lt;br /&gt;
&lt;br /&gt;
Wynton development nodes are also a good place to compile with gfortran. For PGF you will always need to log into our psi machine.&lt;br /&gt;
&lt;br /&gt;
=== gfortran instructions ===&lt;br /&gt;
&lt;br /&gt;
Note: in search.f and search_multi_cluster.f you need to (1) uncomment the lines that are commented with &amp;quot;for gfortran&amp;quot; and (2) comment out the lines that are commented with &amp;quot;for portland group&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
gfortran is a widely available compiler that is preinstalled on all lab machines. In order to compile with gfortran, start from the base directory of DOCK and follow these instructions:&lt;br /&gt;
&lt;br /&gt;
  cd dock3/src/libfgz&lt;br /&gt;
  COMPILER=gfortran make&lt;br /&gt;
  cd ../i386&lt;br /&gt;
  COMPILER=gfortran make&lt;br /&gt;
&lt;br /&gt;
You should now have a number of .o object files as well as a dock64 executable in the i386 directory. To compile a debug build, add DEBUG=1 before make; to compile for a different architecture, prefix make with SIZE=[32|64].&lt;br /&gt;
&lt;br /&gt;
If you mistakenly compile something (wrong compiler or the like) you can clean up the files produced from compilation with &amp;quot;make clean&amp;quot;. Don&#039;t worry about the error messages you may get from this, something is a little fishy with our makefile.&lt;br /&gt;
&lt;br /&gt;
It may be necessary to run make twice due to an odd error with the version.f file.&lt;br /&gt;
&lt;br /&gt;
=== pgf instructions ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039; in search.f and search_multi_cluster.f you need to (1) comment out the lines that are commented with &amp;quot;for gfortran&amp;quot; and (2) uncomment the lines that are commented with &amp;quot;for portland group&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
PGF or the Portland Group compiler is the premium option for fortran compilation. In order to compile with pgf you must log onto the psi machine, where the license and installation is located.&lt;br /&gt;
&lt;br /&gt;
  ssh $USER@psi&lt;br /&gt;
&lt;br /&gt;
Set up the compilation environment through the following script:&lt;br /&gt;
&lt;br /&gt;
  bash # if you are not already on bash&lt;br /&gt;
  export PATH=/nfs/soft/pgi/current/linux86-64/12.10/bin:$PATH&lt;br /&gt;
  source /nfs/home/tbalius/zzz.virtualenvs/virtualenv-1.9.1/myVEonGimel/bin/activate&lt;br /&gt;
  source /nfs/soft/pgi/env.sh&lt;br /&gt;
&lt;br /&gt;
The instructions to compile are identical to those for gfortran, except that you need to set COMPILER=pgf, e.g.:&lt;br /&gt;
&lt;br /&gt;
  cd dock3/src/libfgz&lt;br /&gt;
  COMPILER=pgf make&lt;br /&gt;
  cd ../i386&lt;br /&gt;
  COMPILER=pgf make&lt;br /&gt;
&lt;br /&gt;
You can add DEBUG, SIZE etc. options just like gfortran.&lt;br /&gt;
&lt;br /&gt;
Sometimes there is an odd error with the licensing server that causes compilation to fail. I don&#039;t remember exactly how I fixed this in the past, but I believe it had to do with restarting the license service via systemctl.&lt;br /&gt;
&lt;br /&gt;
It may be necessary to run make twice due to an odd error with the version.f file.&lt;br /&gt;
&lt;br /&gt;
== compiling dock3.7 ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
  1. log into psi (portland fortran compiler is on psi)&lt;br /&gt;
  2. export PATH=/nfs/soft/pgi/current/linux86-64/12.10/bin:$PATH	&lt;br /&gt;
  3. source Trent&#039;s virtual environment (source /nfs/home/tbalius/zzz.virtualenvs/virtualenv-1.9.1/myVEonGimel/bin/activate)&lt;br /&gt;
  4. source /nfs/soft/pgi/env.csh&lt;br /&gt;
&lt;br /&gt;
== DOCK 3.7 ==&lt;br /&gt;
&lt;br /&gt;
First you need the source code:&lt;br /&gt;
* lab members and other developers may check out dock from [[Github]]&lt;br /&gt;
* others can request the distribution here [http://dock.compbio.ucsf.edu/Online_Licensing/dock_license_application.html]&lt;br /&gt;
=== compiling dock === &lt;br /&gt;
&lt;br /&gt;
To compile DOCK go to path/DOCK/src/i386&lt;br /&gt;
 make&lt;br /&gt;
 make DEBUG=1&lt;br /&gt;
 make SIZE=32 [64]&lt;br /&gt;
 make clean&lt;br /&gt;
&lt;br /&gt;
Note:&lt;br /&gt;
*To compile with portland group compilers&lt;br /&gt;
** go to a machine where a pgf license is installed (sgehead1 or psi).&lt;br /&gt;
&lt;br /&gt;
*To add a file for compilation, add the xxxx.f file to path/DOCK/src/ and add xxxx.o to i386/Makefile object list.&lt;br /&gt;
=== debugging dock ===&lt;br /&gt;
Debugging:&lt;br /&gt;
&lt;br /&gt;
  pgdbg  path/DOCK/src/i386/dock_prof64&lt;br /&gt;
&lt;br /&gt;
=== profiling dock ===&lt;br /&gt;
Profiling:&lt;br /&gt;
 make pgprofile&lt;br /&gt;
 less pgprof.out&lt;br /&gt;
 pgprof - opens a GUI profiler that interprets pgprof.out !&lt;br /&gt;
&lt;br /&gt;
you may also do the following:&lt;br /&gt;
 make dock_prof64&lt;br /&gt;
 cd /dir/test &lt;br /&gt;
 /dockdir/dock_prof64 INDOCK&lt;br /&gt;
 less pgprof.out&lt;br /&gt;
&lt;br /&gt;
 make gmon.out&lt;br /&gt;
 less gmon.out&lt;br /&gt;
&lt;br /&gt;
 make gprofile&lt;br /&gt;
 less gprofile.txt&lt;br /&gt;
&lt;br /&gt;
== DOCK 3.5.54 ==&lt;br /&gt;
This is for the Shoichet Lab local version of DOCK 3.5.54 trunk. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Checking out the source files&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Commands:&lt;br /&gt;
 csh&lt;br /&gt;
 mkdir /where/to/put&lt;br /&gt;
 cd /where/to/put&lt;br /&gt;
 svn checkout file:///raid4/svn/dock&lt;br /&gt;
 svn checkout file:///raid4/svn/libfgz&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Compiling the program on our cluster&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
First, you need to set the path to the PGF compiler by adding these lines to your .login file (at the end):&lt;br /&gt;
&lt;br /&gt;
 setenv DOCK_BASE ~xyz/dockenv&lt;br /&gt;
 echo DOCK_BASE set to $DOCK_BASE.&lt;br /&gt;
 source $DOCK_BASE/etc/login&lt;br /&gt;
&lt;br /&gt;
When you log in to sgehead now, you should see the &amp;quot;Enabling pgf compiler&amp;quot; message.&lt;br /&gt;
&lt;br /&gt;
Commands:&lt;br /&gt;
 ssh sgehead&lt;br /&gt;
 cd /where/to/put/libfgz/trunk&lt;br /&gt;
 make&lt;br /&gt;
&lt;br /&gt;
Since we still have some 32bit computers, you&#039;ll also want to do&lt;br /&gt;
 make SIZE=32&lt;br /&gt;
before leaving the libfgz branch and going to DOCK:&lt;br /&gt;
&lt;br /&gt;
 cd ../../dock/trunk/i386&lt;br /&gt;
 make&lt;br /&gt;
&lt;br /&gt;
This makes the 64 bit version. Some options:&lt;br /&gt;
&lt;br /&gt;
 make SIZE=32&lt;br /&gt;
&lt;br /&gt;
Makes the 32bit version, useful for running on the cluster since some machines are older.&lt;br /&gt;
&lt;br /&gt;
 make DEBUG=1 &lt;br /&gt;
&lt;br /&gt;
Makes a debug version that will report line numbers of errors and is usable with pgdbg (the Portland Group Debugger), which is useful when writing code but is 10x (or more) slower.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Compiling the program on the shared QB3 cluster&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
On one of the compilation nodes on the shared QB3 cluster (optint1 or optint2):&lt;br /&gt;
&lt;br /&gt;
 ssh optint2&lt;br /&gt;
 cd /where/to/put/libfgz/trunk&lt;br /&gt;
 cp Makefile Makefile.old&lt;br /&gt;
 modify Makefile:&lt;br /&gt;
  uncomment the following:&lt;br /&gt;
  FC = ifort -O3&lt;br /&gt;
  CC = icc -O3&lt;br /&gt;
 make&lt;br /&gt;
 cd ../../dock/trunk/i386&lt;br /&gt;
 cp Makefile Makefile.old&lt;br /&gt;
 modify Makefile&lt;br /&gt;
  uncomment the following:&lt;br /&gt;
  F77 = ifort&lt;br /&gt;
  FFLAGS = -O3 -convert big_endian&lt;br /&gt;
 make dock&lt;br /&gt;
&lt;br /&gt;
[[Category:Tutorials]]&lt;br /&gt;
[[Category:DOCK]]&lt;/div&gt;</summary>
		<author><name>Ianscottknight</name></author>
	</entry>
	<entry>
		<id>http://wiki.docking.org/index.php?title=How_to_compile_DOCK&amp;diff=14883</id>
		<title>How to compile DOCK</title>
		<link rel="alternate" type="text/html" href="http://wiki.docking.org/index.php?title=How_to_compile_DOCK&amp;diff=14883"/>
		<updated>2022-10-07T16:55:01Z</updated>

		<summary type="html">&lt;p&gt;Ianscottknight: /* pgf instructions */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== compiling dock3.8 ==&lt;br /&gt;
&lt;br /&gt;
Note: not every machine will have the prerequisite development libraries installed to compile the DOCK Fortran code. It is best to log into our &amp;quot;psi&amp;quot; machine, which is known to have them. Also note that git will not work from psi, so the code you would like to compile must be accessible via NFS.&lt;br /&gt;
&lt;br /&gt;
  ssh $USER@psi&lt;br /&gt;
&lt;br /&gt;
Wynton development nodes are also a good place to compile with gfortran. For PGF you will always need to log into our psi machine.&lt;br /&gt;
&lt;br /&gt;
=== gfortran instructions ===&lt;br /&gt;
&lt;br /&gt;
Note: in search.f and search_multi_cluster.f you need to (1) uncomment the lines that are commented with &amp;quot;for gfortran&amp;quot; and (2) comment out the lines that are commented with &amp;quot;for portland group&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
gfortran is a widely available compiler that is preinstalled on all lab machines. In order to compile with gfortran, start from the base directory of DOCK and follow these instructions:&lt;br /&gt;
&lt;br /&gt;
  cd dock3/src/libfgz&lt;br /&gt;
  COMPILER=gfortran make&lt;br /&gt;
  cd ../i386&lt;br /&gt;
  COMPILER=gfortran make&lt;br /&gt;
&lt;br /&gt;
You should now have a number of .o object files as well as a dock64 executable in the i386 directory. To compile a debug build, add DEBUG=1 before make; to compile for a different architecture, prefix make with SIZE=[32|64].&lt;br /&gt;
&lt;br /&gt;
If you mistakenly compile something (wrong compiler or the like) you can clean up the files produced from compilation with &amp;quot;make clean&amp;quot;. Don&#039;t worry about the error messages you may get from this, something is a little fishy with our makefile.&lt;br /&gt;
&lt;br /&gt;
It may be necessary to run make twice due to an odd error with the version.f file.&lt;br /&gt;
&lt;br /&gt;
=== pgf instructions ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note&#039;&#039;&#039;: in search.f and search_multi_cluster.f you need to (1) comment out the lines that are commented with &amp;quot;for gfortran&amp;quot; and (2) uncomment the lines that are commented with &amp;quot;for portland group&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
PGF or the Portland Group compiler is the premium option for fortran compilation. In order to compile with pgf you must log onto the psi machine, where the license and installation is located.&lt;br /&gt;
&lt;br /&gt;
  ssh $USER@psi&lt;br /&gt;
&lt;br /&gt;
Set up the compilation environment through the following script:&lt;br /&gt;
&lt;br /&gt;
  bash # if you are not already on bash&lt;br /&gt;
  export PATH=/nfs/soft/pgi/current/linux86-64/12.10/bin:$PATH&lt;br /&gt;
  source /nfs/home/tbalius/zzz.virtualenvs/virtualenv-1.9.1/myVEonGimel/bin/activate&lt;br /&gt;
  source /nfs/soft/pgi/env.sh&lt;br /&gt;
&lt;br /&gt;
The instructions to compile are identical to those for gfortran, except that you need to set COMPILER=pgf, e.g.:&lt;br /&gt;
&lt;br /&gt;
  cd dock3/src/libfgz&lt;br /&gt;
  COMPILER=pgf make&lt;br /&gt;
  cd ../i386&lt;br /&gt;
  COMPILER=pgf make&lt;br /&gt;
&lt;br /&gt;
You can add DEBUG, SIZE etc. options just like gfortran.&lt;br /&gt;
&lt;br /&gt;
Sometimes there is an odd error with the licensing server that causes compilation to fail. I don&#039;t remember exactly how I fixed this in the past, but I believe it had to do with restarting the license service via systemctl.&lt;br /&gt;
&lt;br /&gt;
It may be necessary to run make twice due to an odd error with the version.f file.&lt;br /&gt;
&lt;br /&gt;
== compiling dock3.7 ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
  1. log into psi (portland fortran compiler is on psi)&lt;br /&gt;
  2. export PATH=/nfs/soft/pgi/current/linux86-64/12.10/bin:$PATH	&lt;br /&gt;
  3. source Trent&#039;s virtual environment (source /nfs/home/tbalius/zzz.virtualenvs/virtualenv-1.9.1/myVEonGimel/bin/activate)&lt;br /&gt;
  4. source /nfs/soft/pgi/env.csh&lt;br /&gt;
&lt;br /&gt;
== DOCK 3.7 ==&lt;br /&gt;
&lt;br /&gt;
First, you need the source code:&lt;br /&gt;
* lab members and other developers may check out dock from [[Github]]&lt;br /&gt;
* others can request the distribution here [http://dock.compbio.ucsf.edu/Online_Licensing/dock_license_application.html]&lt;br /&gt;
=== compiling dock === &lt;br /&gt;
&lt;br /&gt;
To compile DOCK, go to path/DOCK/src/i386:&lt;br /&gt;
 make&lt;br /&gt;
 make DEBUG=1&lt;br /&gt;
 make SIZE=32 [64]&lt;br /&gt;
 make clean&lt;br /&gt;
&lt;br /&gt;
Note:&lt;br /&gt;
*To compile with portland group compilers&lt;br /&gt;
** go to a machine where the pgf license is installed (sgehead1 or psi).&lt;br /&gt;
&lt;br /&gt;
*To add a file for compilation, add the xxxx.f file to path/DOCK/src/ and add xxxx.o to i386/Makefile object list.&lt;br /&gt;
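&lt;br /&gt;
For example, to add a hypothetical source file foo.f (foo.f and foo.o are placeholder names; the object list&#039;s variable name varies, so check i386/Makefile):&lt;br /&gt;
&lt;br /&gt;
 cp foo.f path/DOCK/src/&lt;br /&gt;
 vi path/DOCK/src/i386/Makefile  # append foo.o to the object list&lt;br /&gt;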
=== debugging dock ===&lt;br /&gt;
Debugging :&lt;br /&gt;
&lt;br /&gt;
  pgdbg  path/DOCK/src/i386/dock_prof64&lt;br /&gt;
&lt;br /&gt;
=== profiling dock ===&lt;br /&gt;
Profiling:&lt;br /&gt;
 make pgprofile&lt;br /&gt;
 less pgprof.out&lt;br /&gt;
 pgprof - opens a GUI profiler that interprets pgprof.out !&lt;br /&gt;
&lt;br /&gt;
You may also do the following:&lt;br /&gt;
 make dock_prof64&lt;br /&gt;
 cd /dir/test &lt;br /&gt;
 /dockdir/dock_prof64 INDOCK&lt;br /&gt;
 less pgprof.out&lt;br /&gt;
&lt;br /&gt;
 make gmon.out&lt;br /&gt;
 less gmon.out&lt;br /&gt;
&lt;br /&gt;
 make gprofile&lt;br /&gt;
 less gprofile.txt&lt;br /&gt;
&lt;br /&gt;
== DOCK 3.5.54 ==&lt;br /&gt;
This is for the Shoichet Lab local version of DOCK 3.5.54 trunk. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Checking out the source files&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Commands:&lt;br /&gt;
 csh&lt;br /&gt;
 mkdir /where/to/put&lt;br /&gt;
 cd /where/to/put&lt;br /&gt;
 svn checkout file:///raid4/svn/dock&lt;br /&gt;
 svn checkout file:///raid4/svn/libfgz&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Compiling the program on our cluster&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
First, you need to set up the PGF compiler by adding these lines to the end of your .login file:&lt;br /&gt;
&lt;br /&gt;
 setenv DOCK_BASE ~xyz/dockenv&lt;br /&gt;
 echo DOCK_BASE set to $DOCK_BASE.&lt;br /&gt;
 source $DOCK_BASE/etc/login&lt;br /&gt;
&lt;br /&gt;
When you log in to sgehead now, you should see the &amp;quot;Enabling pgf compiler&amp;quot; message.&lt;br /&gt;
&lt;br /&gt;
Commands:&lt;br /&gt;
 ssh sgehead&lt;br /&gt;
 cd /where/to/put/libfgz/trunk&lt;br /&gt;
 make&lt;br /&gt;
&lt;br /&gt;
Since we still have some 32-bit computers, you&#039;ll also want to do&lt;br /&gt;
 make SIZE=32&lt;br /&gt;
before leaving the libfgz branch and going to DOCK:&lt;br /&gt;
&lt;br /&gt;
 cd ../../dock/trunk/i386&lt;br /&gt;
 make&lt;br /&gt;
&lt;br /&gt;
This makes the 64-bit version. Some options:&lt;br /&gt;
&lt;br /&gt;
 make SIZE=32&lt;br /&gt;
&lt;br /&gt;
Makes the 32-bit version, which is useful for running on the cluster since some machines are older.&lt;br /&gt;
&lt;br /&gt;
 make DEBUG=1 &lt;br /&gt;
&lt;br /&gt;
Makes a debug version that will report line numbers of errors and is usable with pgdbg (the Portland Group Debugger), which is useful when writing code but is 10x (or more) slower.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Compiling the program on the shared QB3 cluster&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
On one of the compilation nodes on the shared QB3 cluster (optint1 or optint2):&lt;br /&gt;
&lt;br /&gt;
 ssh optint2&lt;br /&gt;
 cd /where/to/put/libfgz/trunk&lt;br /&gt;
 cp Makefile Makefile.old&lt;br /&gt;
 modify Makefile:&lt;br /&gt;
  uncomment the following:&lt;br /&gt;
  FC = ifort -O3&lt;br /&gt;
  CC = icc -O3&lt;br /&gt;
 make&lt;br /&gt;
 cd ../../dock/trunk/i386&lt;br /&gt;
 cp Makefile Makefile.old&lt;br /&gt;
 modify Makefile&lt;br /&gt;
  uncomment the following:&lt;br /&gt;
  F77 = ifort&lt;br /&gt;
  FFLAGS = -O3 -convert big_endian&lt;br /&gt;
 make dock&lt;br /&gt;
&lt;br /&gt;
[[Category:Tutorials]]&lt;br /&gt;
[[Category:DOCK]]&lt;/div&gt;</summary>
		<author><name>Ianscottknight</name></author>
	</entry>
	<entry>
		<id>http://wiki.docking.org/index.php?title=How_to_compile_DOCK&amp;diff=14882</id>
		<title>How to compile DOCK</title>
		<link rel="alternate" type="text/html" href="http://wiki.docking.org/index.php?title=How_to_compile_DOCK&amp;diff=14882"/>
		<updated>2022-10-07T16:54:32Z</updated>

		<summary type="html">&lt;p&gt;Ianscottknight: /* pgf instructions */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== compiling dock3.8 ==&lt;br /&gt;
&lt;br /&gt;
Note: not every machine will have the prerequisite development libraries installed to compile the DOCK Fortran code. It is best to log into our &amp;quot;psi&amp;quot; machine, which is known to have them. Note that git will not work from psi, so the code you would like to compile must be accessible via NFS.&lt;br /&gt;
&lt;br /&gt;
  ssh $USER@psi&lt;br /&gt;
&lt;br /&gt;
Wynton development nodes are also a good place to compile with gfortran. For PGF you will always need to log into our psi machine.&lt;br /&gt;
&lt;br /&gt;
=== gfortran instructions ===&lt;br /&gt;
&lt;br /&gt;
Note: in search.f and search_multi_cluster.f you need to (1) uncomment the lines that are commented with &amp;quot;for gfortran&amp;quot; and (2) comment out the lines that are commented with &amp;quot;for portland group&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
gfortran is a widely available compiler that is preinstalled on all lab machines. In order to compile with gfortran, start from the base directory of DOCK and follow these instructions:&lt;br /&gt;
&lt;br /&gt;
  cd dock3/src/libfgz&lt;br /&gt;
  COMPILER=gfortran make&lt;br /&gt;
  cd ../i386&lt;br /&gt;
  COMPILER=gfortran make&lt;br /&gt;
&lt;br /&gt;
You should now have a number of .o object files as well as a dock64 executable in the i386 directory. If you want to compile a debug build, add DEBUG=1 before make; if you want to compile for a different architecture, prefix make with SIZE=[32|64].&lt;br /&gt;
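&lt;br /&gt;
For instance, a 32-bit debug build with gfortran combines the options above (an illustrative invocation; confirm your Makefile accepts these variables together):&lt;br /&gt;
&lt;br /&gt;
  COMPILER=gfortran SIZE=32 DEBUG=1 make&lt;br /&gt;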
&lt;br /&gt;
If you mistakenly compile something (with the wrong compiler or the like), you can clean up the files produced by compilation with &amp;quot;make clean&amp;quot;. Don&#039;t worry about the error messages you may get from this; something is a little fishy with our makefile.&lt;br /&gt;
&lt;br /&gt;
It may be necessary to run make twice due to an odd error with the version.f file.&lt;br /&gt;
&lt;br /&gt;
=== pgf instructions ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note&#039;&#039;&#039;: in search.f and search_multi_cluster.f you need to (1) comment out the lines that are commented with &amp;quot;for gfortran&amp;quot; and (2) uncomment the lines that are commented with &amp;quot;for portland group&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
PGF, the Portland Group Fortran compiler, is the premium option for Fortran compilation. To compile with pgf, you must log onto the psi machine, where the license and installation are located.&lt;br /&gt;
&lt;br /&gt;
  ssh $USER@psi&lt;br /&gt;
&lt;br /&gt;
Set up the compilation environment through the following script:&lt;br /&gt;
&lt;br /&gt;
  bash # if you are not already on bash&lt;br /&gt;
  export PATH=/nfs/soft/pgi/current/linux86-64/12.10/bin:$PATH&lt;br /&gt;
  source /nfs/home/tbalius/zzz.virtualenvs/virtualenv-1.9.1/myVEonGimel/bin/activate&lt;br /&gt;
  source /nfs/soft/pgi/env.sh&lt;br /&gt;
&lt;br /&gt;
The instructions to compile are identical to those for gfortran; the only difference is that you need to set COMPILER=pgf, e.g.:&lt;br /&gt;
&lt;br /&gt;
  cd dock3/src/libfgz&lt;br /&gt;
  COMPILER=pgf make&lt;br /&gt;
  cd ../i386&lt;br /&gt;
  COMPILER=pgf make&lt;br /&gt;
&lt;br /&gt;
You can add the DEBUG, SIZE, etc. options just as with gfortran.&lt;br /&gt;
&lt;br /&gt;
Sometimes there is an odd error with the licensing server that causes compilation to fail. I don&#039;t remember exactly how I fixed this in the past, but I believe it had to do with restarting the license service via systemctl.&lt;br /&gt;
&lt;br /&gt;
It may be necessary to run make twice due to an odd error with the version.f file.&lt;br /&gt;
&lt;br /&gt;
== compiling dock3.7 ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
  1. log into psi (portland fortran compiler is on psi)&lt;br /&gt;
  2. export PATH=/nfs/soft/pgi/current/linux86-64/12.10/bin:$PATH	&lt;br /&gt;
  3. source Trent&#039;s virtual environment (source /nfs/home/tbalius/zzz.virtualenvs/virtualenv-1.9.1/myVEonGimel/bin/activate)&lt;br /&gt;
  4. source /nfs/soft/pgi/env.csh&lt;br /&gt;
&lt;br /&gt;
== DOCK 3.7 ==&lt;br /&gt;
&lt;br /&gt;
First, you need the source code:&lt;br /&gt;
* lab members and other developers may check out dock from [[Github]]&lt;br /&gt;
* others can request the distribution here [http://dock.compbio.ucsf.edu/Online_Licensing/dock_license_application.html]&lt;br /&gt;
=== compiling dock === &lt;br /&gt;
&lt;br /&gt;
To compile DOCK, go to path/DOCK/src/i386:&lt;br /&gt;
 make&lt;br /&gt;
 make DEBUG=1&lt;br /&gt;
 make SIZE=32 [64]&lt;br /&gt;
 make clean&lt;br /&gt;
&lt;br /&gt;
Note:&lt;br /&gt;
*To compile with portland group compilers&lt;br /&gt;
** go to a machine where the pgf license is installed (sgehead1 or psi).&lt;br /&gt;
&lt;br /&gt;
*To add a file for compilation, add the xxxx.f file to path/DOCK/src/ and add xxxx.o to i386/Makefile object list.&lt;br /&gt;
=== debugging dock ===&lt;br /&gt;
Debugging :&lt;br /&gt;
&lt;br /&gt;
  pgdbg  path/DOCK/src/i386/dock_prof64&lt;br /&gt;
&lt;br /&gt;
=== profiling dock ===&lt;br /&gt;
Profiling:&lt;br /&gt;
 make pgprofile&lt;br /&gt;
 less pgprof.out&lt;br /&gt;
 pgprof - opens a GUI profiler that interprets pgprof.out !&lt;br /&gt;
&lt;br /&gt;
You may also do the following:&lt;br /&gt;
 make dock_prof64&lt;br /&gt;
 cd /dir/test &lt;br /&gt;
 /dockdir/dock_prof64 INDOCK&lt;br /&gt;
 less pgprof.out&lt;br /&gt;
&lt;br /&gt;
 make gmon.out&lt;br /&gt;
 less gmon.out&lt;br /&gt;
&lt;br /&gt;
 make gprofile&lt;br /&gt;
 less gprofile.txt&lt;br /&gt;
&lt;br /&gt;
== DOCK 3.5.54 ==&lt;br /&gt;
This is for the Shoichet Lab local version of DOCK 3.5.54 trunk. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Checking out the source files&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Commands:&lt;br /&gt;
 csh&lt;br /&gt;
 mkdir /where/to/put&lt;br /&gt;
 cd /where/to/put&lt;br /&gt;
 svn checkout file:///raid4/svn/dock&lt;br /&gt;
 svn checkout file:///raid4/svn/libfgz&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Compiling the program on our cluster&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
First, you need to set up the PGF compiler by adding these lines to the end of your .login file:&lt;br /&gt;
&lt;br /&gt;
 setenv DOCK_BASE ~xyz/dockenv&lt;br /&gt;
 echo DOCK_BASE set to $DOCK_BASE.&lt;br /&gt;
 source $DOCK_BASE/etc/login&lt;br /&gt;
&lt;br /&gt;
When you log in to sgehead now, you should see the &amp;quot;Enabling pgf compiler&amp;quot; message.&lt;br /&gt;
&lt;br /&gt;
Commands:&lt;br /&gt;
 ssh sgehead&lt;br /&gt;
 cd /where/to/put/libfgz/trunk&lt;br /&gt;
 make&lt;br /&gt;
&lt;br /&gt;
Since we still have some 32-bit computers, you&#039;ll also want to do&lt;br /&gt;
 make SIZE=32&lt;br /&gt;
before leaving the libfgz branch and going to DOCK:&lt;br /&gt;
&lt;br /&gt;
 cd ../../dock/trunk/i386&lt;br /&gt;
 make&lt;br /&gt;
&lt;br /&gt;
This makes the 64-bit version. Some options:&lt;br /&gt;
&lt;br /&gt;
 make SIZE=32&lt;br /&gt;
&lt;br /&gt;
Makes the 32-bit version, which is useful for running on the cluster since some machines are older.&lt;br /&gt;
&lt;br /&gt;
 make DEBUG=1 &lt;br /&gt;
&lt;br /&gt;
Makes a debug version that will report line numbers of errors and is usable with pgdbg (the Portland Group Debugger), which is useful when writing code but is 10x (or more) slower.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Compiling the program on the shared QB3 cluster&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
On one of the compilation nodes on the shared QB3 cluster (optint1 or optint2):&lt;br /&gt;
&lt;br /&gt;
 ssh optint2&lt;br /&gt;
 cd /where/to/put/libfgz/trunk&lt;br /&gt;
 cp Makefile Makefile.old&lt;br /&gt;
 modify Makefile:&lt;br /&gt;
  uncomment the following:&lt;br /&gt;
  FC = ifort -O3&lt;br /&gt;
  CC = icc -O3&lt;br /&gt;
 make&lt;br /&gt;
 cd ../../dock/trunk/i386&lt;br /&gt;
 cp Makefile Makefile.old&lt;br /&gt;
 modify Makefile&lt;br /&gt;
  uncomment the following:&lt;br /&gt;
  F77 = ifort&lt;br /&gt;
  FFLAGS = -O3 -convert big_endian&lt;br /&gt;
 make dock&lt;br /&gt;
&lt;br /&gt;
[[Category:Tutorials]]&lt;br /&gt;
[[Category:DOCK]]&lt;/div&gt;</summary>
		<author><name>Ianscottknight</name></author>
	</entry>
	<entry>
		<id>http://wiki.docking.org/index.php?title=How_to_compile_DOCK&amp;diff=14881</id>
		<title>How to compile DOCK</title>
		<link rel="alternate" type="text/html" href="http://wiki.docking.org/index.php?title=How_to_compile_DOCK&amp;diff=14881"/>
		<updated>2022-10-07T16:53:59Z</updated>

		<summary type="html">&lt;p&gt;Ianscottknight: /* pgf instructions */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== compiling dock3.8 ==&lt;br /&gt;
&lt;br /&gt;
Note: not every machine will have the prerequisite development libraries installed to compile the DOCK Fortran code. It is best to log into our &amp;quot;psi&amp;quot; machine, which is known to have them. Note that git will not work from psi, so the code you would like to compile must be accessible via NFS.&lt;br /&gt;
&lt;br /&gt;
  ssh $USER@psi&lt;br /&gt;
&lt;br /&gt;
Wynton development nodes are also a good place to compile with gfortran. For PGF you will always need to log into our psi machine.&lt;br /&gt;
&lt;br /&gt;
=== gfortran instructions ===&lt;br /&gt;
&lt;br /&gt;
Note: in search.f and search_multi_cluster.f you need to (1) uncomment the lines that are commented with &amp;quot;for gfortran&amp;quot; and (2) comment out the lines that are commented with &amp;quot;for portland group&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
gfortran is a widely available compiler that is preinstalled on all lab machines. In order to compile with gfortran, start from the base directory of DOCK and follow these instructions:&lt;br /&gt;
&lt;br /&gt;
  cd dock3/src/libfgz&lt;br /&gt;
  COMPILER=gfortran make&lt;br /&gt;
  cd ../i386&lt;br /&gt;
  COMPILER=gfortran make&lt;br /&gt;
&lt;br /&gt;
You should now have a number of .o object files as well as a dock64 executable in the i386 directory. If you want to compile a debug build, add DEBUG=1 before make; if you want to compile for a different architecture, prefix make with SIZE=[32|64].&lt;br /&gt;
&lt;br /&gt;
If you mistakenly compile something (with the wrong compiler or the like), you can clean up the files produced by compilation with &amp;quot;make clean&amp;quot;. Don&#039;t worry about the error messages you may get from this; something is a little fishy with our makefile.&lt;br /&gt;
&lt;br /&gt;
It may be necessary to run make twice due to an odd error with the version.f file.&lt;br /&gt;
&lt;br /&gt;
=== pgf instructions ===&lt;br /&gt;
&lt;br /&gt;
Note: in search.f and search_multi_cluster.f you need to (1) comment out the lines that are commented with &amp;quot;for gfortran&amp;quot; and (2) uncomment the lines that are commented with &amp;quot;for portland group&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
PGF, the Portland Group Fortran compiler, is the premium option for Fortran compilation. To compile with pgf, you must log onto the psi machine, where the license and installation are located.&lt;br /&gt;
&lt;br /&gt;
  ssh $USER@psi&lt;br /&gt;
&lt;br /&gt;
Set up the compilation environment through the following script:&lt;br /&gt;
&lt;br /&gt;
  bash # if you are not already on bash&lt;br /&gt;
  export PATH=/nfs/soft/pgi/current/linux86-64/12.10/bin:$PATH&lt;br /&gt;
  source /nfs/home/tbalius/zzz.virtualenvs/virtualenv-1.9.1/myVEonGimel/bin/activate&lt;br /&gt;
  source /nfs/soft/pgi/env.sh&lt;br /&gt;
&lt;br /&gt;
The instructions to compile are identical to those for gfortran; the only difference is that you need to set COMPILER=pgf, e.g.:&lt;br /&gt;
&lt;br /&gt;
  cd dock3/src/libfgz&lt;br /&gt;
  COMPILER=pgf make&lt;br /&gt;
  cd ../i386&lt;br /&gt;
  COMPILER=pgf make&lt;br /&gt;
&lt;br /&gt;
You can add the DEBUG, SIZE, etc. options just as with gfortran.&lt;br /&gt;
&lt;br /&gt;
Sometimes there is an odd error with the licensing server that causes compilation to fail. I don&#039;t remember exactly how I fixed this in the past, but I believe it had to do with restarting the license service via systemctl.&lt;br /&gt;
&lt;br /&gt;
It may be necessary to run make twice due to an odd error with the version.f file.&lt;br /&gt;
&lt;br /&gt;
== compiling dock3.7 ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
  1. log into psi (portland fortran compiler is on psi)&lt;br /&gt;
  2. export PATH=/nfs/soft/pgi/current/linux86-64/12.10/bin:$PATH	&lt;br /&gt;
  3. source Trent&#039;s virtual environment (source /nfs/home/tbalius/zzz.virtualenvs/virtualenv-1.9.1/myVEonGimel/bin/activate)&lt;br /&gt;
  4. source /nfs/soft/pgi/env.csh&lt;br /&gt;
&lt;br /&gt;
== DOCK 3.7 ==&lt;br /&gt;
&lt;br /&gt;
First, you need the source code:&lt;br /&gt;
* lab members and other developers may check out dock from [[Github]]&lt;br /&gt;
* others can request the distribution here [http://dock.compbio.ucsf.edu/Online_Licensing/dock_license_application.html]&lt;br /&gt;
=== compiling dock === &lt;br /&gt;
&lt;br /&gt;
To compile DOCK, go to path/DOCK/src/i386:&lt;br /&gt;
 make&lt;br /&gt;
 make DEBUG=1&lt;br /&gt;
 make SIZE=32 [64]&lt;br /&gt;
 make clean&lt;br /&gt;
&lt;br /&gt;
Note:&lt;br /&gt;
*To compile with portland group compilers&lt;br /&gt;
** go to a machine where the pgf license is installed (sgehead1 or psi).&lt;br /&gt;
&lt;br /&gt;
*To add a file for compilation, add the xxxx.f file to path/DOCK/src/ and add xxxx.o to i386/Makefile object list.&lt;br /&gt;
=== debugging dock ===&lt;br /&gt;
Debugging :&lt;br /&gt;
&lt;br /&gt;
  pgdbg  path/DOCK/src/i386/dock_prof64&lt;br /&gt;
&lt;br /&gt;
=== profiling dock ===&lt;br /&gt;
Profiling:&lt;br /&gt;
 make pgprofile&lt;br /&gt;
 less pgprof.out&lt;br /&gt;
 pgprof - opens a GUI profiler that interprets pgprof.out !&lt;br /&gt;
&lt;br /&gt;
You may also do the following:&lt;br /&gt;
 make dock_prof64&lt;br /&gt;
 cd /dir/test &lt;br /&gt;
 /dockdir/dock_prof64 INDOCK&lt;br /&gt;
 less pgprof.out&lt;br /&gt;
&lt;br /&gt;
 make gmon.out&lt;br /&gt;
 less gmon.out&lt;br /&gt;
&lt;br /&gt;
 make gprofile&lt;br /&gt;
 less gprofile.txt&lt;br /&gt;
&lt;br /&gt;
== DOCK 3.5.54 ==&lt;br /&gt;
This is for the Shoichet Lab local version of DOCK 3.5.54 trunk. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Checking out the source files&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Commands:&lt;br /&gt;
 csh&lt;br /&gt;
 mkdir /where/to/put&lt;br /&gt;
 cd /where/to/put&lt;br /&gt;
 svn checkout file:///raid4/svn/dock&lt;br /&gt;
 svn checkout file:///raid4/svn/libfgz&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Compiling the program on our cluster&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
First, you need to set up the PGF compiler by adding these lines to the end of your .login file:&lt;br /&gt;
&lt;br /&gt;
 setenv DOCK_BASE ~xyz/dockenv&lt;br /&gt;
 echo DOCK_BASE set to $DOCK_BASE.&lt;br /&gt;
 source $DOCK_BASE/etc/login&lt;br /&gt;
&lt;br /&gt;
When you log in to sgehead now, you should see the &amp;quot;Enabling pgf compiler&amp;quot; message.&lt;br /&gt;
&lt;br /&gt;
Commands:&lt;br /&gt;
 ssh sgehead&lt;br /&gt;
 cd /where/to/put/libfgz/trunk&lt;br /&gt;
 make&lt;br /&gt;
&lt;br /&gt;
Since we still have some 32-bit computers, you&#039;ll also want to do&lt;br /&gt;
 make SIZE=32&lt;br /&gt;
before leaving the libfgz branch and going to DOCK:&lt;br /&gt;
&lt;br /&gt;
 cd ../../dock/trunk/i386&lt;br /&gt;
 make&lt;br /&gt;
&lt;br /&gt;
This makes the 64-bit version. Some options:&lt;br /&gt;
&lt;br /&gt;
 make SIZE=32&lt;br /&gt;
&lt;br /&gt;
Makes the 32-bit version, which is useful for running on the cluster since some machines are older.&lt;br /&gt;
&lt;br /&gt;
 make DEBUG=1 &lt;br /&gt;
&lt;br /&gt;
Makes a debug version that will report line numbers of errors and is usable with pgdbg (the Portland Group Debugger), which is useful when writing code but is 10x (or more) slower.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Compiling the program on the shared QB3 cluster&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
On one of the compilation nodes on the shared QB3 cluster (optint1 or optint2):&lt;br /&gt;
&lt;br /&gt;
 ssh optint2&lt;br /&gt;
 cd /where/to/put/libfgz/trunk&lt;br /&gt;
 cp Makefile Makefile.old&lt;br /&gt;
 modify Makefile:&lt;br /&gt;
  uncomment the following:&lt;br /&gt;
  FC = ifort -O3&lt;br /&gt;
  CC = icc -O3&lt;br /&gt;
 make&lt;br /&gt;
 cd ../../dock/trunk/i386&lt;br /&gt;
 cp Makefile Makefile.old&lt;br /&gt;
 modify Makefile&lt;br /&gt;
  uncomment the following:&lt;br /&gt;
  F77 = ifort&lt;br /&gt;
  FFLAGS = -O3 -convert big_endian&lt;br /&gt;
 make dock&lt;br /&gt;
&lt;br /&gt;
[[Category:Tutorials]]&lt;br /&gt;
[[Category:DOCK]]&lt;/div&gt;</summary>
		<author><name>Ianscottknight</name></author>
	</entry>
	<entry>
		<id>http://wiki.docking.org/index.php?title=How_to_compile_DOCK&amp;diff=14880</id>
		<title>How to compile DOCK</title>
		<link rel="alternate" type="text/html" href="http://wiki.docking.org/index.php?title=How_to_compile_DOCK&amp;diff=14880"/>
		<updated>2022-10-07T16:53:37Z</updated>

		<summary type="html">&lt;p&gt;Ianscottknight: /* gfortran instructions */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== compiling dock3.8 ==&lt;br /&gt;
&lt;br /&gt;
Note: not every machine will have the prerequisite development libraries installed to compile the DOCK Fortran code. It is best to log into our &amp;quot;psi&amp;quot; machine, which is known to have them. Note that git will not work from psi, so the code you would like to compile must be accessible via NFS.&lt;br /&gt;
&lt;br /&gt;
  ssh $USER@psi&lt;br /&gt;
&lt;br /&gt;
Wynton development nodes are also a good place to compile with gfortran. For PGF you will always need to log into our psi machine.&lt;br /&gt;
&lt;br /&gt;
=== gfortran instructions ===&lt;br /&gt;
&lt;br /&gt;
Note: in search.f and search_multi_cluster.f you need to (1) uncomment the lines that are commented with &amp;quot;for gfortran&amp;quot; and (2) comment out the lines that are commented with &amp;quot;for portland group&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
gfortran is a widely available compiler that is preinstalled on all lab machines. In order to compile with gfortran, start from the base directory of DOCK and follow these instructions:&lt;br /&gt;
&lt;br /&gt;
  cd dock3/src/libfgz&lt;br /&gt;
  COMPILER=gfortran make&lt;br /&gt;
  cd ../i386&lt;br /&gt;
  COMPILER=gfortran make&lt;br /&gt;
&lt;br /&gt;
You should now have a number of .o object files as well as a dock64 executable in the i386 directory. If you want to compile a debug build, add DEBUG=1 before make; if you want to compile for a different architecture, prefix make with SIZE=[32|64].&lt;br /&gt;
&lt;br /&gt;
If you mistakenly compile something (with the wrong compiler or the like), you can clean up the files produced by compilation with &amp;quot;make clean&amp;quot;. Don&#039;t worry about the error messages you may get from this; something is a little fishy with our makefile.&lt;br /&gt;
&lt;br /&gt;
It may be necessary to run make twice due to an odd error with the version.f file.&lt;br /&gt;
&lt;br /&gt;
=== pgf instructions ===&lt;br /&gt;
&lt;br /&gt;
PGF, the Portland Group Fortran compiler, is the premium option for Fortran compilation. To compile with pgf, you must log onto the psi machine, where the license and installation are located.&lt;br /&gt;
&lt;br /&gt;
  ssh $USER@psi&lt;br /&gt;
&lt;br /&gt;
Set up the compilation environment through the following script:&lt;br /&gt;
&lt;br /&gt;
  bash # if you are not already on bash&lt;br /&gt;
  export PATH=/nfs/soft/pgi/current/linux86-64/12.10/bin:$PATH&lt;br /&gt;
  source /nfs/home/tbalius/zzz.virtualenvs/virtualenv-1.9.1/myVEonGimel/bin/activate&lt;br /&gt;
  source /nfs/soft/pgi/env.sh&lt;br /&gt;
&lt;br /&gt;
Note: in search.f and search_multi_cluster.f you need to (1) comment out the lines that are commented with &amp;quot;for gfortran&amp;quot; and (2) uncomment the lines that are commented with &amp;quot;for portland group&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
The instructions to compile are identical to those for gfortran; the only difference is that you need to set COMPILER=pgf, e.g.:&lt;br /&gt;
&lt;br /&gt;
  cd dock3/src/libfgz&lt;br /&gt;
  COMPILER=pgf make&lt;br /&gt;
  cd ../i386&lt;br /&gt;
  COMPILER=pgf make&lt;br /&gt;
&lt;br /&gt;
You can add the DEBUG, SIZE, etc. options just as with gfortran.&lt;br /&gt;
&lt;br /&gt;
Sometimes there is an odd error with the licensing server that causes compilation to fail. I don&#039;t remember exactly how I fixed this in the past, but I believe it had to do with restarting the license service via systemctl.&lt;br /&gt;
&lt;br /&gt;
It may be necessary to run make twice due to an odd error with the version.f file.&lt;br /&gt;
&lt;br /&gt;
== compiling dock3.7 ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
  1. log into psi (portland fortran compiler is on psi)&lt;br /&gt;
  2. export PATH=/nfs/soft/pgi/current/linux86-64/12.10/bin:$PATH	&lt;br /&gt;
  3. source Trent&#039;s virtual environment (source /nfs/home/tbalius/zzz.virtualenvs/virtualenv-1.9.1/myVEonGimel/bin/activate)&lt;br /&gt;
  4. source /nfs/soft/pgi/env.csh&lt;br /&gt;
&lt;br /&gt;
== DOCK 3.7 ==&lt;br /&gt;
&lt;br /&gt;
First, you need the source code:&lt;br /&gt;
* lab members and other developers may check out dock from [[Github]]&lt;br /&gt;
* others can request the distribution here [http://dock.compbio.ucsf.edu/Online_Licensing/dock_license_application.html]&lt;br /&gt;
=== compiling dock === &lt;br /&gt;
&lt;br /&gt;
To compile DOCK, go to path/DOCK/src/i386:&lt;br /&gt;
 make&lt;br /&gt;
 make DEBUG=1&lt;br /&gt;
 make SIZE=32 [64]&lt;br /&gt;
 make clean&lt;br /&gt;
&lt;br /&gt;
Note:&lt;br /&gt;
*To compile with portland group compilers&lt;br /&gt;
** go to a machine where the pgf license is installed (sgehead1 or psi).&lt;br /&gt;
&lt;br /&gt;
*To add a file for compilation, add the xxxx.f file to path/DOCK/src/ and add xxxx.o to i386/Makefile object list.&lt;br /&gt;
=== debugging dock ===&lt;br /&gt;
Debugging :&lt;br /&gt;
&lt;br /&gt;
  pgdbg  path/DOCK/src/i386/dock_prof64&lt;br /&gt;
&lt;br /&gt;
=== profiling dock ===&lt;br /&gt;
Profiling:&lt;br /&gt;
 make pgprofile&lt;br /&gt;
 less pgprof.out&lt;br /&gt;
 pgprof - opens a GUI profiler that interprets pgprof.out !&lt;br /&gt;
&lt;br /&gt;
You may also do the following:&lt;br /&gt;
 make dock_prof64&lt;br /&gt;
 cd /dir/test &lt;br /&gt;
 /dockdir/dock_prof64 INDOCK&lt;br /&gt;
 less pgprof.out&lt;br /&gt;
&lt;br /&gt;
 make gmon.out&lt;br /&gt;
 less gmon.out&lt;br /&gt;
&lt;br /&gt;
 make gprofile&lt;br /&gt;
 less gprofile.txt&lt;br /&gt;
&lt;br /&gt;
== DOCK 3.5.54 ==&lt;br /&gt;
This is for the Shoichet Lab local version of DOCK 3.5.54 trunk. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Checking out the source files&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Commands:&lt;br /&gt;
 csh&lt;br /&gt;
 mkdir /where/to/put&lt;br /&gt;
 cd /where/to/put&lt;br /&gt;
 svn checkout file:///raid4/svn/dock&lt;br /&gt;
 svn checkout file:///raid4/svn/libfgz&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Compiling the program on our cluster&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
First, you need to set up the PGF compiler by adding these lines to the end of your .login file:&lt;br /&gt;
&lt;br /&gt;
 setenv DOCK_BASE ~xyz/dockenv&lt;br /&gt;
 echo DOCK_BASE set to $DOCK_BASE.&lt;br /&gt;
 source $DOCK_BASE/etc/login&lt;br /&gt;
&lt;br /&gt;
When you log in to sgehead now, you should see the &amp;quot;Enabling pgf compiler&amp;quot; message.&lt;br /&gt;
&lt;br /&gt;
Commands:&lt;br /&gt;
 ssh sgehead&lt;br /&gt;
 cd /where/to/put/libfgz/trunk&lt;br /&gt;
 make&lt;br /&gt;
&lt;br /&gt;
Since we still have some 32-bit computers, you&#039;ll also want to do&lt;br /&gt;
 make SIZE=32&lt;br /&gt;
before leaving the libfgz branch and going to DOCK:&lt;br /&gt;
&lt;br /&gt;
 cd ../../dock/trunk/i386&lt;br /&gt;
 make&lt;br /&gt;
&lt;br /&gt;
This makes the 64-bit version. Some options:&lt;br /&gt;
&lt;br /&gt;
 make SIZE=32&lt;br /&gt;
&lt;br /&gt;
Makes the 32-bit version, which is useful for running on the cluster since some machines are older.&lt;br /&gt;
&lt;br /&gt;
 make DEBUG=1 &lt;br /&gt;
&lt;br /&gt;
Makes a debug version that will report line numbers of errors and is usable with pgdbg (the Portland Group Debugger), which is useful when writing code but is 10x (or more) slower.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Compiling the program on the shared QB3 cluster&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
On one of the compilation nodes on the shared QB3 cluster (optint1 or optint2):&lt;br /&gt;
&lt;br /&gt;
 ssh optint2&lt;br /&gt;
 cd /where/to/put/libfgz/trunk&lt;br /&gt;
 cp Makefile Makefile.old&lt;br /&gt;
 modify Makefile:&lt;br /&gt;
  uncomment the following:&lt;br /&gt;
  FC = ifort -O3&lt;br /&gt;
  CC = icc -O3&lt;br /&gt;
 make&lt;br /&gt;
 cd ../../dock/trunk/i386&lt;br /&gt;
 cp Makefile Makefile.old&lt;br /&gt;
 modify Makefile&lt;br /&gt;
  uncomment the following:&lt;br /&gt;
  F77 = ifort&lt;br /&gt;
  FFLAGS = -O3 -convert big_endian&lt;br /&gt;
 make dock&lt;br /&gt;
&lt;br /&gt;
[[Category:Tutorials]]&lt;br /&gt;
[[Category:DOCK]]&lt;/div&gt;</summary>
		<author><name>Ianscottknight</name></author>
	</entry>
	<entry>
		<id>http://wiki.docking.org/index.php?title=How_to_compile_DOCK&amp;diff=14879</id>
		<title>How to compile DOCK</title>
		<link rel="alternate" type="text/html" href="http://wiki.docking.org/index.php?title=How_to_compile_DOCK&amp;diff=14879"/>
		<updated>2022-10-07T16:53:07Z</updated>

		<summary type="html">&lt;p&gt;Ianscottknight: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== compiling dock3.8 ==&lt;br /&gt;
&lt;br /&gt;
Note: not every machine will have the prerequisite development libraries installed to compile the DOCK Fortran code. It is best to log into our &amp;quot;psi&amp;quot; machine, which is known to have them. Note that git will not work from psi, so the code you would like to compile must be accessible via NFS.&lt;br /&gt;
&lt;br /&gt;
  ssh $USER@psi&lt;br /&gt;
&lt;br /&gt;
Wynton development nodes are also a good place to compile with gfortran. For PGF you will always need to log into our psi machine.&lt;br /&gt;
&lt;br /&gt;
=== gfortran instructions ===&lt;br /&gt;
&lt;br /&gt;
gfortran is a widely available compiler that is preinstalled on all lab machines. In order to compile with gfortran, start from the base directory of DOCK and follow these instructions:&lt;br /&gt;
&lt;br /&gt;
  cd dock3/src/libfgz&lt;br /&gt;
  COMPILER=gfortran make&lt;br /&gt;
  cd ../i386&lt;br /&gt;
  COMPILER=gfortran make&lt;br /&gt;
&lt;br /&gt;
You should now have a number of .o object files as well as a dock64 executable in the i386 directory. If you want to compile a debug build, add DEBUG=1 before make; if you want to compile for a different architecture, prefix make with SIZE=[32|64].&lt;br /&gt;
&lt;br /&gt;
If you mistakenly compile something (with the wrong compiler or the like), you can clean up the files produced by compilation with &amp;quot;make clean&amp;quot;. Don&#039;t worry about the error messages you may get from this; something is a little fishy with our makefile.&lt;br /&gt;
&lt;br /&gt;
It may be necessary to run make twice due to an odd error with the version.f file.&lt;br /&gt;
&lt;br /&gt;
=== pgf instructions ===&lt;br /&gt;
&lt;br /&gt;
PGF, the Portland Group compiler, is the premium option for Fortran compilation. To compile with pgf you must log onto the psi machine, where the license and installation are located.&lt;br /&gt;
&lt;br /&gt;
  ssh $USER@psi&lt;br /&gt;
&lt;br /&gt;
Set up the compilation environment through the following script:&lt;br /&gt;
&lt;br /&gt;
  bash # if you are not already on bash&lt;br /&gt;
  export PATH=/nfs/soft/pgi/current/linux86-64/12.10/bin:$PATH&lt;br /&gt;
  source /nfs/home/tbalius/zzz.virtualenvs/virtualenv-1.9.1/myVEonGimel/bin/activate&lt;br /&gt;
  source /nfs/soft/pgi/env.sh&lt;br /&gt;
&lt;br /&gt;
Note: in search.f and search_multi_cluster.f you need to (1) comment out the lines that are commented with &amp;quot;for gfortran&amp;quot; and (2) uncomment the lines that are commented with &amp;quot;for portland group&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
The instructions to compile are identical to those for gfortran; the only difference is that you need to set COMPILER=pgf, e.g.:&lt;br /&gt;
&lt;br /&gt;
  cd dock3/src/libfgz&lt;br /&gt;
  COMPILER=pgf make&lt;br /&gt;
  cd ../i386&lt;br /&gt;
  COMPILER=pgf make&lt;br /&gt;
&lt;br /&gt;
You can add the DEBUG, SIZE, etc. options just as with gfortran.&lt;br /&gt;
&lt;br /&gt;
Sometimes there is an odd error with the licensing server that causes compilation to fail. I don&#039;t remember exactly how I fixed this in the past, but I believe it had to do with restarting the license service via systemctl.&lt;br /&gt;
&lt;br /&gt;
It may be necessary to run make twice due to an odd error with the version.f file.&lt;br /&gt;
&lt;br /&gt;
== compiling dock3.7 ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
  1. log into psi (portland fortran compiler is on psi)&lt;br /&gt;
  2. export PATH=/nfs/soft/pgi/current/linux86-64/12.10/bin:$PATH	&lt;br /&gt;
  3. source Trent&#039;s virtual environment (source /nfs/home/tbalius/zzz.virtualenvs/virtualenv-1.9.1/myVEonGimel/bin/activate)&lt;br /&gt;
  4. source /nfs/soft/pgi/env.csh&lt;br /&gt;
&lt;br /&gt;
== DOCK 3.7 ==&lt;br /&gt;
&lt;br /&gt;
First, you need the source code:&lt;br /&gt;
* lab members and other developers may check out dock from [[Github]]&lt;br /&gt;
* others can request the distribution here [http://dock.compbio.ucsf.edu/Online_Licensing/dock_license_application.html]&lt;br /&gt;
=== compiling dock === &lt;br /&gt;
&lt;br /&gt;
To compile DOCK, go to path/DOCK/src/i386:&lt;br /&gt;
 make&lt;br /&gt;
 make DEBUG=1&lt;br /&gt;
 make SIZE=32 [64]&lt;br /&gt;
 make clean&lt;br /&gt;
&lt;br /&gt;
Note:&lt;br /&gt;
*To compile with Portland Group compilers&lt;br /&gt;
** go to a machine where the pgf license is installed (sgehead1 or psi). &lt;br /&gt;
&lt;br /&gt;
*To add a file for compilation, add the xxxx.f file to path/DOCK/src/ and add xxxx.o to i386/Makefile object list.&lt;br /&gt;
=== debugging dock ===&lt;br /&gt;
Debugging :&lt;br /&gt;
&lt;br /&gt;
  pgdbg  path/DOCK/src/i386/dock_prof64&lt;br /&gt;
&lt;br /&gt;
=== profiling dock ===&lt;br /&gt;
Profiling:&lt;br /&gt;
 make pgprofile&lt;br /&gt;
 less pgprof.out&lt;br /&gt;
 pgprof - opens a GUI profiler that interprets pgprof.out !&lt;br /&gt;
&lt;br /&gt;
you may also do the following:&lt;br /&gt;
 make dock_prof64&lt;br /&gt;
 cd /dir/test &lt;br /&gt;
 /dockdir/dock_prof64 INDOCK&lt;br /&gt;
 less pgprof.out&lt;br /&gt;
&lt;br /&gt;
 make gmon.out&lt;br /&gt;
 less gmon.out&lt;br /&gt;
&lt;br /&gt;
 make gprofile&lt;br /&gt;
 less gprofile.txt&lt;br /&gt;
&lt;br /&gt;
== DOCK 3.5.54 ==&lt;br /&gt;
This is for the Shoichet Lab local version of DOCK 3.5.54 trunk. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Checking out the source files&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Commands:&lt;br /&gt;
 csh&lt;br /&gt;
 mkdir /where/to/put&lt;br /&gt;
 cd /where/to/put&lt;br /&gt;
 svn checkout file:///raid4/svn/dock&lt;br /&gt;
 svn checkout file:///raid4/svn/libfgz&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Compiling the program on our cluster&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
First, you need to set the path to the PGF compiler by adding these lines to the end of your .login file:&lt;br /&gt;
&lt;br /&gt;
 setenv DOCK_BASE ~xyz/dockenv&lt;br /&gt;
 echo DOCK_BASE set to $DOCK_BASE.&lt;br /&gt;
 source $DOCK_BASE/etc/login&lt;br /&gt;
&lt;br /&gt;
When you log in to sgehead now, you should see the &amp;quot;Enabling pgf compiler&amp;quot; message.&lt;br /&gt;
&lt;br /&gt;
Commands:&lt;br /&gt;
 ssh sgehead&lt;br /&gt;
 cd /where/to/put/libfgz/trunk&lt;br /&gt;
 make&lt;br /&gt;
&lt;br /&gt;
Since we still have some 32-bit computers, you&#039;ll also want to do&lt;br /&gt;
 make SIZE=32&lt;br /&gt;
before leaving the libfgz branch and going to DOCK:&lt;br /&gt;
&lt;br /&gt;
 cd ../../dock/trunk/i386&lt;br /&gt;
 make&lt;br /&gt;
&lt;br /&gt;
This makes the 64-bit version. Some options:&lt;br /&gt;
&lt;br /&gt;
 make SIZE=32&lt;br /&gt;
&lt;br /&gt;
Makes the 32-bit version, useful for running on the cluster since some machines are older.&lt;br /&gt;
&lt;br /&gt;
 make DEBUG=1 &lt;br /&gt;
&lt;br /&gt;
Makes a debug version that will report line numbers of errors and is usable with pgdbg (the Portland Group Debugger), which is useful when writing code but is 10x (or more) slower.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Compiling the program on the shared QB3 cluster&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
On one of the compilation nodes on the shared QB3 cluster (optint1 or optint2):&lt;br /&gt;
&lt;br /&gt;
 ssh optint2&lt;br /&gt;
 cd /where/to/put/libfgz/trunk&lt;br /&gt;
 cp Makefile Makefile.old&lt;br /&gt;
 modify Makefile:&lt;br /&gt;
  uncomment the following:&lt;br /&gt;
  FC = ifort -O3&lt;br /&gt;
  CC = icc -O3&lt;br /&gt;
 make&lt;br /&gt;
 cd ../../dock/trunk/i386&lt;br /&gt;
 cp Makefile Makefile.old&lt;br /&gt;
 modify Makefile&lt;br /&gt;
  uncomment the following:&lt;br /&gt;
  F77 = ifort&lt;br /&gt;
  FFLAGS = -O3 -convert big_endian&lt;br /&gt;
 make dock&lt;br /&gt;
&lt;br /&gt;
[[Category:Tutorials]]&lt;br /&gt;
[[Category:DOCK]]&lt;/div&gt;</summary>
		<author><name>Ianscottknight</name></author>
	</entry>
	<entry>
		<id>http://wiki.docking.org/index.php?title=How_to_compile_DOCK&amp;diff=14873</id>
		<title>How to compile DOCK</title>
		<link rel="alternate" type="text/html" href="http://wiki.docking.org/index.php?title=How_to_compile_DOCK&amp;diff=14873"/>
		<updated>2022-10-06T00:11:27Z</updated>

		<summary type="html">&lt;p&gt;Ianscottknight: /* pgf instructions */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== compiling dock3.8 ==&lt;br /&gt;
&lt;br /&gt;
Note: not every machine will have the prerequisite development libraries installed to compile DOCK Fortran code. It is best to log into our &amp;quot;psi&amp;quot; machine, which is known to have them. Note that git will not work from psi, so the code you would like to compile must be accessible via NFS.&lt;br /&gt;
&lt;br /&gt;
  ssh $USER@psi&lt;br /&gt;
&lt;br /&gt;
Wynton development nodes are also a good place to compile with gfortran. For PGF you will always need to log into our psi machine.&lt;br /&gt;
&lt;br /&gt;
=== gfortran instructions ===&lt;br /&gt;
&lt;br /&gt;
gfortran is a widely available compiler that is preinstalled on all lab machines. In order to compile with gfortran, start from the base directory of DOCK and follow these instructions:&lt;br /&gt;
&lt;br /&gt;
  cd dock3/src/libfgz&lt;br /&gt;
  COMPILER=gfortran make&lt;br /&gt;
  cd ../i386&lt;br /&gt;
  COMPILER=gfortran make&lt;br /&gt;
&lt;br /&gt;
You should now have a set of .o object files as well as a dock64 executable in the i386 directory. If you want to compile a debug build, add DEBUG=1 before make; if you want to compile for a different architecture, prefix make with SIZE=[32|64].&lt;br /&gt;
&lt;br /&gt;
If you mistakenly compile something (with the wrong compiler or the like), you can clean up the files produced by compilation with &amp;quot;make clean&amp;quot;. Don&#039;t worry about the error messages you may get from this; something is a little fishy with our makefile.&lt;br /&gt;
&lt;br /&gt;
It may be necessary to run make twice due to an odd error with the version.f file.&lt;br /&gt;
&lt;br /&gt;
=== pgf instructions ===&lt;br /&gt;
&lt;br /&gt;
PGF, the Portland Group compiler, is the premium option for Fortran compilation. To compile with pgf you must log onto the psi machine, where the license and installation are located.&lt;br /&gt;
&lt;br /&gt;
  ssh $USER@psi&lt;br /&gt;
&lt;br /&gt;
Set up the compilation environment through the following script:&lt;br /&gt;
&lt;br /&gt;
  bash # if you are not already on bash&lt;br /&gt;
  export PATH=/nfs/soft/pgi/current/linux86-64/12.10/bin:$PATH&lt;br /&gt;
  source /nfs/home/tbalius/zzz.virtualenvs/virtualenv-1.9.1/myVEonGimel/bin/activate&lt;br /&gt;
  source /nfs/soft/pgi/env.sh&lt;br /&gt;
&lt;br /&gt;
The instructions to compile are identical to those for gfortran; the only difference is that you need to set COMPILER=pgf, e.g.:&lt;br /&gt;
&lt;br /&gt;
  cd dock3/src/libfgz&lt;br /&gt;
  COMPILER=pgf make&lt;br /&gt;
  cd ../i386&lt;br /&gt;
  COMPILER=pgf make&lt;br /&gt;
&lt;br /&gt;
You can add the DEBUG, SIZE, etc. options just as with gfortran.&lt;br /&gt;
&lt;br /&gt;
Sometimes there is an odd error with the licensing server that causes compilation to fail. I don&#039;t remember exactly how I fixed this in the past, but I believe it had to do with restarting the license service via systemctl.&lt;br /&gt;
&lt;br /&gt;
It may be necessary to run make twice due to an odd error with the version.f file.&lt;br /&gt;
&lt;br /&gt;
== compiling dock3.7 ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
  1. log into psi (portland fortran compiler is on psi)&lt;br /&gt;
  2. export PATH=/nfs/soft/pgi/current/linux86-64/12.10/bin:$PATH	&lt;br /&gt;
  3. source Trent&#039;s virtual environment (source /nfs/home/tbalius/zzz.virtualenvs/virtualenv-1.9.1/myVEonGimel/bin/activate)&lt;br /&gt;
  4. source /nfs/soft/pgi/env.csh&lt;br /&gt;
&lt;br /&gt;
== DOCK 3.7 ==&lt;br /&gt;
&lt;br /&gt;
First, you need the source code:&lt;br /&gt;
* lab members and other developers may check out dock from [[Github]]&lt;br /&gt;
* others can request the distribution here [http://dock.compbio.ucsf.edu/Online_Licensing/dock_license_application.html]&lt;br /&gt;
=== compiling dock === &lt;br /&gt;
&lt;br /&gt;
To compile DOCK, go to path/DOCK/src/i386:&lt;br /&gt;
 make&lt;br /&gt;
 make DEBUG=1&lt;br /&gt;
 make SIZE=32 [64]&lt;br /&gt;
 make clean&lt;br /&gt;
&lt;br /&gt;
Note:&lt;br /&gt;
*To compile with Portland Group compilers&lt;br /&gt;
** go to a machine where the pgf license is installed (sgehead1 or psi). &lt;br /&gt;
&lt;br /&gt;
*To add a file for compilation, add the xxxx.f file to path/DOCK/src/ and add xxxx.o to i386/Makefile object list.&lt;br /&gt;
=== debugging dock ===&lt;br /&gt;
Debugging :&lt;br /&gt;
&lt;br /&gt;
  pgdbg  path/DOCK/src/i386/dock_prof64&lt;br /&gt;
&lt;br /&gt;
=== profiling dock ===&lt;br /&gt;
Profiling:&lt;br /&gt;
 make pgprofile&lt;br /&gt;
 less pgprof.out&lt;br /&gt;
 pgprof - opens a GUI profiler that interprets pgprof.out !&lt;br /&gt;
&lt;br /&gt;
you may also do the following:&lt;br /&gt;
 make dock_prof64&lt;br /&gt;
 cd /dir/test &lt;br /&gt;
 /dockdir/dock_prof64 INDOCK&lt;br /&gt;
 less pgprof.out&lt;br /&gt;
&lt;br /&gt;
 make gmon.out&lt;br /&gt;
 less gmon.out&lt;br /&gt;
&lt;br /&gt;
 make gprofile&lt;br /&gt;
 less gprofile.txt&lt;br /&gt;
&lt;br /&gt;
== DOCK 3.5.54 ==&lt;br /&gt;
This is for the Shoichet Lab local version of DOCK 3.5.54 trunk. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Checking out the source files&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Commands:&lt;br /&gt;
 csh&lt;br /&gt;
 mkdir /where/to/put&lt;br /&gt;
 cd /where/to/put&lt;br /&gt;
 svn checkout file:///raid4/svn/dock&lt;br /&gt;
 svn checkout file:///raid4/svn/libfgz&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Compiling the program on our cluster&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
First, you need to set the path to the PGF compiler by adding these lines to the end of your .login file:&lt;br /&gt;
&lt;br /&gt;
 setenv DOCK_BASE ~xyz/dockenv&lt;br /&gt;
 echo DOCK_BASE set to $DOCK_BASE.&lt;br /&gt;
 source $DOCK_BASE/etc/login&lt;br /&gt;
&lt;br /&gt;
When you log in to sgehead now, you should see the &amp;quot;Enabling pgf compiler&amp;quot; message.&lt;br /&gt;
&lt;br /&gt;
Commands:&lt;br /&gt;
 ssh sgehead&lt;br /&gt;
 cd /where/to/put/libfgz/trunk&lt;br /&gt;
 make&lt;br /&gt;
&lt;br /&gt;
Since we still have some 32-bit computers, you&#039;ll also want to do&lt;br /&gt;
 make SIZE=32&lt;br /&gt;
before leaving the libfgz branch and going to DOCK:&lt;br /&gt;
&lt;br /&gt;
 cd ../../dock/trunk/i386&lt;br /&gt;
 make&lt;br /&gt;
&lt;br /&gt;
This makes the 64-bit version. Some options:&lt;br /&gt;
&lt;br /&gt;
 make SIZE=32&lt;br /&gt;
&lt;br /&gt;
Makes the 32-bit version, useful for running on the cluster since some machines are older.&lt;br /&gt;
&lt;br /&gt;
 make DEBUG=1 &lt;br /&gt;
&lt;br /&gt;
Makes a debug version that will report line numbers of errors and is usable with pgdbg (the Portland Group Debugger), which is useful when writing code but is 10x (or more) slower.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Compiling the program on the shared QB3 cluster&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
On one of the compilation nodes on the shared QB3 cluster (optint1 or optint2):&lt;br /&gt;
&lt;br /&gt;
 ssh optint2&lt;br /&gt;
 cd /where/to/put/libfgz/trunk&lt;br /&gt;
 cp Makefile Makefile.old&lt;br /&gt;
 modify Makefile:&lt;br /&gt;
  uncomment the following:&lt;br /&gt;
  FC = ifort -O3&lt;br /&gt;
  CC = icc -O3&lt;br /&gt;
 make&lt;br /&gt;
 cd ../../dock/trunk/i386&lt;br /&gt;
 cp Makefile Makefile.old&lt;br /&gt;
 modify Makefile&lt;br /&gt;
  uncomment the following:&lt;br /&gt;
  F77 = ifort&lt;br /&gt;
  FFLAGS = -O3 -convert big_endian&lt;br /&gt;
 make dock&lt;br /&gt;
&lt;br /&gt;
[[Category:Tutorials]]&lt;br /&gt;
[[Category:DOCK]]&lt;/div&gt;</summary>
		<author><name>Ianscottknight</name></author>
	</entry>
	<entry>
		<id>http://wiki.docking.org/index.php?title=How_to_compile_DOCK&amp;diff=14872</id>
		<title>How to compile DOCK</title>
		<link rel="alternate" type="text/html" href="http://wiki.docking.org/index.php?title=How_to_compile_DOCK&amp;diff=14872"/>
		<updated>2022-10-06T00:11:03Z</updated>

		<summary type="html">&lt;p&gt;Ianscottknight: /* gfortran instructions */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== compiling dock3.8 ==&lt;br /&gt;
&lt;br /&gt;
Note: not every machine will have the prerequisite development libraries installed to compile DOCK Fortran code. It is best to log into our &amp;quot;psi&amp;quot; machine, which is known to have them. Note that git will not work from psi, so the code you would like to compile must be accessible via NFS.&lt;br /&gt;
&lt;br /&gt;
  ssh $USER@psi&lt;br /&gt;
&lt;br /&gt;
Wynton development nodes are also a good place to compile with gfortran. For PGF you will always need to log into our psi machine.&lt;br /&gt;
&lt;br /&gt;
=== gfortran instructions ===&lt;br /&gt;
&lt;br /&gt;
gfortran is a widely available compiler that is preinstalled on all lab machines. In order to compile with gfortran, start from the base directory of DOCK and follow these instructions:&lt;br /&gt;
&lt;br /&gt;
  cd dock3/src/libfgz&lt;br /&gt;
  COMPILER=gfortran make&lt;br /&gt;
  cd ../i386&lt;br /&gt;
  COMPILER=gfortran make&lt;br /&gt;
&lt;br /&gt;
You should now have a set of .o object files as well as a dock64 executable in the i386 directory. If you want to compile a debug build, add DEBUG=1 before make; if you want to compile for a different architecture, prefix make with SIZE=[32|64].&lt;br /&gt;
&lt;br /&gt;
If you mistakenly compile something (with the wrong compiler or the like), you can clean up the files produced by compilation with &amp;quot;make clean&amp;quot;. Don&#039;t worry about the error messages you may get from this; something is a little fishy with our makefile.&lt;br /&gt;
&lt;br /&gt;
It may be necessary to run make twice due to an odd error with the version.f file.&lt;br /&gt;
&lt;br /&gt;
=== pgf instructions ===&lt;br /&gt;
&lt;br /&gt;
PGF, the Portland Group compiler, is the premium option for Fortran compilation. To compile with pgf you must log onto the psi machine, where the license and installation are located.&lt;br /&gt;
&lt;br /&gt;
  ssh $USER@psi&lt;br /&gt;
&lt;br /&gt;
Set up the compilation environment through the following script:&lt;br /&gt;
&lt;br /&gt;
  bash # if you are not already on bash&lt;br /&gt;
  export PATH=/nfs/soft/pgi/current/linux86-64/12.10/bin:$PATH&lt;br /&gt;
  source /nfs/home/tbalius/zzz.virtualenvs/virtualenv-1.9.1/myVEonGimel/bin/activate&lt;br /&gt;
  source /nfs/soft/pgi/env.sh&lt;br /&gt;
&lt;br /&gt;
The instructions to compile are identical to those for gfortran; the only difference is that you need to set COMPILER=pgf, e.g.:&lt;br /&gt;
&lt;br /&gt;
  cd ucsfdock/docking/DOCK/src/libfgz&lt;br /&gt;
  COMPILER=pgf make&lt;br /&gt;
  cd ../i386&lt;br /&gt;
  COMPILER=pgf make&lt;br /&gt;
&lt;br /&gt;
You can add the DEBUG, SIZE, etc. options just as with gfortran.&lt;br /&gt;
&lt;br /&gt;
Sometimes there is an odd error with the licensing server that causes compilation to fail. I don&#039;t remember exactly how I fixed this in the past, but I believe it had to do with restarting the license service via systemctl.&lt;br /&gt;
&lt;br /&gt;
It may be necessary to run make twice due to an odd error with the version.f file.&lt;br /&gt;
&lt;br /&gt;
== compiling dock3.7 ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
  1. log into psi (portland fortran compiler is on psi)&lt;br /&gt;
  2. export PATH=/nfs/soft/pgi/current/linux86-64/12.10/bin:$PATH	&lt;br /&gt;
  3. source Trent&#039;s virtual environment (source /nfs/home/tbalius/zzz.virtualenvs/virtualenv-1.9.1/myVEonGimel/bin/activate)&lt;br /&gt;
  4. source /nfs/soft/pgi/env.csh&lt;br /&gt;
&lt;br /&gt;
== DOCK 3.7 ==&lt;br /&gt;
&lt;br /&gt;
First, you need the source code:&lt;br /&gt;
* lab members and other developers may check out dock from [[Github]]&lt;br /&gt;
* others can request the distribution here [http://dock.compbio.ucsf.edu/Online_Licensing/dock_license_application.html]&lt;br /&gt;
=== compiling dock === &lt;br /&gt;
&lt;br /&gt;
To compile DOCK, go to path/DOCK/src/i386:&lt;br /&gt;
 make&lt;br /&gt;
 make DEBUG=1&lt;br /&gt;
 make SIZE=32 [64]&lt;br /&gt;
 make clean&lt;br /&gt;
&lt;br /&gt;
Note:&lt;br /&gt;
*To compile with Portland Group compilers&lt;br /&gt;
** go to a machine where the pgf license is installed (sgehead1 or psi). &lt;br /&gt;
&lt;br /&gt;
*To add a file for compilation, add the xxxx.f file to path/DOCK/src/ and add xxxx.o to i386/Makefile object list.&lt;br /&gt;
=== debugging dock ===&lt;br /&gt;
Debugging :&lt;br /&gt;
&lt;br /&gt;
  pgdbg  path/DOCK/src/i386/dock_prof64&lt;br /&gt;
&lt;br /&gt;
=== profiling dock ===&lt;br /&gt;
Profiling:&lt;br /&gt;
 make pgprofile&lt;br /&gt;
 less pgprof.out&lt;br /&gt;
 pgprof - opens a GUI profiler that interprets pgprof.out !&lt;br /&gt;
&lt;br /&gt;
you may also do the following:&lt;br /&gt;
 make dock_prof64&lt;br /&gt;
 cd /dir/test &lt;br /&gt;
 /dockdir/dock_prof64 INDOCK&lt;br /&gt;
 less pgprof.out&lt;br /&gt;
&lt;br /&gt;
 make gmon.out&lt;br /&gt;
 less gmon.out&lt;br /&gt;
&lt;br /&gt;
 make gprofile&lt;br /&gt;
 less gprofile.txt&lt;br /&gt;
&lt;br /&gt;
== DOCK 3.5.54 ==&lt;br /&gt;
This is for the Shoichet Lab local version of DOCK 3.5.54 trunk. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Checking out the source files&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Commands:&lt;br /&gt;
 csh&lt;br /&gt;
 mkdir /where/to/put&lt;br /&gt;
 cd /where/to/put&lt;br /&gt;
 svn checkout file:///raid4/svn/dock&lt;br /&gt;
 svn checkout file:///raid4/svn/libfgz&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Compiling the program on our cluster&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
First, you need to set the path to the PGF compiler by adding these lines to the end of your .login file:&lt;br /&gt;
&lt;br /&gt;
 setenv DOCK_BASE ~xyz/dockenv&lt;br /&gt;
 echo DOCK_BASE set to $DOCK_BASE.&lt;br /&gt;
 source $DOCK_BASE/etc/login&lt;br /&gt;
&lt;br /&gt;
When you log in to sgehead now, you should see the &amp;quot;Enabling pgf compiler&amp;quot; message.&lt;br /&gt;
&lt;br /&gt;
Commands:&lt;br /&gt;
 ssh sgehead&lt;br /&gt;
 cd /where/to/put/libfgz/trunk&lt;br /&gt;
 make&lt;br /&gt;
&lt;br /&gt;
Since we still have some 32-bit computers, you&#039;ll also want to do&lt;br /&gt;
 make SIZE=32&lt;br /&gt;
before leaving the libfgz branch and going to DOCK:&lt;br /&gt;
&lt;br /&gt;
 cd ../../dock/trunk/i386&lt;br /&gt;
 make&lt;br /&gt;
&lt;br /&gt;
This makes the 64-bit version. Some options:&lt;br /&gt;
&lt;br /&gt;
 make SIZE=32&lt;br /&gt;
&lt;br /&gt;
Makes the 32-bit version, useful for running on the cluster since some machines are older.&lt;br /&gt;
&lt;br /&gt;
 make DEBUG=1 &lt;br /&gt;
&lt;br /&gt;
Makes a debug version that will report line numbers of errors and is usable with pgdbg (the Portland Group Debugger), which is useful when writing code but is 10x (or more) slower.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Compiling the program on the shared QB3 cluster&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
On one of the compilation nodes on the shared QB3 cluster (optint1 or optint2):&lt;br /&gt;
&lt;br /&gt;
 ssh optint2&lt;br /&gt;
 cd /where/to/put/libfgz/trunk&lt;br /&gt;
 cp Makefile Makefile.old&lt;br /&gt;
 modify Makefile:&lt;br /&gt;
  uncomment the following:&lt;br /&gt;
  FC = ifort -O3&lt;br /&gt;
  CC = icc -O3&lt;br /&gt;
 make&lt;br /&gt;
 cd ../../dock/trunk/i386&lt;br /&gt;
 cp Makefile Makefile.old&lt;br /&gt;
 modify Makefile&lt;br /&gt;
  uncomment the following:&lt;br /&gt;
  F77 = ifort&lt;br /&gt;
  FFLAGS = -O3 -convert big_endian&lt;br /&gt;
 make dock&lt;br /&gt;
&lt;br /&gt;
[[Category:Tutorials]]&lt;br /&gt;
[[Category:DOCK]]&lt;/div&gt;</summary>
		<author><name>Ianscottknight</name></author>
	</entry>
	<entry>
		<id>http://wiki.docking.org/index.php?title=Dockopt_(pydock3_script)&amp;diff=14814</id>
		<title>Dockopt (pydock3 script)</title>
		<link rel="alternate" type="text/html" href="http://wiki.docking.org/index.php?title=Dockopt_(pydock3_script)&amp;diff=14814"/>
		<updated>2022-09-15T19:51:30Z</updated>

		<summary type="html">&lt;p&gt;Ianscottknight: /* SGE */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;dockopt&#039;&#039; allows the generation of many different docking configurations which are then evaluated &amp;amp; analyzed in parallel using a specified job scheduler (e.g. Slurm).&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;init&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
First you need to create the file structure for your dockopt job. To do so, simply type&lt;br /&gt;
&lt;br /&gt;
 pydock3 dockopt - init&lt;br /&gt;
&lt;br /&gt;
By default, the job directory is named &#039;&#039;dockopt_job&#039;&#039;. To specify a different name, type&lt;br /&gt;
&lt;br /&gt;
 pydock3 dockopt - init &amp;lt;JOB_DIR_NAME&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The job directory contains two sub-directories: &lt;br /&gt;
# &#039;&#039;working&#039;&#039;: input files, intermediate blaster files, sub-directories for individual blastermaster subroutines&lt;br /&gt;
# &#039;&#039;retrodock_jobs&#039;&#039;: individual retrodock jobs for each docking configuration&lt;br /&gt;
&lt;br /&gt;
The key difference between the working directories of &#039;&#039;blastermaster&#039;&#039; and &#039;&#039;dockopt&#039;&#039; is that the working directory of &#039;&#039;dockopt&#039;&#039; may contain multiple variants of the blaster files (prefixed by a number, e.g. &amp;quot;1_box&amp;quot;). These variant files are used to create the different docking configurations specified by the multi-valued entries of &#039;&#039;dockopt_config.yaml&#039;&#039;. They are created efficiently, such that the same variant used in multiple docking configurations is not created more than once. &lt;br /&gt;
&lt;br /&gt;
If your current working directory contains any of the following files, then they will be automatically copied into the working directory within the created job directory. This feature is intended to simplify the process of configuring the dockopt job.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;rec.pdb&#039;&#039;&lt;br /&gt;
* &#039;&#039;xtal-lig.pdb&#039;&#039;&lt;br /&gt;
* &#039;&#039;rec.crg.pdb&#039;&#039;&lt;br /&gt;
* &#039;&#039;reduce_wwPDB_het_dict.txt&#039;&#039;&lt;br /&gt;
* &#039;&#039;filt.params&#039;&#039;&lt;br /&gt;
* &#039;&#039;radii&#039;&#039;&lt;br /&gt;
* &#039;&#039;amb.crg.oxt&#039;&#039;&lt;br /&gt;
* &#039;&#039;vdw.siz&#039;&#039;&lt;br /&gt;
* &#039;&#039;delphi.def&#039;&#039;&lt;br /&gt;
* &#039;&#039;vdw.parms.amb.mindock&#039;&#039;&lt;br /&gt;
* &#039;&#039;prot.table.ambcrg.ambH&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Only the following are required; default or generated versions of the others will be used if they are not detected.&lt;br /&gt;
* &#039;&#039;rec.pdb&#039;&#039;&lt;br /&gt;
* &#039;&#039;xtal-lig.pdb&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
If you would like to use files not present in your current working directory, copy them into your job&#039;s working directory, e.g.:&lt;br /&gt;
 cp &amp;lt;FILE_PATH&amp;gt; &amp;lt;JOB_DIR_NAME&amp;gt;/working/&lt;br /&gt;
&lt;br /&gt;
Finally, configure the &#039;&#039;dockopt_config.yaml&#039;&#039; file in the job directory to your specifications. The parameters in this file govern the behavior of dockopt.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039; The &#039;&#039;dockopt_config.yaml&#039;&#039; file differs from the &#039;&#039;blastermaster_config.yaml&#039;&#039; file in that every parameter of the former may accept either a single value or a &#039;&#039;list of comma-separated values&#039;&#039;, which indicates a pool of values to attempt for that parameter. Multiple such multi-valued parameters may be provided, and all unique resultant docking configurations will be attempted. &lt;br /&gt;
&lt;br /&gt;
Single-valued YAML line format:&lt;br /&gt;
&lt;br /&gt;
 distance_to_surface: 1.0&lt;br /&gt;
&lt;br /&gt;
Multi-valued YAML line format:&lt;br /&gt;
&lt;br /&gt;
 distance_to_surface: [1.0, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9]&lt;br /&gt;
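&lt;br /&gt;
Because every multi-valued parameter varies independently, the total number of docking configurations attempted is the product of the list lengths. A rough sketch of the combinatorics (parameter names and values below are illustrative, not taken from a real &#039;&#039;dockopt_config.yaml&#039;&#039;):&lt;br /&gt;
&lt;br /&gt;
```shell
# Sketch: each combination of multi-valued parameter values yields one
# docking configuration; dist and pen stand in for hypothetical parameters.
count=0
for dist in 1.0 1.1 1.2; do
  for pen in 0.0 0.5; do
    count=$((count + 1))
  done
done
echo total configurations: $count
# prints: total configurations: 6
```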
&lt;br /&gt;
== Environmental variables ==&lt;br /&gt;
&lt;br /&gt;
Designate where temporary job files should be placed. E.g.:&lt;br /&gt;
&lt;br /&gt;
 export TEMP_STORAGE_PATH=/scratch&lt;br /&gt;
&lt;br /&gt;
In order for &#039;&#039;dockopt&#039;&#039; to know which scheduler it should use, configure the following environmental variables according to the job scheduler you have.&lt;br /&gt;
&lt;br /&gt;
=== Slurm ===&lt;br /&gt;
&lt;br /&gt;
E.g., on the UCSF Shoichet Lab Gimel cluster (on any node other than &#039;gimel&#039; itself, such as &#039;gimel5&#039;):&lt;br /&gt;
&lt;br /&gt;
 export SBATCH_EXEC=/usr/bin/sbatch&lt;br /&gt;
 export SQUEUE_EXEC=/usr/bin/squeue&lt;br /&gt;
&lt;br /&gt;
=== SGE ===&lt;br /&gt;
&lt;br /&gt;
E.g., on the UCSF Wynton cluster:&lt;br /&gt;
&lt;br /&gt;
 export QSTAT_EXEC=/opt/sge/bin/lx-amd64/qstat&lt;br /&gt;
 export QSUB_EXEC=/opt/sge/bin/lx-amd64/qsub&lt;br /&gt;
&lt;br /&gt;
The following is necessary on the UCSF Wynton cluster:&lt;br /&gt;
&lt;br /&gt;
 export SGE_SETTINGS=/opt/sge/wynton/common/settings.sh&lt;br /&gt;
&lt;br /&gt;
On most clusters using SGE, this will probably be:&lt;br /&gt;
 &lt;br /&gt;
 export SGE_SETTINGS=/opt/sge/default/common/settings.sh&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;run&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
Once your job has been configured to your liking, navigate to the job directory and run &#039;&#039;dockopt&#039;&#039;:&lt;br /&gt;
 cd &amp;lt;JOB_DIR_NAME&amp;gt;&lt;br /&gt;
 pydock3 dockopt - run &amp;lt;JOB_SCHEDULER_NAME&amp;gt; [--retrodock_job_timeout_minutes=None] [--retrodock_job_max_reattempts=0]&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;JOB_SCHEDULER_NAME&amp;gt; is one of:&lt;br /&gt;
* &#039;&#039;sge&#039;&#039;&lt;br /&gt;
* &#039;&#039;slurm&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This will execute the many dockopt subroutines in sequence, except for the retrodock jobs run on each docking configuration, which are run in parallel via the scheduler. The state of the program will be printed to standard output as it runs.&lt;br /&gt;
&lt;br /&gt;
Once the dockopt job is complete, the following files will be generated in the job directory:&lt;br /&gt;
* &#039;&#039;dockopt_job_report.pdf&#039;&#039;: contains (1) roc.png of best retrodock job, (2) box plots of enrichment for every multi-valued config parameter, and (3) heatmaps of enrichment for every pair of multi-valued config parameters&lt;br /&gt;
* &#039;&#039;dockopt_job_results.csv&#039;&#039;: enrichment metrics for each docking configuration&lt;br /&gt;
&lt;br /&gt;
In addition, the best retrodock job will be copied to its own sub-directory &#039;&#039;best_retrodock_job/&#039;&#039;. &lt;br /&gt;
&lt;br /&gt;
Within best_retrodock_job, there are the following files and sub-directories:&lt;br /&gt;
* &#039;&#039;dockfiles/&#039;&#039;: parameters files and INDOCK for given docking configuration&lt;br /&gt;
* &#039;&#039;output/&#039;&#039;: contains: &lt;br /&gt;
** joblist&lt;br /&gt;
** sub-directories &#039;&#039;1/&#039;&#039; for actives and &#039;&#039;2/&#039;&#039; for decoys (the former containing OUTDOCK and test.mol2 files, the latter containing just OUTDOCK)&lt;br /&gt;
** log files for the retrodock jobs&lt;br /&gt;
* &#039;&#039;roc.png&#039;&#039;: the ROC enrichment curve (log-scaled x-axis) for given docking configuration&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039; by default, a mol2 file is exported only for actives (&#039;&#039;output/1/&#039;&#039;), not for decoys (&#039;&#039;output/2/&#039;&#039;), in order to prevent disk space issues.&lt;/div&gt;</summary>
		<author><name>Ianscottknight</name></author>
	</entry>
	<entry>
		<id>http://wiki.docking.org/index.php?title=Dockopt_(pydock3_script)&amp;diff=14813</id>
		<title>Dockopt (pydock3 script)</title>
		<link rel="alternate" type="text/html" href="http://wiki.docking.org/index.php?title=Dockopt_(pydock3_script)&amp;diff=14813"/>
		<updated>2022-09-15T19:50:21Z</updated>

		<summary type="html">&lt;p&gt;Ianscottknight: /* run */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;dockopt&#039;&#039; allows the generation of many different docking configurations which are then evaluated &amp;amp; analyzed in parallel using a specified job scheduler (e.g. Slurm).&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;init&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
First you need to create the file structure for your dockopt job. To do so, simply type&lt;br /&gt;
&lt;br /&gt;
 pydock3 dockopt - init&lt;br /&gt;
&lt;br /&gt;
By default, the job directory is named &#039;&#039;dockopt_job&#039;&#039;. To specify a different name, type&lt;br /&gt;
&lt;br /&gt;
 pydock3 dockopt - init &amp;lt;JOB_DIR_NAME&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The job directory contains two sub-directories: &lt;br /&gt;
# &#039;&#039;working&#039;&#039;: input files, intermediate blaster files, sub-directories for individual blastermaster subroutines&lt;br /&gt;
# &#039;&#039;retrodock_jobs&#039;&#039;: individual retrodock jobs for each docking configuration&lt;br /&gt;
&lt;br /&gt;
The key difference between the working directories of &#039;&#039;blastermaster&#039;&#039; and &#039;&#039;dockopt&#039;&#039; is that the working directory of &#039;&#039;dockopt&#039;&#039; may contain multiple variants of the blaster files (prefixed by a number, e.g. &amp;quot;1_box&amp;quot;). These variant files are used to create the different docking configurations specified by the multi-valued entries of &#039;&#039;dockopt_config.yaml&#039;&#039;. They are created efficiently, such that the same variant used in multiple docking configurations is not created more than once. &lt;br /&gt;
&lt;br /&gt;
If your current working directory contains any of the following files, they will be automatically copied into the working directory within the created job directory. This feature simplifies the process of configuring the dockopt job.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;rec.pdb&#039;&#039;&lt;br /&gt;
* &#039;&#039;xtal-lig.pdb&#039;&#039;&lt;br /&gt;
* &#039;&#039;rec.crg.pdb&#039;&#039;&lt;br /&gt;
* &#039;&#039;reduce_wwPDB_het_dict.txt&#039;&#039;&lt;br /&gt;
* &#039;&#039;filt.params&#039;&#039;&lt;br /&gt;
* &#039;&#039;radii&#039;&#039;&lt;br /&gt;
* &#039;&#039;amb.crg.oxt&#039;&#039;&lt;br /&gt;
* &#039;&#039;vdw.siz&#039;&#039;&lt;br /&gt;
* &#039;&#039;delphi.def&#039;&#039;&lt;br /&gt;
* &#039;&#039;vdw.parms.amb.mindock&#039;&#039;&lt;br /&gt;
* &#039;&#039;prot.table.ambcrg.ambH&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Only the following are required; if the others are not detected, default or generated versions will be used instead.&lt;br /&gt;
* &#039;&#039;rec.pdb&#039;&#039;&lt;br /&gt;
* &#039;&#039;xtal-lig.pdb&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
If you would like to use files not present in your current working directory, copy them into your job&#039;s working directory, e.g.:&lt;br /&gt;
 cp &amp;lt;FILE_PATH&amp;gt; &amp;lt;JOB_DIR_NAME&amp;gt;/working/&lt;br /&gt;
&lt;br /&gt;
Finally, configure the &#039;&#039;dockopt_config.yaml&#039;&#039; file in the job directory to your specifications. The parameters in this file govern the behavior of dockopt.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039; The &#039;&#039;dockopt_config.yaml&#039;&#039; file differs from the &#039;&#039;blastermaster_config.yaml&#039;&#039; file in that every parameter of the former may accept either a single value or a &#039;&#039;list of comma-separated values&#039;&#039;, which indicates a pool of values to attempt for that parameter. Multiple such multi-valued parameters may be provided, and all unique resultant docking configurations will be attempted. &lt;br /&gt;
&lt;br /&gt;
Single-valued YAML line format:&lt;br /&gt;
&lt;br /&gt;
 distance_to_surface: 1.0&lt;br /&gt;
&lt;br /&gt;
Multi-valued YAML line format:&lt;br /&gt;
&lt;br /&gt;
 distance_to_surface: [1.0, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9]&lt;br /&gt;
&lt;br /&gt;
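The way multi-valued entries expand into docking configurations can be sketched as a Cartesian product over the value pools. The parameter names below are illustrative, not the tool&#039;s full config schema:&lt;br /&gt;
&lt;br /&gt;
```python
import itertools

# Hypothetical excerpt of a dockopt_config.yaml, already parsed into a dict;
# the parameter names are illustrative only.
config = {
    "distance_to_surface": [1.0, 1.1, 1.2],
    "thin_sphere_size": [1.0, 1.5],
    "electrostatic_scale": [1.0],  # a single-valued entry contributes one option
}

# Every unique combination of parameter values yields one docking configuration.
keys = list(config)
configurations = [
    dict(zip(keys, values))
    for values in itertools.product(*(config[k] for k in keys))
]

print(len(configurations))  # 3 * 2 * 1 = 6 configurations
```
&lt;br /&gt;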
== Environmental variables ==&lt;br /&gt;
&lt;br /&gt;
Designate where temporary job files should be placed. E.g.:&lt;br /&gt;
&lt;br /&gt;
 export TEMP_STORAGE_PATH=/scratch&lt;br /&gt;
&lt;br /&gt;
In order for &#039;&#039;dockopt&#039;&#039; to know which scheduler to use, configure the following environmental variables according to the job scheduler available on your cluster.&lt;br /&gt;
&lt;br /&gt;
=== Slurm ===&lt;br /&gt;
&lt;br /&gt;
E.g., on the UCSF Shoichet Lab Gimel cluster (on any node other than &#039;gimel&#039; itself, such as &#039;gimel5&#039;):&lt;br /&gt;
&lt;br /&gt;
 export SBATCH_EXEC=/usr/bin/sbatch&lt;br /&gt;
 export SQUEUE_EXEC=/usr/bin/squeue&lt;br /&gt;
&lt;br /&gt;
=== SGE ===&lt;br /&gt;
&lt;br /&gt;
E.g., on the UCSF Wynton cluster:&lt;br /&gt;
&lt;br /&gt;
 export QSTAT_EXEC=/opt/sge/bin/lx-amd64/qstat&lt;br /&gt;
 export QSUB_EXEC=/opt/sge/bin/lx-amd64/qsub&lt;br /&gt;
&lt;br /&gt;
The following is necessary on the UCSF Wynton cluster:&lt;br /&gt;
&lt;br /&gt;
 export SGE_SETTINGS=/opt/sge/wynton/common/settings.sh&lt;br /&gt;
&lt;br /&gt;
On most clusters, this will probably be:&lt;br /&gt;
&lt;br /&gt;
 export SGE_SETTINGS=/opt/sge/default/common/settings.sh&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;run&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
Once your job has been configured to your liking, navigate to the job directory and run &#039;&#039;dockopt&#039;&#039;:&lt;br /&gt;
 cd &amp;lt;JOB_DIR_NAME&amp;gt;&lt;br /&gt;
 pydock3 dockopt - run &amp;lt;JOB_SCHEDULER_NAME&amp;gt; [--retrodock_job_timeout_minutes=None] [--retrodock_job_max_reattempts=0]&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;JOB_SCHEDULER_NAME&amp;gt; is one of:&lt;br /&gt;
* &#039;&#039;sge&#039;&#039;&lt;br /&gt;
* &#039;&#039;slurm&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This executes the dockopt subroutines in sequence, except for the retrodock jobs for each docking configuration, which run in parallel via the scheduler. The state of the program is printed to standard output as it runs.&lt;br /&gt;
&lt;br /&gt;
Once the dockopt job is complete, the following files will be generated in the job directory:&lt;br /&gt;
* &#039;&#039;dockopt_job_report.pdf&#039;&#039;: contains (1) the roc.png of the best retrodock job, (2) box plots of enrichment for every multi-valued config parameter, and (3) heatmaps of enrichment for every pair of multi-valued config parameters&lt;br /&gt;
* &#039;&#039;dockopt_job_results.csv&#039;&#039;: enrichment metrics for each docking configuration&lt;br /&gt;
&lt;br /&gt;
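One common follow-up is to rank the docking configurations by an enrichment metric from &#039;&#039;dockopt_job_results.csv&#039;&#039;. A minimal sketch, assuming hypothetical column names (consult the file produced by your own run for the actual ones):&lt;br /&gt;
&lt;br /&gt;
```python
import csv
import io

# Hypothetical contents of dockopt_job_results.csv; the column names are
# illustrative only, not the exact schema written by dockopt.
sample = """configuration_id,distance_to_surface,enrichment_score
1,1.0,0.62
2,1.1,0.71
3,1.2,0.68
"""

rows = list(csv.DictReader(io.StringIO(sample)))
# Pick the configuration with the highest enrichment score.
best = max(rows, key=lambda r: float(r["enrichment_score"]))
print(best["configuration_id"])  # configuration 2 has the highest enrichment
```
&lt;br /&gt;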
In addition, the best retrodock job will be copied to its own sub-directory &#039;&#039;best_retrodock_job/&#039;&#039;. &lt;br /&gt;
&lt;br /&gt;
Within &#039;&#039;best_retrodock_job/&#039;&#039;, there are the following files and sub-directories:&lt;br /&gt;
* &#039;&#039;dockfiles/&#039;&#039;: parameters files and INDOCK for given docking configuration&lt;br /&gt;
* &#039;&#039;output/&#039;&#039;: contains: &lt;br /&gt;
** joblist&lt;br /&gt;
** sub-directories &#039;&#039;1/&#039;&#039; for actives and &#039;&#039;2/&#039;&#039; for decoys (the former containing OUTDOCK and test.mol2 files, the latter containing just OUTDOCK)&lt;br /&gt;
** log files for the retrodock jobs&lt;br /&gt;
* &#039;&#039;roc.png&#039;&#039;: the ROC enrichment curve (log-scaled x-axis) for given docking configuration&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039; by default, a mol2 file is exported only for actives (&#039;&#039;output/1/&#039;&#039;), not for decoys (&#039;&#039;output/2/&#039;&#039;), in order to prevent disk space issues.&lt;/div&gt;</summary>
		<author><name>Ianscottknight</name></author>
	</entry>
</feed>