Docking Competition
The purpose of docking and virtual screening competitions is to evaluate methods and to help the field of virtual screening advance. Virtual screening is a challenging and important problem. The current state of the art is that there are many programs, each with strengths and weaknesses. Although numerous published attempts to compare docking programs have appeared, there is still considerable scope for skepticism about the way in which these evaluations were carried out.
== About docking and virtual screening ==
Docking refers to the act of positioning a single molecule in a binding site. Virtual screening refers to docking many molecules into a binding site in order to rank them from best to worst. Although docking (pose fidelity) and virtual screening (enrichment) can be evaluated separately, a program that performs well at virtual screening without getting the pose right, at least most of the time, is suspect, because that implies the program succeeds for spurious reasons.
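To make the two metrics concrete, here is a minimal Python sketch (not part of the original page, and not taken from any of the programs discussed): pose fidelity measured as the fraction of poses within an RMSD cutoff of the crystallographic pose, and enrichment measured as an enrichment factor over a ranked screening list. The function names, cutoff, and toy data are illustrative assumptions.

<syntaxhighlight lang="python">
# Illustrative sketch only; names and toy data are hypothetical.

def pose_success_rate(rmsds, threshold=2.0):
    """Fraction of docked poses within `threshold` angstroms (heavy-atom
    RMSD) of the crystallographic pose -- a common pose-fidelity measure."""
    return sum(1 for r in rmsds if r <= threshold) / len(rmsds)

def enrichment_factor(ranked_labels, fraction=0.01):
    """Enrichment factor at the top `fraction` of a ranked screening list.

    `ranked_labels` is a list of 1 (known active) / 0 (decoy), ordered
    from best to worst docking score.  EF = actives found in the top
    fraction / actives expected there by random selection."""
    n = len(ranked_labels)
    n_top = max(1, int(n * fraction))
    actives_total = sum(ranked_labels)
    actives_top = sum(ranked_labels[:n_top])
    expected = actives_total * n_top / n
    return actives_top / expected

# Toy usage: 3 of 4 poses fall within 2 A; actives cluster near the top.
print(pose_success_rate([0.8, 1.5, 3.2, 1.9]))                          # 0.75
print(enrichment_factor([1, 1, 0, 1, 0, 0, 0, 0, 1, 0], fraction=0.2))  # 2.5
</syntaxhighlight>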
== Published accounts comparing docking methods ==
There are many accounts in the literature comparing the docking and virtual screening performance of available programs. Three common objections to many, but not all, of these studies are that
* a. the authors were not equally familiar with all the programs, making for an uneven playing field;
* b. the programs were somehow trained to reproduce results that were known in advance; and
* c. the data sets were unrealistic, or tended to favor one program over another in a way that does not generalize.
This is not an exhaustive list.
* Warren ''et al.'', "A Critical Assessment of Docking Programs and Scoring Functions", ''J. Med. Chem.'' 2006, and references cited therein
* Huang, Shoichet & Irwin, "Benchmarking Sets for Molecular Docking", ''J. Med. Chem.'' 2006, and references cited therein
== Published accounts comparing virtual screening methods ==
This is a stub.
== Previous competitions ==
* competition before that.
* McMaster docking competition (and the associated papers)
* SBS-sponsored docking competition
== Ongoing competitions ==
* [http://sampl.eyesopen.com Statistical Assessment of the Modeling of Proteins and Ligands] (SAMPL), run by OpenEye through March 2008.
== Proposed competitions ==
We are proposing a competition that would work to everyone's benefit. The details are left open here; I hope to write more soon. In the meantime, if you are interested, please write to me and let's see if we can set something up.
=== Virtual Screening 2007-1 ===
* Barry Hardy's blog on the eCheminfo Philadelphia meeting (October 2006), which may lead to some sort of friendly competition.

[[Category:DUD]]
[[Category:Benchmarking]]
[[Category:Docking]]