Selecting tranches in ZINC22

This page describes the current method for selecting tranches to dock in ZINC22.

platforms

We support three platforms:

  • our cluster, .txt suffix
  • Wynton, .wyn suffix
  • AWS, .s3 suffix

If you are on another platform, we suggest using one of these three.

arbitrary subsets

We have created subsets for you (see below), but you can also select subsets at a very fine-grained level:

  • by heavy atom count (HAC)
  • by calculated LogP, in steps of 0.1 units
  • by charge, from -4 to +4

Look in /nfs/exd/zinc-22/sets/
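For example, on our cluster you could browse what is available before choosing a set (an illustration only; the exact file names depend on what is currently in the directory):

ls /nfs/exd/zinc-22/sets/                     # list every pre-built set
ls /nfs/exd/zinc-22/sets/ | grep lead-like    # show only the lead-like sets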

lead-like

  • if you want to dock lead-like molecules, you want
/sets/H17-19.lead-like.<suffix> where <suffix> is txt, wyn or s3 as above
/sets/H20-23.lead-like.<suffix>

Then, depending on whether you want to include H24 and H25, you can optionally include

/sets/H24.lead-like.<suffix>
/sets/H25.lead-like.<suffix>

If you want to add compounds with calculated LogP from 4.0 to 4.9, include

/sets/H17-19.greasy-leads.<suffix> and so on as above
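For instance, to build an input for our cluster covering H17-H23 lead-like plus the LogP 4.0-4.9 add-on, you could concatenate the corresponding files (a sketch; it assumes the greasy-leads files follow the same HAC breakdown as the lead-like ones, and the output name combined.txt is just an example):

cd /nfs/exd/zinc-22/sets/
# lead-like H17-H23 plus the LogP 4.0-4.9 "greasy leads" for the same HAC range
cat H17-19.lead-like.txt H20-23.lead-like.txt H17-19.greasy-leads.txt H20-23.greasy-leads.txt > ~/combined.txt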

fragment-like

  • if you want to dock fragments (H10-H16, LogP < 4), use
/sets/frag-like.<suffix>

greasy molecules

  • if you want to dock molecules with calculated LogP > 5, use
/sets/H04-H19.greasy.<suffix>
/sets/H20-29.greasy.<suffix>
  • if you want to dock big monsters (not recommended, but who knows)
/sets/H26-H29.big.<suffix>
/sets/H26-H29.big-greasy.<suffix>

You can learn how the sets were assembled by reading make-sets.csh. make-sets.csh in turn uses files in /nfs/exd/jjiwork/dirs/, which in turn are created by make-files2.csh in /nfs/exd/jjiwork/.

We can adjust these subsets based on what people want. It is easy; we just need to do it.


real life example

Suppose I want to dock lead-like molecules (H17-H25) on AWS using my S3 bucket results2021. I assemble a file, database210325.txt, from four files:

cat H17-19.lead-like.s3 H20-23.lead-like.s3 H24.lead-like.s3 H25.lead-like.s3 > database210325.txt
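Before uploading, I can sanity-check the combined file (an optional step, not part of the original recipe):

wc -l database210325.txt    # number of tranche entries in the combined list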

I upload this to S3:

aws s3 cp database210325.txt s3://results2021/
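To confirm the upload landed, a standard AWS CLI listing works (assuming your credentials and region are already configured):

aws s3 ls s3://results2021/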

Then I go to AWS and follow the instructions in DOCK on AWS.

I monitor the job as it runs.

Then I harvest the results and download them to my computer.
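As a rough sketch of that last step, assuming the results are written back into the same bucket (the local directory name here is only an example):

aws s3 sync s3://results2021/ ./results2021/    # pull everything from the bucket down to my computer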