Arthor Documentation for Future Developer
From DISI

Written by Jennifer Young on December 16, 2019. Last edited January 05, 2021.
== Introduction ==
[https://www.nextmovesoftware.com/downloads/arthor/documentation/Arthor.pdf Here is the link to Arthor's manual]

* Username: ucsf@nextmovesoftware.com
* Password: <Ask jjiteam@googlegroups.com>
  
Arthor configurations and the frontend files are consolidated in '''/nfs/soft2/arthor_configs/'''.

'''/nfs/soft2/arthor_configs/start_arthor_script.sh''' can start/restart Arthor instances on the respective machines. Launch the script to see the options available.

== How To Download Arthor ==
# Ssh to nfs-soft2 and become root. Prepare the directory:
#: <source>mkdir /export/soft2/arthor_configs/arthor-<version> && cd /export/soft2/arthor_configs/arthor-<version></source>
# [https://www.nextmovesoftware.com/downloads/arthor/ Download the software with this link]
#* Username: ucsf@nextmovesoftware.com
#* Password: <Ask jjiteam@googlegroups.com>
# Go to releases. Look for the '''arthor-<version>.tar.gz''' release archive and copy the link address.
# Download using wget:
#: <source>wget --user ucsf@nextmovesoftware.com --password <Ask jjiteam@googlegroups.com> <link address></source>
# Decompress the file:
#: <source>tar -xvf <file_name></source>

==Install and Set Up on TomCat (Method 1)==
Arthor ran on n-1-136, which runs CentOS Linux release 7.7.1908 (Core). You can check the version of CentOS with the following command:

    cat /etc/centos-release

Check your current version of Java with the following command:

    java -version

On n-1-136 we are running openjdk version "1.8.0_222", OpenJDK Runtime Environment (build 1.8.0_222-b10), and OpenJDK 64-Bit Server VM (build 25.222-b10, mixed mode).
If Java is not installed, install it using yum.

See this wiki page for more detailed information about installing Tomcat on our cluster:
http://wiki.docking.org/index.php/Tomcat_Installation
  
==Open port for Arthor==
In order for Arthor to be usable in the browser, the port you wish to run it on must be opened.
https://www.thegeekdiary.com/how-to-open-a-ports-in-centos-rhel-7/

===Step 1: Check Port Status===
Check that the port is not open and that Apache is not showing that port.

    netstat -na | grep <port number you are checking>

    lsof -i -P | grep http

===Step 2: Check Port Status in IP Tables===
    iptables-save | grep <port number you are checking>

I skipped Step 3 from the guide, because there was a lot of information in the /etc/services file and I didn't want to edit it and break something.

===Step 4: Open Firewall Ports===
I did not include the zone=public section because the stand-alone servers are usually used for private instances of Arthor and SmallWorld.

Run as root:

    firewall-cmd --add-port=<port number you are adding>/tcp --permanent

You need to reload the firewall after a change is made:

    firewall-cmd --reload

===Step 5: Check that the port is working===
To check that the port is active, run:

    iptables -nL

You should see something along the lines of:

    ACCEPT    tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:<port number you're adding> ctstate NEW,UNTRACKED

== How To Launch Arthor For The First Time ==

=== Prepare Files and Directories ===
# Ssh to nfs-exc and become root
# Open a port in the firewall
#: <source>firewall-cmd --permanent --add-port=<port_number>/tcp
firewall-cmd --reload
</source>
# Go to the Arthor config directory
#: <source>cd /export/soft2/arthor_configs/arthor-<latest_version></source>
# Create an Arthor config file
#: <source>vim <name_of_file>.cfg</source>
#* Add these lines to the file. Check the manual for more options.
#: <source>
DataDir=/local2/public_arthor
MaxConcurrentSearches=6
MaxThreadsPerSearch=8
AutomaticIndex=false
AsyncHitCountMax=20000
Depiction=./depict/bot/svg?w=%w&h=%h&svgunits=px&smi=%s&zoom=0.8&sma=%m&smalim=1
Resolver=https://sw.docking.org/util/smi2mol?smi=%s
</source>

=== Start Arthor Instance ===
# Now ssh into the machine you wish to run an Arthor instance on and become root
# Change your shell to bash if you haven't already
#: <source>bash</source>
# Create a screen
#: <source>screen -S <screen_name></source>
# Set the Arthor config path
#: <source>export ARTHOR_CONFIG="/nfs/soft2/arthor_configs/arthor-<version>/<name_of_config_file>.cfg"</source>
# Launch java
#: <source>java -jar /nfs/soft2/arthor_configs/arthor-<version>/arthor-<version>-centos7/java/arthor.jar --httpPort=<port_number></source>

=== Configuration Details ===
*'''DataDir''': The directory where the Arthor data files live; the index files are created in and loaded from here.
*'''MaxConcurrentSearches''': Controls the maximum number of searches that can be run concurrently by setting the database pool size. When switching between a large number of databases it can be useful to have a larger pool size; the only trade-off is keeping file pointers open.
*'''MaxThreadsPerSearch''': The number of threads to use for both ATDB and ATFP searches.
*'''AutomaticIndex''': Set to false if you don't want new smiles files added to the data directory to be indexed automatically.
*'''AsyncHitCountMax''': The upper bound for the number of hits to retrieve in background searches.
*'''Resolver''': Uses the SmallWorld API so the input box can take a SMILES string and automatically draw it on the board.

'''Check the Arthor manual for more configuration options.'''
 
  
== How to Build Arthor Databases ==
We can build Arthor databases anywhere. Consolidate the smiles files into one directory so you can index them all one by one.

Just use the script located at '''/nfs/home/jjg/scripts/arthor_index_script.sh''', run from the directory where you consolidated the smiles files.

Here is the content of the script:
<source>
#!/bin/bash

version="3.4.2"

export ARTHOR_DIR=/nfs/soft2/arthor_configs/arthor-$version/arthor-$version-centos7/
export PATH=$ARTHOR_DIR/bin/:$PATH

target="*.smi"

for j in $target
do
        echo 'smi2atdb -j 4 -p '$j' '${j}'.atdb'
        smi2atdb -j 4 -p $j ${j}.atdb
        echo 'atdb2fp -j 4 '$j'.atdb'
        atdb2fp -j 4 ${j}.atdb
done
</source>

=== Command Details ===
'''smi2atdb''' creates the atdb files needed for substructure searching.
*'''-j''' is the number of threads to use to index the smiles file
*'''-p''' stores the position in the original file
'''atdb2fp''' makes substructure searching faster.

==How to run standalone Arthor instance==

===Step 1: Use or start a bash shell===
You can check your default shell using

    echo $SHELL

If your default shell is csh, use

    bash

to start a new bash shell in the current terminal window. Note that echo $SHELL will show you your default shell regardless of the current shell.

===Step 2: Set your environment variables===
    export ARTHOR_DIR=/opt/nextmove/arthor/arthor-3.1-centos7

    export PATH=$ARTHOR_DIR/bin/:$PATH

Make sure the ARTHOR_DIR variable is set to the directory for the latest version of Arthor, or whichever version you would like to test.
The PATH environment variable is needed if you wish to use the Arthor tools from the command line.

===Step 3: Run the arthor-server.jar===
    java -jar /opt/nextmove/arthor/arthor-3.0-rt-beta-linux/java/arthor-server.jar --httpPort <your httpPort>

==Setting environment variables for TomCat Server==
Set the environment variables in the setenv.sh file. Note: be sure to edit the file in the directory corresponding to the latest version of TomCat. As of December 2019, we are running 9.0.27 on n-1-136.

  vim /opt/tomcat/apache-tomcat-9.0.27/bin/setenv.sh

Add the line below to the setenv.sh file above, or substitute the path to wherever you currently store the arthor.cfg file:

  export ARTHOR_CONFIG=/usr/local/tomcat/arthor.cfg
Here is an example of the arthor.cfg file:
 
  # Arthor generated config file
 
  BINDIR=/opt/nextmove/arthor/arthor-2.1.2-centos7/bin
 
  DATADIR=/usr/local/tomcat/arthor_data
 
  STAGEDIR=/usr/local/arthor_data/stage
 
  NTHREADS=64
 
  NODEAFFINITY=true
 
  SearchAsYouDraw=true
 
  AutomaticIndex=true
 
  DEPICTION=./depict/bot/svg?w=%w&h=%h&svgunits=px&smi=%s&zoom=0.8&sma=%m&smalim=1
 
  RESOLVER=
 
 
 
'''Important parts of the arthor.cfg file'''
 
 
 
'''BINDIR''' is the location of the Arthor command line binaries.  These are used to generate the Arthor index files and to perform searches directly on n-1-136.  An example of this would be using atdbgrep for substructure search.  
 
 
 
'''DATADIR''' This is the directory where the Arthor data files live.  Location where the index files will be created and loaded from.
 
 
 
'''STAGEDIR''' Location where the index files will be built before being moved into the DATADIR.
 
 
 
'''NTHREADS''' The number of threads to use for both ATDB and ATFP searches
 
 
 
Set '''AutomaticIndex''' to false if you don't want new smiles files added to the data directory to be indexed automatically
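Since a typo in arthor.cfg only shows up when the server misbehaves, it can be handy to sanity-check a config file before restarting. This is a hypothetical helper, not part of Arthor; it only handles simple KEY=VALUE lines like the example above:

```python
def parse_cfg(text):
    """Parse simple KEY=VALUE lines like those in arthor.cfg.
    Blank lines and '#' comments are skipped; values keep any '='
    characters after the first one (useful for the DEPICTION URL)."""
    settings = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        settings[key.strip()] = value.strip()
    return settings

example = """
# Arthor generated config file
DATADIR=/usr/local/tomcat/arthor_data
NTHREADS=64
AutomaticIndex=true
RESOLVER=
"""
print(parse_cfg(example)["NTHREADS"])  # → 64
```

Printing `parse_cfg(example)` before a restart makes it obvious when a key is misspelled or a value went missing.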
 
 
 
==Background==
 
Before working with Arthor, it is recommended that you familiarize yourself with the Arthor documentation. Some useful pages to look at include 3-5, 22-25 and 33-39. Of course, reading everything would be the best!
 
 
 
==Checking Disk Space Usage==
 
Before building Arthor indexes, it's always a good idea to check what percentage of the disk is in use. Be cautious about how much space you have left, and keep checking while building indexes to make sure that you have enough. To check, run the following command:
 
 
 
  df -h /<directory with disc>
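The same check can be scripted so an index build refuses to start on a nearly full disk. A minimal Python sketch (the 90% threshold is an arbitrary example, not a site policy):

```python
import shutil

def disk_usage_percent(path):
    """Return the percentage of the filesystem at `path` that is in use,
    like the Use% column printed by `df -h`."""
    usage = shutil.disk_usage(path)
    return 100.0 * usage.used / usage.total

# Refuse to start an index build when the disk is nearly full.
if disk_usage_percent("/") > 90.0:
    print("Low disk space; free some room before building indexes")
else:
    print("Enough space to build indexes")
```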
 
 
 
==Building Large Databases==
 
At the moment, we are building databases of roughly 500M molecules by merging smiles files. There are multiple ways to create large databases; one is merging files that share the same H?? prefix and stopping once the database exceeds 500M molecules (or whatever upper bound you want to use). Here is some Python code that performs this merging. The program takes all of the .smi files within an input directory, sorts them lexicographically, and merges them together in order until the size exceeds 500M molecules.
 
 
 
Feel free to modify it if you think a better method exists.
 
 
 
  import subprocess
  import sys
  import os
  
  from os import listdir
  from os.path import isfile, join
  
  mypath = "<Path to directory holding .smi files>"
  onlyfiles = [f for f in listdir(mypath) if isfile(join(mypath, f))]
  onlyfiles.sort()
  
  create_fp = True
  cur_mols = 0
  lower_bound = 500000000
  upper_bound = 600000000
  files_to_merge = []
  
  def merge_files(f_t_m):
      arr = f_t_m[0].split(".")
      arr2 = f_t_m[len(f_t_m) - 1].split(".")
      file_name_merge = (arr[0] + "_" + arr2[0] + ".smi")
      print("File being created: " + file_name_merge)
      for file in f_t_m:
          process = subprocess.Popen("cat " + join(mypath, file) + " >> " + file_name_merge, shell=True)
          process.wait()
  
  for file in onlyfiles:
      arr = file.split(".")
      if (arr[len(arr) - 1] == "smi"):
          print("Working with " + file)
          mol = sum(1 for line in open(join(mypath, file)))
          print(file, mol, cur_mols)
          if (cur_mols + mol > lower_bound):
              if (cur_mols + mol < upper_bound):
                  files_to_merge.append(file)
                  merge_files(files_to_merge)
                  cur_mols = 0
                  files_to_merge.clear()
              else:
                  merge_files(files_to_merge)
                  files_to_merge.clear()
                  files_to_merge.append(file)
                  merge_files(files_to_merge)
                  cur_mols = 0
                  files_to_merge.clear()
          else:
              cur_mols += mol
              files_to_merge.append(file)
  
  if (len(files_to_merge) != 0):
      merge_files(files_to_merge)
 
 
 
==Building Arthor Indexes==
 
Once you've merged the .smi files together, it's time to start building the databases themselves. To do this we use the command
 
 
 
  smi2atdb -j 0 -p <The .smi file> <The .atdb>
 
 
 
The flag "-j 0" enables parallel generation and utilizes all available processors to generate the .atdb file. The "-p" flag stores the offset position in the ATDB file. Since we're building indexes for the Web Application, you must use the "-p" flag when building indexes. Please note that the name of the .smi file should also be the name of the .atdb file. That way, the Web Application knows to use these files together and correctly display the required images. Refer to pages 33-34 in the Arthor documentation for more information.
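Because the web application pairs files purely by name, it is worth verifying the pairing before pointing a server at a data directory. The hypothetical helper below assumes indexes are named by appending .atdb to the .smi filename, as the indexing script on this page produces; adjust if your site names indexes differently:

```python
import os

def missing_indexes(datadir):
    """Return a dict mapping each .smi file in `datadir` to the list of
    expected index files (.atdb and .atdb.fp) that are not present."""
    problems = {}
    for name in sorted(os.listdir(datadir)):
        if not name.endswith(".smi"):
            continue
        absent = [name + ext for ext in (".atdb", ".atdb.fp")
                  if not os.path.exists(os.path.join(datadir, name + ext))]
        if absent:
            problems[name] = absent
    return problems
```

Run it against the data directory before restarting the web application; an empty dict means every smiles file has its indexes.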
 
 
 
If there are too many large .smi files and you do not want to manually build each .atdb file, you can use this python script which takes all of the .smi files in the current directory and converts them to .atdb files. Make sure to modify mypath to the directory containing the .smi files. You can change the variable "create_fp" to false if you don't want to create .atdb.fp files (refer to page 9 in the Arthor documentation).
 
 
 
  import subprocess
  import sys
  import os
  
  from os import listdir
  from os.path import isfile, join
  
  mypath = "<Path containing the .smi files>"
  onlyfiles = [f for f in listdir(mypath) if isfile(join(mypath, f))]
  
  create_fp = True
  
  for file in onlyfiles:
      arr = file.split(".")
      if (arr[len(arr) - 1] == "smi"):
          process = subprocess.Popen("/nfs/ex9/work/xyz/psql/arthor-3.3-centos7/bin/smi2atdb -j 0 -p {0} {1}.atdb".format(join(mypath, file), arr[0]), shell=True)
          process.wait()
          print("SUCCESS! {0}.atdb file was created!".format(arr[0]))
          if (create_fp):
              process = subprocess.Popen("/nfs/ex9/work/xyz/psql/arthor-3.3-centos7/bin/atdb2fp -j 0 {0}.atdb".format(arr[0]), shell=True)
              process.wait()
              print("SUCCESS! {0}.atdb.fp file was created!".format(arr[0]))
 
 
 
==Uploading Indexes to the Web Application==
 
One can upload indexes to the Web Application by changing the "DATADIR" variable in the arthor.cfg file to the directory holding the .atdb files. This is already set up on n-1-136 and n-5-34.
 
 
 
==Further Arthor Optimizations==
 
The following edits can be made to the arthor.cfg to optimize substructure search queries. More information can be found in pages 6-8 in the Arthor Documentation file.
 
 
 
 
 
'''NodeAffinity''': a NUMA optimization flag; pins processing to the specific CPU sets closest to where the data is located in memory. There is a small start-up cost, so it is most useful for long-running services (see Non-Uniform Memory Access (NUMA)).
 
 
 
 
 
'''AsyncHitCountAllowed=true|false''': After fetching a page from a substructure or formula search, the server will spin off a background process to count the total number of hits. This can be resource-intensive for large databases, may not be desirable for servers under heavy load, and may not even be needed.
 
 
 
 
 
'''AsyncHitCountMax=#''': The upper bound for the number of hits to retrieve in background searches. If very generic queries are issued (e.g. benzene or methane), hundreds of millions of hits may be counted. Setting this value to anything other than zero (e.g. 10,000) will stop the background search if it exceeds this limit. Note that some pathological queries may find very few hits but still end up looking at everything.
 
 
 
 
 
'''MaxConcurrentSearches=#''' Controls the maximum number of searches that can be run concurrently by setting the database pool size. The searches may be on the same or different databases. If a search comes in and the pool is full it will have to wait for another search to finish - this increases the request time.

 
 
 
Typically, if each search is using all the processing cores on a machine, then additional searches will run at 1/Nth the speed. If the request time is substantially larger than the search time, the request had to wait for resources to become available. When switching between a large number of databases it can be useful to have a larger pool size; the only trade-off is keeping file pointers open.
Default: 6
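The pool behaviour can be pictured with a semaphore. This toy model (not Arthor's actual implementation) shows that once the pool is full, extra requests spend time queueing for a slot, which is exactly the request-time-versus-search-time gap described above:

```python
import threading
import time

class SearchPool:
    """Toy model of MaxConcurrentSearches: a fixed number of slots."""
    def __init__(self, max_concurrent):
        self._slots = threading.Semaphore(max_concurrent)

    def search(self, duration):
        requested = time.monotonic()
        with self._slots:            # waits here if the pool is full
            waited = time.monotonic() - requested
            time.sleep(duration)     # pretend to scan the database
        return waited                # time spent queueing for a slot

pool = SearchPool(max_concurrent=2)
waits = []
threads = [threading.Thread(target=lambda: waits.append(pool.search(0.1)))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# With four concurrent searches and two slots, two requests had to queue.
print(sum(1 for w in waits if w > 0.05))  # → 2
```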
 
 
 
 
 
'''Binary Fingerprint Folding''' Arthor uses binary circular fingerprints (ECFP4/radius=2) for similarity. When creating an ATFP index you can specify how large to make your fingerprints. Circular fingerprints are sparser than path based fingerprints (e.g. Daylight) and so can be folded smaller without too much degradation in performance. Folding can significantly reduce the footprint size of a database and improve search speeds. A 256-bit fingerprint takes up 1/4 of the space of 1024-bit and can therefore be traversed 4x faster.
 
 
 
This is more important for very large databases with billions of compounds, in such instances a minor drop in precision is likely tolerable as ultimately all that happens is some hits may swap places in the hit list.
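Folding can be pictured as OR-ing bit positions modulo the new size. This small sketch (illustrative only, not NextMove's code) shows why folding loses little for sparse circular fingerprints — only colliding positions merge:

```python
def fold(on_bits, size):
    """Fold a sparse fingerprint, given as a set of 'on' bit positions,
    down to `size` bits by mapping position p to p % size and OR-ing."""
    folded = 0
    for p in on_bits:
        folded |= 1 << (p % size)
    return folded

# A sparse "1024-bit" fingerprint with six bits set...
fp = {3, 130, 258, 515, 700, 901}
# ...folded to 256 bits keeps almost all of them distinct: only
# positions 3 and 515 collide (515 % 256 == 3), so five bits survive.
print(bin(fold(fp, 256)).count("1"))  # → 5
```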
 
 
 
==Virtual Memory==
 
In addition to modifying the arthor.cfg file, virtual memory settings can also be used to make queries faster. More information can be found in pages 10-16 of the Arthor Documentation.
 
  
 
==Setting up Round Table==
This is a new feature in Arthor 3.0 and is currently beta (January 2020). See section 2.4 in the manual.

As explained in the manual, "Round Table allows you to serve and split chemical searches across multiple host machines. The implementation provides a lightweight proxy that forwards requests to other Arthor host servers that do the actual search. Communication is done using the existing Web APIs."

Since Arthor requires CentOS 7, as of January 2020 we have 6 servers that are capable of running Arthor with Round Table.

===Setting up Host Server===
If we want to add machines to the Round Table, for example 'nun' and 'samekh', we need to edit their arthor.cfg files so that when our local machine passes commands, these secondary servers know to perform the search they are given.

  $ cat arthor.cfg
  MaxThreadsPerSearch=4
  AutomaticIndex=false
  DATADIR=<Directory where smiles are located>

We then run the jar server on each of these host machines containing data, on any available port:

  java -jar /nfs/ex9/work/xyz/psql/arthor-3.3-centos7/java/arthor.jar --httpPort <port>

For our local machine, the arthor.cfg file will look different:

  $ cat arthor.cfg
  [RoundTable]
  RemoteClient=http://skynet:<port number where jar server is running>/
  RemoteClient=http://hal:<port number where jar server is running>/

Please refer to Section 2 in the RoundTable Documentation file (pages 6-8) for more useful information on configuration.

Then run the following command on n-1-136:

  java -jar /nfs/ex9/work/xyz/psql/arthor-3.3-centos7/java/arthor.jar --httpPort <port>

===Setting up the Round Table Head===
# Ssh to nfs-soft2 and become root
# Open a port in the firewall
#: <source>firewall-cmd --permanent --add-port=<port_number>/tcp
firewall-cmd --reload
</source>
# Go to the Arthor config directory
#: <source>cd /export/soft2/arthor_configs/arthor-<version></source>
# Create a Round Table Head configuration file. Here is an example:
#: <source>
[RoundTable]
RemoteClient=http://10.20.0.41:8008
RemoteClient=http://10.20.5.19:8008
Resolver=https://sw.docking.org/util/smi2mol?smi=%s
</source>
# Now ssh into the machine you wish to run the Round Table head on and become root
# Change your shell to bash if you haven't already
#: <source>bash</source>
# Create a screen
#: <source>screen -S <screen_name></source>
# Set the Arthor config path
#: <source>export ARTHOR_CONFIG="/nfs/soft2/arthor_configs/arthor-<version>/<round_table_head>.cfg"</source>
# Launch java
#: <source>java -jar /nfs/soft2/arthor_configs/arthor-<version>/arthor-<version>-centos7/java/arthor.jar --httpPort=<port_number></source>
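Conceptually, the Round Table head just fans one query out to every RemoteClient and combines the answers. The sketch below illustrates that idea with in-process stand-ins for the host servers (the callables and SMILES strings are made up; the real proxy talks to the hosts' Web APIs over HTTP):

```python
from concurrent.futures import ThreadPoolExecutor

def round_table_search(query, remote_clients):
    """Sketch of the Round Table idea: send the same query to every
    host server in parallel and concatenate their hit lists.
    `remote_clients` maps a host name to a callable standing in for
    that host's search API."""
    with ThreadPoolExecutor(max_workers=len(remote_clients)) as pool:
        futures = {host: pool.submit(fetch, query)
                   for host, fetch in remote_clients.items()}
        hits = []
        for host, fut in futures.items():
            hits.extend((host, hit) for hit in fut.result())
    return hits

# Two fake host servers, each holding a slice of the database.
clients = {
    "samekh": lambda q: [s for s in ["CCO", "CCN"] if q in s],
    "nun":    lambda q: [s for s in ["CCCl", "OCO"] if q in s],
}
print(round_table_search("CC", clients))
# → [('samekh', 'CCO'), ('samekh', 'CCN'), ('nun', 'CCCl')]
```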
  
 +
== Active Arthor Instances ==

===Public Arthor===
{| class="wikitable"
! CentOS 7 Machine
! Port
! Total Files Size
! Arthor Install Location
! Round Table Data Directory
! Active
|-
| samekh
| 10.20.0.41:8000
| 2.4TB
| /opt/nextmove/arthor/arthor-3.3-centos7/
| /local2/public_arthor/
| active
|-
| nun
| 10.20.0.40:8000
| 2.4TB
| /opt/nextmove/arthor/arthor-3.3-centos7/
| /local2/public_arthor/
| active
|-
| n-9-22
| 10.20.9.22:8000
| 2.4TB
| /opt/nextmove/arthor/arthor-3.3-centos7/
| /export/db4/public_arthor/
| active
|}
{| class="wikitable"
! CentOS 7 Machine
! Port
! Database
! Arthor Install Location
! Round Table Data Directory
! Active
|-
| samekh
| 10.20.0.41:8080
| Enamine_REAL_Q2-2020-All-13B
| /opt/nextmove/arthor/arthor-3.3-centos7/
| /local2/arthor_database/
| active
|-
| nun
| 10.20.0.40:8080
| Enamine_REAL_Q2-2020-All-41B
| /opt/nextmove/arthor/arthor-3.3-centos7/
| /local2/arthor_database/
| active
|}
{| class="wikitable"
! CentOS 7 Machine
! Port
! Database
! Total Files Size
! Arthor Install Location
! Round Table Data Directory
! Active
|-
| samekh
| 10.20.0.41:8008
| Enamine_REAL_Q2-2020-All-13B (26 slices)
| 4.5TB
| /opt/nextmove/arthor/arthor-3.3-centos7/
| /local2/arthor_database/
| active
|-
| nun
| 10.20.0.40:8008
| Enamine_REAL_Space_June_2020_S41B (aa-ae, 5 slices), Enamine_REAL_Space_June_2020_M41B (af-am, 8 slices), zinc22_2d (H04~H25, 22 slices)
| 5.6TB
| /opt/nextmove/arthor/arthor-3.3-centos7/
| /local2/arthor_database/
| active
|-
| nfs-exd
| 10.20.1.113:8008
| Enamine_REAL_Space_June_2020_M41B (aa-ae, 5 slices), zinc22_2d (H25~H29, 4 slices)
| 3.7TB
| /opt/nextmove/arthor/arthor-3.3-centos7/
| /export/exd/arthor_database/
| active
|-
| nfs-exh
| 10.20.5.19:8008
| Enamine_REAL_Space_June_2020_M41B (an~az, 13 slices)
| 5.6TB
| /opt/nextmove/arthor/arthor-3.3-centos7/
| /export/exh/arthor_database/
| active
|-
| n-5-33
| 10.20.5.33:8008
| Enamine_REAL_Space_June_2020_M41B (ba-bl, 12 slices)
| 5.3TB
| /opt/nextmove/arthor/arthor-3.3-centos7/
| /local2/arthor_database/
| active
|}
  
===Arthor Local 8081 (Datasets all local to samekh/nun)===
{| class="wikitable"
! CentOS 7 Machine
! Port
! Database
! Total Files Size
! Arthor Install Location
! Round Table Data Directory
! Active
|-
| samekh
| 10.20.0.41:8081
| Enamine_REAL_Q2-2020-All-13B (26 slices)
| 4.5TB
| /opt/nextmove/arthor/arthor-3.3-centos7/
| /local2/arthor_local_8081/
| active
|-
| nun
| 10.20.0.40:8081
| Enamine_REAL_Space_June_2020_S41B (aa-ae, 5 slices), Enamine_REAL_Space_June_2020_M41B (aa-an, 14 slices)
| 4.3TB
| /opt/nextmove/arthor/arthor-3.3-centos7/
| /local2/arthor_local_8081/
| active
|}

== Customizing Arthor Frontend to our needs ==
The frontend Arthor code is located at '''/nfs/exc/arthor_configs/*''', where the '''*''' depends on the currently running version.

=== Add Arthor Download Options ===
==== For Arthor 3.4: ====
1. vim .extract/webapps/ROOT/WEB-INF/static/index.html

2. search: '''arthor_tsv_link'''

3. in the div with class="dropdown-content", add these link options and change the numbers accordingly:

               <a id="arthor_tsv_link" href="#"> TSV-500</a>
               <a id="arthor_tsv_link_5000" href="#"> TSV-5,000</a>
               <a id="arthor_tsv_link_50000" href="#"> TSV-50,000</a>
               <a id="arthor_tsv_link_100000" href="#"> TSV-100,000</a>
               <a id="arthor_tsv_link_max" href="#"> TSV-max</a>
               <a id="arthor_csv_link" href="#"> CSV-500</a>
               <a id="arthor_csv_link_5000" href="#"> CSV-5,000</a>
               <a id="arthor_csv_link_50000" href="#"> CSV-50,000</a>
               <a id="arthor_csv_link_100000" href="#"> CSV-100,000</a>
               <a id="arthor_csv_link_max" href="#"> CSV-max</a>
               <a id="arthor_sdf_link" href="#"> SDF-500</a>
               <a id="arthor_sdf_link_5000" href="#"> SDF-5,000</a>
               <a id="arthor_sdf_link_50000" href="#"> SDF-50,000</a>
               <a id="arthor_sdf_link_100000" href="#"> SDF-100,000</a>
               <a id="arthor_sdf_link_max" href="#"> SDF-max</a>

4. then vim .extract/webapps/ROOT/WEB-INF/static/js/index.js

5. search: '''function $(t){'''

6. in the function $(t), add these lines:

 if (document.getElementById("arthor_tsv_link")) {
         var e=i.a.param({query:s.b.query,type:s.b.type,draw:0,start:0,length:t,flags:s.b.flags}),n=s.b.url+"/dt/"+E(s.b.table)+"/search";i()("#arthor_sdf_link").attr("href",n+".sdf?"+e),i()("#arthor_tsv_link").attr("href",n+".tsv?"+e),i()("#arthor_csv_link").attr("href",n+".csv?"+e)
 }
 if (document.getElementById("arthor_tsv_link_5000")) {
         var e=i.a.param({query:s.b.query,type:s.b.type,draw:0,start:0,length:5000,flags:s.b.flags}),n=s.b.url+"/dt/"+E(s.b.table)+"/search";i()("#arthor_sdf_link_5000").attr("href",n+".sdf?"+e),i()("#arthor_tsv_link_5000").attr("href",n+".tsv?"+e),i()("#arthor_csv_link_5000").attr("href",n+".csv?"+e)
 }
 if (document.getElementById("arthor_tsv_link_50000")) {
         var e=i.a.param({query:s.b.query,type:s.b.type,draw:0,start:0,length:50000,flags:s.b.flags}),n=s.b.url+"/dt/"+E(s.b.table)+"/search";i()("#arthor_sdf_link_50000").attr("href",n+".sdf?"+e),i()("#arthor_tsv_link_50000").attr("href",n+".tsv?"+e),i()("#arthor_csv_link_50000").attr("href",n+".csv?"+e)
 }
 if (document.getElementById("arthor_tsv_link_100000")) {
         var e=i.a.param({query:s.b.query,type:s.b.type,draw:0,start:0,length:100000,flags:s.b.flags}),n=s.b.url+"/dt/"+E(s.b.table)+"/search";i()("#arthor_sdf_link_100000").attr("href",n+".sdf?"+e),i()("#arthor_tsv_link_100000").attr("href",n+".tsv?"+e),i()("#arthor_csv_link_100000").attr("href",n+".csv?"+e)
 }
 if (document.getElementById("arthor_tsv_link_max")) {
         var e=i.a.param({query:s.b.query,type:s.b.type,draw:0,start:0,length:1000000000,flags:s.b.flags}),n=s.b.url+"/dt/"+E(s.b.table)+"/search";i()("#arthor_sdf_link_max").attr("href",n+".sdf?"+e),i()("#arthor_tsv_link_max").attr("href",n+".tsv?"+e),i()("#arthor_csv_link_max").attr("href",n+".csv?"+e)
 }

=== Take out Similarity Button ===
vim .extract/webapps/ROOT/WEB-INF/static/index.html

search: '''Similarity'''

Comment out this line: '''< li value="Similarity" onclick="setSearchType(this)" class="first"> Similarity </li >''' (spaces added at the beginning and end of the tags to prevent the wiki from converting them)

Then add "first" to Substructure's class.

=== Hyperlink to zinc20 ===
vim .extract/webapps/ROOT/WEB-INF/static/js/index.js

search: '''table_name'''

*find this line: "< b>" + d + "< /b>"
*replace with: '''"< b><a target='_blank' href='https://zinc20.docking.org/substances/"+d+"'>" + d + "</a></b >"''' (spaces added at the beginning and end of the tags to prevent the wiki from converting them)

=== Make Input Box Work ===
At the end of the Arthor config file add this:
    Resolver=https://sw.docking.org/util/smi2mol?smi=%s
To copy smiles in the input box:
    vim .extract/webapps/ROOT/WEB-INF/static/js/index.js
    search for: "var e=t.src.smiles()"
    add this after the semicolon:
        document.getElementById("ar_text_input").value = e;

=== Add Ask Questions ===
# Go to the frontend directory
#: <source>vim /export/soft2/arthor_configs/arthor-3.4.2/.extract/webapps/ROOT/WEB-INF/static/index.html</source>
# Search for opt-box-border
# Create another div between the two big divs
# Add this:
#: <source>
<div class="opt-box-border">
        <label>Ask Questions</label>
        Email us: jjiteam@googlegroups.com
</div>
</source>

== Restarting Arthor Instance(s) Instructions ==
# Ssh to the machine with the respective Arthor instance and become root
# Execute '''run_arthors_on_reboot.sh''' to restart all instances on the machine
#: <source>
bash /root/run_arthors_on_reboot.sh
</source>
# Execute '''start_arthor_script.sh''' to restart a specific Arthor instance. It will show you options to choose from.
#: <source>
bash /nfs/soft2/arthor_configs/start_arthor_script.sh
</source>

Latest revision as of 01:07, 29 January 2022

Introduction

Here is the link to Arthor's manual

  • Username: ucsf@nextmovesoftware.com
  • Password: <Ask jjiteam@googlegroups.com>

Arthor configurations and the frontend files are consolidated in /nfs/soft2/arthor_configs/.

/nfs/soft2/arthor_configs/start_arthor_script.sh can start/restart Arthor instances on respective machines.

Launch the script to see the options available.

How To Download Arthor

  1. Ssh to nfs-soft2 and become root. Prepare directory
     mkdir /export/soft2/arthor_configs/arthor-<version> && cd /export/soft2/arthor_configs/arthor-<version>
  2. Download Software with this link
    • Username: ucsf@nextmovesoftware.com
    • Password: <Ask jjiteam@googlegroups.com>
  3. Go to releases. Look for smallworld-java-<version>.tar.gz and copy the link address.
  4. Download using wget
     wget --user ucsf@nextmovesoftware.com --password <Ask jjiteam@googlegroups.com> <link address>
  5. Decompress the file
    •  tar -xvf <file_name>

How To Launch Arthor For The First Time

Prepare Files and Directories

  1. Ssh to nfs-exc and become root
  2. Open a port in the firewall
    firewall-cmd --permanent --add-port=<port_number>/tcp 
    firewall-cmd --reload
  3. Go to Arthor Config directory
    cd /export/soft2/arthor_configs/arthor-<latest_version>
  4. Create an Arthor config file
    vim <name_of_file>.cfg
    • Add these lines in the file. Check the manual for more options.
    DataDir=/local2/public_arthor
    MaxConcurrentSearches=6
    MaxThreadsPerSearch=8
    AutomaticIndex=false
    AsyncHitCountMax=20000
    Depiction=./depict/bot/svg?w=%w&h=%h&svgunits=px&smi=%s&zoom=0.8&sma=%m&smalim=1
    Resolver=https://sw.docking.org/util/smi2mol?smi=%s

Start Arthor Instance

  1. Now ssh into a machine you wish to run an Arthor instance on and become root
  2. Change your shell to bash if you havn't already
    bash
  3. Create a screen
    screen -S <screen_name>
  4. Prepare Arthor Config Path
    export ARTHOR_CONFIG="/nfs/soft2/arthor_configs/arthor-<version>/<name_of_config_file>.cfg"
  5. Launch java
    java -jar /nfs/soft2/arthor_configs/arthor-<version>/arthor-<version>-centos7/java/arthor.jar --httpPort=<port_number>

Configuration Details

  • DataDir: The directory where the Arthor data files live; index files are created in and loaded from this location.
  • MaxConcurrentSearches: Controls the maximum number of searches that can run concurrently by setting the database pool size. When switching between a large number of databases it can be useful to have a larger pool size; the only trade-off is keeping file pointers open.
  • MaxThreadsPerSearch: The number of threads to use for both ATDB and ATFP searches.
  • AutomaticIndex: Set to false if you don't want new SMILES files added to the data directory to be indexed automatically.
  • AsyncHitCountMax: The upper bound for the number of hits to retrieve in background searches.
  • Resolver: Uses the SmallWorld API so the input box can accept a SMILES string and automatically draw it on the sketch board.

Check the Arthor manual for more configuration options.

How to Build Arthor Databases

We can build Arthor databases anywhere. Consolidate the SMILES files into one directory so you can index them all one by one.

Run the script located at /nfs/home/jjg/scripts/arthor_index_script.sh from the directory where you consolidated the SMILES files.

Here is the content of the script:

#!/bin/bash

version="3.4.2"

export ARTHOR_DIR=/nfs/soft2/arthor_configs/arthor-$version/arthor-$version-centos7/
export PATH=$ARTHOR_DIR/bin/:$PATH

target="*.smi"

for j in $target
do
        echo "smi2atdb -j 4 -p $j ${j}.atdb"
        smi2atdb -j 4 -p "$j" "${j}.atdb"
        echo "atdb2fp -j 4 ${j}.atdb"
        atdb2fp -j 4 "${j}.atdb"
done

Command Details

smi2atdb creates the atdb files needed for Substructure searching.

  • -j is the number of threads used to index the SMILES file
  • -p stores the position of the original file

atdb2fp precomputes a fingerprint file from the ATDB, which makes substructure searching faster.
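
After indexing, it can be worth verifying that nothing was skipped. The helper below is hypothetical (not an Arthor tool): it reports any .smi file in a directory that has no matching .atdb index beside it, following the <file>.smi → <file>.smi.atdb naming used by the script above.

```shell
#!/bin/sh
# Hypothetical post-indexing check (not part of Arthor): report any .smi file
# in a directory that lacks a matching .atdb index next to it.
check_atdb_indexes() {
    dir="${1:-.}"
    missing=0
    for smi in "$dir"/*.smi; do
        [ -e "$smi" ] || continue        # glob matched nothing
        if [ ! -f "$smi.atdb" ]; then
            echo "missing index: $smi.atdb"
            missing=$((missing + 1))
        fi
    done
    echo "$missing file(s) missing an .atdb index"
    return "$missing"
}

check_atdb_indexes "${1:-.}"
```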

Setting up Round Table

"Round Table allows you to serve and split chemical searches across multiple host machines. The implementation provides a lightweight proxy that forwards requests to other Arthor host servers that do the actual search. Communication is done using the existing Web APIs."

Setting up Host Server

  1. Ssh to nfs-soft2 and become root
  2. Open a port in the firewall
    firewall-cmd --permanent --add-port=<port_number>/tcp 
    firewall-cmd --reload
  3. Go to Arthor Config Directory
    cd /export/soft2/arthor_configs/arthor-<version>
  4. Create a Round Table head configuration file. Here is an example:
    [RoundTable]
    RemoteClient=http://10.20.0.41:8008
    RemoteClient=http://10.20.5.19:8008
    Resolver=https://sw.docking.org/util/smi2mol?smi=%s
  5. Now ssh into a machine you wish to run the Round Table head on and become root
  6. Change your shell to bash if you haven't already
    bash
  7. Create a screen
    screen -S <screen_name>
  8. Prepare Arthor Config Path
    export ARTHOR_CONFIG="/nfs/soft2/arthor_configs/arthor-<version>/<round_table_head>.cfg"
  9. Launch java
    java -jar /nfs/soft2/arthor_configs/arthor-<version>/arthor-<version>-centos7/java/arthor.jar --httpPort=<port_number>
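
Like the instance config earlier, the Round Table head config can be written non-interactively. A sketch, where CFG_DIR and the file name round_table_head.cfg are example choices — point CFG_DIR at /export/soft2/arthor_configs/arthor-<version> in production.

```shell
#!/bin/sh
# Sketch: write the Round Table head config from step 4 in one step.
# CFG_DIR and "round_table_head.cfg" are examples, not fixed names.
CFG_DIR="${CFG_DIR:-$(mktemp -d)}"
cat > "$CFG_DIR/round_table_head.cfg" <<'EOF'
[RoundTable]
RemoteClient=http://10.20.0.41:8008
RemoteClient=http://10.20.5.19:8008
Resolver=https://sw.docking.org/util/smi2mol?smi=%s
EOF
echo "wrote $CFG_DIR/round_table_head.cfg"
```

Each RemoteClient line names one Arthor node the head will forward searches to.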

Active Arthor Instances

Public Arthor

  Machine (CentOS 7)   Port              Data Directory
  samekh               10.20.0.41:8000   /local2/public_arthor/
  nun                  10.20.0.40:8000   /local2/public_arthor/

Arthor Round Table Head

  Machine (CentOS 7)   Port              Data Directory
  samekh               10.20.0.41:8080   /local2/arthor_database/
  nun                  10.20.0.40:8080   /local2/arthor_database/

Arthor Round Table Nodes

  Machine (CentOS 7)   Port              Data Directory
  samekh               10.20.0.41:8008   /local2/arthor_database/
  nun                  10.20.0.40:8008   /local2/arthor_database/
  nfs-exd              10.20.1.113:8008  /export/exd/arthor_database/
  nfs-exh              10.20.5.19:8008   /export/exh/arthor_database/

Customizing Arthor Frontend to our needs

The frontend Arthor code is located at /nfs/exc/arthor_configs/*, where * corresponds to the currently running version.

Add Arthor Download Options

For Arthor 3.4:

1. vim .extract/webapps/ROOT/WEB-INF/static/index.html

2. search: arthor_tsv_link

3. In the div with class="dropdown-content", add these link options and change the numbers accordingly:

              <a id="arthor_tsv_link" href="#"> TSV-500</a>
              <a id="arthor_tsv_link_5000" href="#"> TSV-5,000</a>
              <a id="arthor_tsv_link_50000" href="#"> TSV-50,000</a>
              <a id="arthor_tsv_link_100000" href="#"> TSV-100,000</a>
              <a id="arthor_tsv_link_max" href="#"> TSV-max</a>
              <a id="arthor_csv_link" href="#"> CSV-500</a>
              <a id="arthor_csv_link_5000" href="#"> CSV-5,000</a>
              <a id="arthor_csv_link_50000" href="#"> CSV-50,000</a>
              <a id="arthor_csv_link_100000" href="#"> CSV-100,000</a>
              <a id="arthor_csv_link_max" href="#"> CSV-max</a>
              <a id="arthor_sdf_link" href="#"> SDF-500</a>
              <a id="arthor_sdf_link_5000" href="#"> SDF-5,000</a>
              <a id="arthor_sdf_link_50000" href="#"> SDF-50,000</a>
              <a id="arthor_sdf_link_100000" href="#"> SDF-100,000</a>
              <a id="arthor_sdf_link_max" href="#"> SDF-max</a>

4. then vim .extract/webapps/ROOT/WEB-INF/static/js/index.js

5. search: function $(t){

6. in the function $(t), add these lines:

if(document.getElementById("arthor_tsv_link")) {
       var e=i.a.param({query:s.b.query,type:s.b.type,draw:0,start:0,length:t,flags:s.b.flags}),n=s.b.url+"/dt/"+E(s.b.table)+"/search";i()("#arthor_sdf_link").attr("href",n+".sdf?"+e),i()("#arthor_tsv_link").attr("href",n+".tsv?"+e),i()("#arthor_csv_link").attr("href",n+".csv?"+e)
}
if (document.getElementById("arthor_tsv_link_5000")) {
       var e=i.a.param({query:s.b.query,type:s.b.type,draw:0,start:0,length:5000,flags:s.b.flags}),n=s.b.url+"/dt/"+E(s.b.table)+"/search";i()("#arthor_sdf_link_5000").attr("href",n+".sdf?"+e),i()("#arthor_tsv_link_5000").attr("href",n+".tsv?"+e),i()("#arthor_csv_link_5000").attr("href",n+".csv?"+e)
}
if (document.getElementById("arthor_tsv_link_50000")) {
       var e=i.a.param({query:s.b.query,type:s.b.type,draw:0,start:0,length:50000,flags:s.b.flags}),n=s.b.url+"/dt/"+E(s.b.table)+"/search";i()("#arthor_sdf_link_50000").attr("href",n+".sdf?"+e),i()("#arthor_tsv_link_50000").attr("href",n+".tsv?"+e),i()("#arthor_csv_link_50000").attr("href",n+".csv?"+e)
}
if (document.getElementById("arthor_tsv_link_100000")) {
       var e=i.a.param({query:s.b.query,type:s.b.type,draw:0,start:0,length:100000,flags:s.b.flags}),n=s.b.url+"/dt/"+E(s.b.table)+"/search";i()("#arthor_sdf_link_100000").attr("href",n+".sdf?"+e),i()("#arthor_tsv_link_100000").attr("href",n+".tsv?"+e),i()("#arthor_csv_link_100000").attr("href",n+".csv?"+e)
}
if (document.getElementById("arthor_tsv_link_max")) {
       var e=i.a.param({query:s.b.query,type:s.b.type,draw:0,start:0,length:1000000000,flags:s.b.flags}),n=s.b.url+"/dt/"+E(s.b.table)+"/search";i()("#arthor_sdf_link_max").attr("href",n+".sdf?"+e),i()("#arthor_tsv_link_max").attr("href",n+".tsv?"+e),i()("#arthor_csv_link_max").attr("href",n+".csv?"+e)
}

Take out Similarity Button

vim .extract/webapps/ROOT/WEB-INF/static/index.html
search: Similarity

Comment out this line:
    <li value="Similarity" onclick="setSearchType(this)" class="first">Similarity
Then add "first" to Substructure's class.

Hyperlink to zinc20

vim .extract/webapps/ROOT/WEB-INF/static/js/index.js
search: table_name
* Find this line: "<b>" + d + "</b>"
* Replace it with: "<b><a target='_blank' href='https://zinc20.docking.org/substances/"+d+"'>" + d + "</a>"

Make Input Box Work

At the end of the Arthor config file add this:
   Resolver=https://sw.docking.org/util/smi2mol?smi=%s
To copy smiles in the input box:
   vim .extract/webapps/ROOT/WEB-INF/static/js/index.js
   search this: "var e=t.src.smiles()"
   add this after the semi-colon
       document.getElementById("ar_text_input").value = e;

Add Ask Questions

  1. Go to frontend directory
    vim /export/soft2/arthor_configs/arthor-3.4.2/.extract/webapps/ROOT/WEB-INF/static/index.html
  2. Search opt-box-border
  3. Create another div between the two big enclosing divs
  4. Add this
    <div class="opt-box-border">
            <label>Ask Questions</label>
            Email us: jjiteam@googlegroups.com
    </div>

Restarting Arthor Instance(s) Instructions

  1. Ssh to the machine running the respective Arthor instance and become root
  2. Execute run_arthors_on_reboot.sh to restart all instances on the machine
    bash /root/run_arthors_on_reboot.sh
  3. Execute start_arthor_script.sh to restart a specific Arthor instance. It will show you options to choose from.
    bash /nfs/soft2/arthor_configs/start_arthor_script.sh