Arthor Documentation for Future Developer
Written by Jennifer Young on December 16, 2019. Last edited January 30, 2020
Install and Set Up on Tomcat
Arthor currently runs on n-1-136, which runs CentOS Linux release 7.7.1908 (Core). You can check the version of CentOS with the following command:
cat /etc/centos-release
Check your current version of Java with the following command:
java -version
On n-1-136 we are running openjdk version "1.8.0_222", OpenJDK Runtime Environment (build 1.8.0_222-b10), and OpenJDK 64-Bit Server VM (build 25.222-b10, mixed mode). If Java is not installed, install it using yum.
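For example (java-1.8.0-openjdk is the OpenJDK 1.8 package name on CentOS 7; run as root):
yum install java-1.8.0-openjdk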
See this wiki page for more detailed information about installing Tomcat on our cluster:
http://wiki.docking.org/index.php/Tomcat_Installation
Open port for Arthor
In order for Arthor to be usable in the browser, the port you wish to run it on must be opened in the firewall. The steps below follow this guide: https://www.thegeekdiary.com/how-to-open-a-ports-in-centos-rhel-7/
Step 1: Check Port Status
Check that the port is not already open and that Apache is not already listening on it.
netstat -na | grep <port number you are checking>
lsof -i -P | grep http
Step 2: Check Port Status in IP Tables
iptables-save | grep <port number you are checking>
I skipped Step 3 from the guide because the /etc/services file contains a lot of existing entries and I didn't want to edit it and break something.
Step 4: Open Firewall Ports
I did not include the --zone=public option because the stand-alone servers are usually used for private instances of Arthor and SmallWorld. Run as root.
firewall-cmd --add-port=<port number you are adding>/tcp --permanent
You need to reload the firewall after a change is made.
firewall-cmd --reload
Step 5: Check that port is working
To check that the port is active, run:
iptables -nL
You should see something along the lines of:
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:<port number you're adding> ctstate NEW,UNTRACKED
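Putting the steps together, here is a minimal sketch (assuming port 8080 purely as an example; run as root):
PORT=8080
netstat -na | grep ":$PORT"                        # Step 1: nothing should already be listening
iptables-save | grep "$PORT"                       # Step 2: no existing iptables rule
firewall-cmd --add-port=${PORT}/tcp --permanent    # Step 4: open the port
firewall-cmd --reload                              # apply the change
iptables -nL | grep "$PORT"                        # Step 5: the ACCEPT rule should now appear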
How to run a standalone Arthor instance
Step 1: Use or start a bash shell
You can check your default shell using
echo $SHELL
If your default shell is csh, use
bash
to start a new bash shell in the current terminal window. Note that echo $SHELL will show you your default shell regardless of the current shell.
Step 2: Set your environment variables
export ARTHOR_DIR=/opt/nextmove/arthor/arthor-3.1-centos7
export PATH=$ARTHOR_DIR/bin/:$PATH
Make sure the ARTHOR_DIR variable is set to the directory for the latest version of Arthor, or whichever version you would like to test. The PATH environment variable is needed if you wish to use the Arthor tools from the command line.
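With PATH set, you can confirm the command-line tools are visible (atdbgrep is the substructure search tool mentioned below):
which smi2atdb atdbgrep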
Step 3: Run the arthor-server.jar
java -jar /opt/nextmove/arthor/arthor-3.0-rt-beta-linux/java/arthor-server.jar --httpPort <your httpPort>
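To sanity-check that the server came up, you can request the root page (substitute the port you chose):
curl -I http://localhost:<your httpPort>/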
Setting environment variables for the Tomcat Server
Set the environment variables in the setenv.sh file. Note: Be sure to edit the file in the directory corresponding to the latest version of Tomcat. As of December 2019, we are running 9.0.27 on n-1-136.
vim /opt/tomcat/apache-tomcat-9.0.27/bin/setenv.sh
Add the line below to the setenv.sh file above, or substitute the path to wherever you currently store the arthor.cfg file:
export ARTHOR_CONFIG=/usr/local/tomcat/arthor.cfg
Here is an example of the arthor.cfg file:
# Arthor generated config file
BINDIR=/opt/nextmove/arthor/arthor-2.1.2-centos7/bin
DATADIR=/usr/local/tomcat/arthor_data
STAGEDIR=/usr/local/arthor_data/stage
NTHREADS=64
NODEAFFINITY=true
SearchAsYouDraw=true
AutomaticIndex=true
DEPICTION=./depict/bot/svg?w=%w&h=%h&svgunits=px&smi=%s&zoom=0.8&sma=%m&smalim=1
RESOLVER=
Important parts of the arthor.cfg file
BINDIR is the location of the Arthor command line binaries. These are used to generate the Arthor index files and to perform searches directly on n-1-136, for example using atdbgrep for substructure search.
DATADIR is the directory where the Arthor data files live; the index files are created in and loaded from this location.
STAGEDIR is the location where the index files are built before being moved into the DATADIR.
NTHREADS is the number of threads to use for both ATDB and ATFP searches.
Set AutomaticIndex to false if you don't want new SMILES files added to the data directory to be indexed automatically.
Background
Before working with Arthor, it is recommended that you familiarize yourself with the Arthor documentation. Some useful pages to look at include 3-5, 22-25, and 33-39. Of course, reading everything would be best!
Checking Disk Usage
Before building Arthor indexes, it's always a good idea to check how much of the disk is in use. Be cautious with how much space you have left, and check again while building indexes to make sure you still have enough. To check, run the following command:
df -h /<directory with disk>
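For example, to check the filesystem holding the Tomcat Arthor data directory from the configuration above:
df -h /usr/local/tomcat/arthor_data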
Building Large Databases
At the moment, we are building databases of about 500M molecules each by merging SMILES files. There are multiple ways to create large databases; one is to merge files that share the same H?? prefix, stopping once the database exceeds 500M molecules (or whatever upper bound you want to use). Here is some Python code that performs this merging. Essentially, the program takes all of the .smi files within an input directory, sorts them lexicographically, and merges the .smi files together in order until the size exceeds 500M molecules.
Feel free to modify it if you think a better method exists.
import subprocess
from os import listdir
from os.path import isfile, join

mypath = "<Path to directory holding .smi files>"
onlyfiles = [f for f in listdir(mypath) if isfile(join(mypath, f))]
onlyfiles.sort()

cur_mols = 0
lower_bound = 500000000
upper_bound = 600000000
files_to_merge = []

# Concatenate a batch of .smi files; the merged file is named after the
# first and last files in the batch, e.g. H04.smi + H07.smi -> H04_H07.smi.
def merge_files(f_t_m):
    arr = f_t_m[0].split(".")
    arr2 = f_t_m[len(f_t_m) - 1].split(".")
    file_name_merge = arr[0] + "_" + arr2[0] + ".smi"
    print("File being created: " + file_name_merge)
    for file in f_t_m:
        process = subprocess.Popen("cat " + join(mypath, file) + " >> " + file_name_merge, shell=True)
        process.wait()

for file in onlyfiles:
    arr = file.split(".")
    if arr[len(arr) - 1] == "smi":
        print("Working with " + file)
        mol = sum(1 for line in open(join(mypath, file)))  # one molecule per line
        print(file, mol, cur_mols)
        if cur_mols + mol > lower_bound:
            if cur_mols + mol < upper_bound:
                # This file lands the batch between the bounds: merge and reset.
                files_to_merge.append(file)
                merge_files(files_to_merge)
                cur_mols = 0
                files_to_merge.clear()
            else:
                # This file would overshoot the upper bound: merge the current
                # batch, then merge this file on its own.
                merge_files(files_to_merge)
                files_to_merge.clear()
                files_to_merge.append(file)
                merge_files(files_to_merge)
                cur_mols = 0
                files_to_merge.clear()
        else:
            cur_mols += mol
            files_to_merge.append(file)

# Merge any leftover files that never reached the lower bound.
if len(files_to_merge) != 0:
    merge_files(files_to_merge)
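After merging, you can spot-check a merged file's molecule count, since each line holds one SMILES (the file name here is hypothetical):
wc -l H04_H07.smi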
Building Arthor Indexes
Once you've merged the .smi files together, it's time to start building the databases themselves. To do this, we use the command:
smi2atdb -j 0 -p <input .smi file> <output .atdb file>
The "-j 0" flag enables parallel generation, using all available processors to generate the .atdb file. The "-p" flag stores the offset position in the ATDB file. Since we're building indexes for the Web Application, you must use the "-p" flag when building indexes. Please note that the .atdb file should have the same base name as the .smi file; that way, the Web Application knows to use these files together and correctly displays the required images. Refer to pages 33-34 in the Arthor documentation for more information.
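For example, with a hypothetical file Enamine_REAL.smi, the matching index (and the optional fingerprint index, as in the script below) would be built as:
smi2atdb -j 0 -p Enamine_REAL.smi Enamine_REAL.atdb
atdb2fp -j 0 Enamine_REAL.atdb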
If there are too many large .smi files and you do not want to manually build each .atdb file, you can use this Python script, which takes all of the .smi files in a given directory and converts them to .atdb files. Make sure to modify mypath to the directory containing the .smi files. You can change the variable "create_fp" to False if you don't want to create .atdb.fp files (refer to page 9 in the Arthor documentation).
import subprocess
from os import listdir
from os.path import isfile, join

mypath = "<Path containing the .smi files>"
onlyfiles = [f for f in listdir(mypath) if isfile(join(mypath, f))]
create_fp = True

for file in onlyfiles:
    arr = file.split(".")
    if arr[len(arr) - 1] == "smi":
        # Build the substructure index (.atdb) from the SMILES file.
        process = subprocess.Popen("/nfs/ex9/work/xyz/psql/arthor-3.3-centos7/bin/smi2atdb -j 0 -p {0} {1}.atdb".format(join(mypath, file), arr[0]), shell=True)
        process.wait()
        print("SUCCESS! {0}.atdb file was created!".format(arr[0]))
        if create_fp:
            # Build the fingerprint index (.atdb.fp) for similarity search.
            process = subprocess.Popen("/nfs/ex9/work/xyz/psql/arthor-3.3-centos7/bin/atdb2fp -j 0 {0}.atdb".format(arr[0]), shell=True)
            process.wait()
            print("SUCCESS! {0}.atdb.fp file was created!".format(arr[0]))
Uploading Indexes to the Web Application
One can upload indexes to the Web Application by changing the "DATADIR" variable in the arthor.cfg file to the directory holding the .atdb files. This is already set up on n-1-136 and n-5-34.
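For example, a hypothetical arthor.cfg line pointing at one of the data directories listed in the Round Table table below:
DATADIR=/local2/auto_atdb/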
Further Arthor Optimizations
The following edits can be made to arthor.cfg to optimize substructure search queries; a combined example appears after the descriptions below. More information can be found on pages 6-8 of the Arthor Documentation file.
NodeAffinity A NUMA-optimized flag that pins processing to the specific CPU sets closest to where the data is located in memory. There is a small start-up cost, so it is most useful for long-running services (see Non-Uniform Memory Access (NUMA)).
AsyncHitCountAllowed=true|false After fetching a page from a substructure or formula search, the server will spin off a background process to count the total number of hits. This can be resource intensive for large databases, may not be desirable for servers under heavy load, and may not even be needed.
AsyncHitCountMax=# The upper bound for the number of hits to retrieve in background searches. If very generic queries are issued (e.g. benzene or methane), hundreds of millions of hits may be counted. Setting this value to anything other than zero (e.g. 10,000) will stop the background search if it exceeds this limit. Note that some pathological queries may find very few hits but still end up looking at everything.
MaxConcurrentSearches=# Controls the maximum number of searches that can be run concurrently by setting the database pool size. The searches may be on the same or different databases. If a search comes in and the pool is full, it will have to wait for another search to finish, which increases the request time.
Typically, if each search is using all the processing cores on a machine, then additional searches will run at 1/Nth the speed. If the request time is substantially larger than the search time, the request had to wait for resources to become available. When switching between a large number of databases it can be useful to have a larger pool size; the only trade-off is keeping file pointers open. Default: 6
Binary Fingerprint Folding Arthor uses binary circular fingerprints (ECFP4/radius=2) for similarity. When creating an ATFP index you can specify how large to make your fingerprints. Circular fingerprints are sparser than path-based fingerprints (e.g. Daylight) and so can be folded smaller without too much degradation in performance. Folding can significantly reduce the footprint size of a database and improve search speeds. A 256-bit fingerprint takes up 1/4 of the space of a 1024-bit one and can therefore be traversed 4x faster.
This is more important for very large databases with billions of compounds, in such instances a minor drop in precision is likely tolerable as ultimately all that happens is some hits may swap places in the hit list.
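As a sketch, the options above might look like this together in arthor.cfg (values are illustrative, not recommendations; defaults are noted above):
NodeAffinity=true
AsyncHitCountAllowed=true
AsyncHitCountMax=10000
MaxConcurrentSearches=6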
Virtual Memory
In addition to modifying the arthor.cfg file, virtual memory can also be tuned to make queries faster. More information can be found on pages 10-16 of the Arthor Documentation.
Setting up Round Table
This is a new feature in Arthor 3.0 and is currently beta (January 2020). See Section 2.4 in the manual. As explained there, "Round Table allows you to serve and split chemical searches across multiple host machines. The implementation provides a lightweight proxy that forwards requests to other Arthor host servers that do the actual search. Communication is done using the existing Web APIs."
Since Arthor requires CentOS 7, as of January 2020 we have 6 servers that are capable of running Arthor with Round Table.
Setting up Host Server
If we want to add machines to the Round Table, for example 'nun' and 'samekh', we need to edit their arthor.cfg files so that when our local machine forwards requests, these secondary servers know to perform the searches they are given.
$ cat arthor.cfg
MaxThreadsPerSearch=4
AutomaticIndex=false
DATADIR=<Directory where smiles are located>
We then run the jar server, on any available port, on each host machine that contains data.
java -jar /nfs/ex9/work/xyz/psql/arthor-3.3-centos7/java/arthor.jar --httpPort <port>
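As a convenience, here is a minimal sketch for starting the server on every host at once (it assumes passwordless ssh, the same install path on each machine, and port 8000 purely as an example):
for host in nun samekh; do
    ssh "$host" "nohup java -jar /nfs/ex9/work/xyz/psql/arthor-3.3-centos7/java/arthor.jar --httpPort 8000 > /tmp/arthor.log 2>&1 &"
done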
For our local machine, the arthor.cfg file will look different.
$ cat arthor.cfg
[RoundTable]
RemoteClient=http://nun:<port number where jar server is running>/
RemoteClient=http://samekh:<port number where jar server is running>/
Please refer to Section 2 in the RoundTable Documentation file (pages 6-8) for more useful information on configuration.
Then run the following command on n-1-136:
java -jar /nfs/ex9/work/xyz/psql/arthor-3.3-centos7/java/arthor.jar --httpPort <port>
Arthor Round Table Information
| Database Cluster | CentOS 7 Machine | Private IP | Arthor Install Location | Round Table Data Directory | Active |
|---|---|---|---|---|---|
| Enamine_REAL_Q2-2020-13.5B | n-1-136 (Round Table Server) | 10.20.10.136 | /opt/nextmove/arthor/arthor-3.3-centos7/ | /zinc2/auto_atdb | active |
| | n-1-16 | 10.20.1.16 | /opt/nextmove/arthor/arthor-3.3-centos7/ | /local2/auto_atdb/ | active |
| | n-1-17 | 10.20.1.17 | /opt/nextmove/arthor/arthor-3.3-centos7/ | /local2/auto_atdb/ | active |
| | n-1-20 | 10.20.1.20 | /opt/nextmove/arthor/arthor-3.3-centos7/ | /local2/auto_atdb/ | active |
| | n-5-34 | 10.20.5.34 | /opt/nextmove/arthor/arthor-3.3-centos7/ | /local2/auto_atdb/ | active |
| | n-5-35 | 10.20.5.35 | /opt/nextmove/arthor/arthor-3.3-centos7/ | /local2/auto_atdb/ | active |
| | shin | 10.20.0.1 | /opt/nextmove/arthor/arthor-3.1-centos7/ | /export/db/arthor | not active |
| | zayin | 10.20.0.2 | /opt/nextmove/arthor/arthor-3.0-rt-beta-linux | /export/exa/work/jyoung/arthor_round_table_zayin | not active |
| | qof | 10.20.9.29 | /opt/nextmove/arthor/arthor-3.3-centos7/ | /export/ex9/work/btingle/auto_atdb | active |
| | lamed | 10.20.9.15 | /opt/nextmove/arthor/arthor-3.3-centos7/ | /export/ex6/work/btingle/auto_atdb | |