<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>http://wiki.docking.org/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Jgutie11</id>
	<title>DISI - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="http://wiki.docking.org/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Jgutie11"/>
	<link rel="alternate" type="text/html" href="http://wiki.docking.org/index.php?title=Special:Contributions/Jgutie11"/>
	<updated>2026-04-08T22:41:32Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.39.1</generator>
	<entry>
		<id>http://wiki.docking.org/index.php?title=Main_Page&amp;diff=15113</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="http://wiki.docking.org/index.php?title=Main_Page&amp;diff=15113"/>
		<updated>2023-01-25T00:21:24Z</updated>

		<summary type="html">&lt;p&gt;Jgutie11: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;div id=&amp;quot;mainpage&amp;quot;&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;!--&lt;br /&gt;
--&amp;gt;__NOTOC__&lt;br /&gt;
Welcome to [[DISI:About | DISI]] (pronounced /dizzy/, short for DISI Is Still Incomplete), a wiki focusing on computational pharmacology and ligand discovery tools, hosted by docking.org on behalf of the [[Irwin Lab]] and [[Shoichet Lab]].  This wiki aims to serve several [[:Category:Roles | constituencies]] and is a work in progress.  Please [[Contribute |work with us and help us improve]] it.&lt;br /&gt;
&lt;br /&gt;
== WHO are you? ==&lt;br /&gt;
This site contains information for lab members and for everyone using our public tools:&lt;br /&gt;
* [[Welcome group members | Lab members with ssh]]&lt;br /&gt;
* [[Welcome web user | Non-members without ssh]]&lt;br /&gt;
&lt;br /&gt;
== WHY are you here? ==&lt;br /&gt;
* [[:Category:Docking | Docking ]]&lt;br /&gt;
* [[:Category:Systems pharmacology | Systems pharmacology]]&lt;br /&gt;
* [[:Category:Aggregation | Colloidal aggregation of small molecules]]&lt;br /&gt;
* [[:Category:Cheminformatics | Chemical informatics]] &lt;br /&gt;
* [[DOCKovalent_3.7]] - Covalent docking&lt;br /&gt;
* more [[:Category:Topic | topics]]&lt;br /&gt;
&lt;br /&gt;
== Do you have a question? ==&lt;br /&gt;
We are trying to organize these pages:&lt;br /&gt;
&lt;br /&gt;
* [[:Category:Manual | WHAT ]] - manuals, such as they are.  &lt;br /&gt;
* [[:Category:Tutorials | HOW ]] - Tutorials, such as they are.&lt;br /&gt;
* [[:Category:Theory | WHY ]] - still working on turning our concept into an idea.&lt;br /&gt;
* [[:Category:Article_type | Other ]] - Everything else&lt;br /&gt;
&lt;br /&gt;
== Still haven&#039;t found what you&#039;re looking for? ==&lt;br /&gt;
Try the search bar at the top right to see if that works...&lt;br /&gt;
&lt;br /&gt;
[[Category:Info]]&lt;br /&gt;
[[Category:Organization]]&lt;/div&gt;</summary>
		<author><name>Jgutie11</name></author>
	</entry>
	<entry>
		<id>http://wiki.docking.org/index.php?title=File:Dash_1.png&amp;diff=13557</id>
		<title>File:Dash 1.png</title>
		<link rel="alternate" type="text/html" href="http://wiki.docking.org/index.php?title=File:Dash_1.png&amp;diff=13557"/>
		<updated>2021-05-14T00:59:52Z</updated>

		<summary type="html">&lt;p&gt;Jgutie11: test 3&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Summary ==&lt;br /&gt;
test 3&lt;/div&gt;</summary>
		<author><name>Jgutie11</name></author>
	</entry>
	<entry>
		<id>http://wiki.docking.org/index.php?title=Arthor_Documentation_for_Future_Developer&amp;diff=13524</id>
		<title>Arthor Documentation for Future Developer</title>
		<link rel="alternate" type="text/html" href="http://wiki.docking.org/index.php?title=Arthor_Documentation_for_Future_Developer&amp;diff=13524"/>
		<updated>2021-05-04T17:12:44Z</updated>

		<summary type="html">&lt;p&gt;Jgutie11: /* Arthor Local 8081 (Datasets all local to samekh/nun) */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Written by Jennifer Young on December 16, 2019. Last edited January 05, 2021&lt;br /&gt;
&lt;br /&gt;
==Install and Set Up on Tomcat (Method 1)==&lt;br /&gt;
Arthor runs on n-1-136, which runs CentOS Linux release 7.7.1908 (Core).  You can check the CentOS version with the following command:&lt;br /&gt;
     cat /etc/centos-release&lt;br /&gt;
&lt;br /&gt;
Check your current version of Java with the following command:&lt;br /&gt;
    java -version&lt;br /&gt;
&lt;br /&gt;
On n-1-136 we are running openjdk version &amp;quot;1.8.0_222&amp;quot;, OpenJDK Runtime Environment (build 1.8.0_222-b10), and OpenJDK 64-Bit Server VM (build 25.222-b10, mixed mode).&lt;br /&gt;
If Java is not installed, install it using yum.&lt;br /&gt;
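A minimal install command (a sketch; assumes the stock CentOS 7 OpenJDK 8 package) would be:&lt;br /&gt;
    sudo yum install java-1.8.0-openjdk&lt;br /&gt;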
&lt;br /&gt;
==See this wiki page for more detailed information about installing Tomcat on our cluster==&lt;br /&gt;
http://wiki.docking.org/index.php/Tomcat_Installation&lt;br /&gt;
&lt;br /&gt;
==Open port for Arthor==&lt;br /&gt;
In order for Arthor to be usable in the browser, the port you wish to run it on must be opened.&lt;br /&gt;
https://www.thegeekdiary.com/how-to-open-a-ports-in-centos-rhel-7/&lt;br /&gt;
&lt;br /&gt;
===Step 1: Check Port Status===&lt;br /&gt;
Check that the port is not already open and that Apache is not already listening on it:&lt;br /&gt;
    netstat -na | grep &amp;lt;port number you are checking&amp;gt;&lt;br /&gt;
&lt;br /&gt;
    lsof -i -P |grep http&lt;br /&gt;
&lt;br /&gt;
===Step 2: Check Port Status in IP Tables===&lt;br /&gt;
    iptables-save | grep &amp;lt;port number you are checking&amp;gt;&lt;br /&gt;
&lt;br /&gt;
I skipped Step 3 from the guide because there was a lot of information in the /etc/services file and I didn&#039;t want to edit it and break something.&lt;br /&gt;
&lt;br /&gt;
===Step 4: Open Firewall Ports===&lt;br /&gt;
I did not include the --zone=public option because the stand-alone servers are usually used for private instances of Arthor and SmallWorld.&lt;br /&gt;
Run the following as root:&lt;br /&gt;
    firewall-cmd --add-port=&amp;lt;port number you are adding&amp;gt;/tcp --permanent&lt;br /&gt;
&lt;br /&gt;
You need to reload the firewall after a change is made.&lt;br /&gt;
    firewall-cmd --reload&lt;br /&gt;
&lt;br /&gt;
===Step 5: Check that port is working===&lt;br /&gt;
To check that the port is active, run:&lt;br /&gt;
    iptables -nL&lt;br /&gt;
&lt;br /&gt;
You should see something along the lines of: &lt;br /&gt;
    ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:&amp;lt;port number you&#039;re adding&amp;gt; ctstate NEW,UNTRACKED&lt;br /&gt;
&lt;br /&gt;
==How to run a standalone Arthor instance==&lt;br /&gt;
&lt;br /&gt;
===Step 1: Use or start a bash shell===&lt;br /&gt;
You can check your default shell using:&lt;br /&gt;
    echo $SHELL&lt;br /&gt;
&lt;br /&gt;
If your default shell is csh, use &lt;br /&gt;
    bash&lt;br /&gt;
to start a new bash shell in the current terminal window.  Note that echo $SHELL will show you your default shell regardless of the current shell.&lt;br /&gt;
&lt;br /&gt;
===Step 2: Set your environment variables===&lt;br /&gt;
    export ARTHOR_DIR=/opt/nextmove/arthor/arthor-3.3-centos7&lt;br /&gt;
    export PATH=$ARTHOR_DIR/bin/:$PATH&lt;br /&gt;
&lt;br /&gt;
Make sure the ARTHOR_DIR variable is set to the directory of the latest version of Arthor, or whichever version you would like to test.&lt;br /&gt;
The PATH environment variable is needed if you wish to use the Arthor tools from the command line.&lt;br /&gt;
&lt;br /&gt;
===Step 3: Run arthor.jar===&lt;br /&gt;
    java -jar /opt/nextmove/arthor/arthor-3.3-centos7/java/arthor.jar --httpPort &amp;lt;your httpPort&amp;gt;&lt;br /&gt;
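&lt;br /&gt;
For example, to serve on port 8081 (one of the ports used in the tables below):&lt;br /&gt;
    java -jar /opt/nextmove/arthor/arthor-3.3-centos7/java/arthor.jar --httpPort 8081&lt;br /&gt;
then, from another terminal, a quick check that the server answers:&lt;br /&gt;
    curl -I http://localhost:8081/&lt;br /&gt;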
&lt;br /&gt;
==Setting environment variables for an Arthor Server==&lt;br /&gt;
Set the environment variables in the setenv.sh file.  Note: be sure to edit the file in the directory corresponding to the latest version of Tomcat.  As of December 2019, we are running 9.0.27 on n-1-136.&lt;br /&gt;
&lt;br /&gt;
   vim  /opt/tomcat/apache-tomcat-9.0.27/bin/setenv.sh&lt;br /&gt;
&lt;br /&gt;
Add the line below to the setenv.sh file above, or substitute the path to wherever you currently store the arthor.cfg file:&lt;br /&gt;
   export ARTHOR_CONFIG=/usr/local/tomcat/arthor.cfg&lt;br /&gt;
&lt;br /&gt;
Here is an example of the arthor.cfg file:&lt;br /&gt;
   BinDir=/opt/nextmove/arthor/arthor-3.3-centos7/bin&lt;br /&gt;
   DataDir=/local2/arthor_local_8081/&lt;br /&gt;
   MaxConcurrentSearches=6&lt;br /&gt;
   MaxThreadsPerSearch=8&lt;br /&gt;
   AutomaticIndex=false&lt;br /&gt;
   AsyncHitCountMax=1000000&lt;br /&gt;
   Resolver=https://sw.docking.org/util/smi2mol?smi=%s&lt;br /&gt;
&lt;br /&gt;
=== Configuration Details ===&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;BinDir&#039;&#039;&#039;: the location of the Arthor command-line binaries.  These are used to generate the Arthor index files and to perform searches directly on n-1-136, for example using atdbgrep for substructure search.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;DataDir&#039;&#039;&#039;: the directory where the Arthor data files live; index files are created in and loaded from this location.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;MaxConcurrentSearches&#039;&#039;&#039;: controls the maximum number of searches that can run concurrently by setting the database pool size. When switching between a large number of databases it can be useful to have a larger pool size; the only trade-off is keeping more file pointers open.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;MaxThreadsPerSearch&#039;&#039;&#039;: the number of threads to use for both ATDB and ATFP searches.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;AutomaticIndex&#039;&#039;&#039;: set to false if you don&#039;t want new SMILES files added to the data directory to be indexed automatically.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;AsyncHitCountMax&#039;&#039;&#039;: the upper bound for the number of hits to retrieve in background searches.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Resolver&#039;&#039;&#039;: uses the SmallWorld API so the input box can accept a SMILES string and draw the molecule automatically.&lt;br /&gt;
&lt;br /&gt;
Check the Arthor manual for more configuration options.&lt;br /&gt;
&lt;br /&gt;
==Background==&lt;br /&gt;
Before working with Arthor, it is recommended that you familiarize yourself with the Arthor documentation. Some useful pages to look at include 3-5, 22-25, and 33-39. Of course, reading everything would be best!&lt;br /&gt;
&lt;br /&gt;
==Checking Disk Space Usage==&lt;br /&gt;
Before building Arthor indexes, it&#039;s always a good idea to check how much disk space is in use. Be careful about how much space you have left, and keep checking while building indexes to make sure you still have enough. To check, run the following command:&lt;br /&gt;
&lt;br /&gt;
   df -h /&amp;lt;directory on the disk you are checking&amp;gt;&lt;br /&gt;
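&lt;br /&gt;
The output will look something like this (illustrative values; the device and mount point here are hypothetical):&lt;br /&gt;
   Filesystem      Size  Used Avail Use% Mounted on&lt;br /&gt;
   /dev/sdb1        11T  6.6T  4.0T  63% /local2&lt;br /&gt;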
&lt;br /&gt;
==Building Large Databases==&lt;br /&gt;
At the moment, we are building databases of about 500M molecules by merging SMILES files. There are multiple ways to create large databases; one is to merge files that share the same H?? prefix and stop once the database exceeds 500M molecules (or whatever upper bound you want to use). Here is some Python code that implements this merging process. Essentially, the program takes all of the .smi files within an input directory, sorts them lexicographically, and merges them together in order until the size exceeds 500M molecules.&lt;br /&gt;
   &lt;br /&gt;
Feel free to modify it if you think a better method exists.&lt;br /&gt;
   &lt;br /&gt;
   import subprocess&lt;br /&gt;
   &lt;br /&gt;
   from os import listdir&lt;br /&gt;
   from os.path import isfile, join&lt;br /&gt;
   &lt;br /&gt;
   mypath = &amp;quot;&amp;lt;Path to directory holding .smi files&amp;gt;&amp;quot;&lt;br /&gt;
   onlyfiles = sorted(f for f in listdir(mypath) if isfile(join(mypath, f)))&lt;br /&gt;
   &lt;br /&gt;
   cur_mols = 0&lt;br /&gt;
   lower_bound = 500000000   # close off a batch once it passes this many molecules&lt;br /&gt;
   upper_bound = 600000000   # never let a single database exceed this many molecules&lt;br /&gt;
   files_to_merge = []&lt;br /&gt;
   &lt;br /&gt;
   def merge_files(f_t_m):&lt;br /&gt;
      # Name the merged file after the first and last inputs, e.g. H01_H05.smi&lt;br /&gt;
      if not f_t_m:&lt;br /&gt;
         return&lt;br /&gt;
      first = f_t_m[0].split(&amp;quot;.&amp;quot;)[0]&lt;br /&gt;
      last = f_t_m[-1].split(&amp;quot;.&amp;quot;)[0]&lt;br /&gt;
      file_name_merge = first + &amp;quot;_&amp;quot; + last + &amp;quot;.smi&amp;quot;&lt;br /&gt;
      print(&amp;quot;File being created: &amp;quot; + file_name_merge)&lt;br /&gt;
      for file in f_t_m:&lt;br /&gt;
         # Append each input file onto the merged output&lt;br /&gt;
         process = subprocess.Popen(&amp;quot;cat &amp;quot; + join(mypath, file) + &amp;quot; &amp;gt;&amp;gt; &amp;quot; + file_name_merge, shell=True)&lt;br /&gt;
         process.wait()&lt;br /&gt;
   &lt;br /&gt;
   for file in onlyfiles:&lt;br /&gt;
      if file.split(&amp;quot;.&amp;quot;)[-1] == &amp;quot;smi&amp;quot;:&lt;br /&gt;
         print(&amp;quot;Working with &amp;quot; + file)&lt;br /&gt;
         with open(join(mypath, file)) as fh:&lt;br /&gt;
            mol = sum(1 for line in fh)   # one molecule per line&lt;br /&gt;
         print(file, mol, cur_mols)&lt;br /&gt;
   &lt;br /&gt;
         if cur_mols + mol &amp;gt; lower_bound:&lt;br /&gt;
            if cur_mols + mol &amp;lt; upper_bound:&lt;br /&gt;
               # This file lands the batch inside the target window: include it&lt;br /&gt;
               files_to_merge.append(file)&lt;br /&gt;
               merge_files(files_to_merge)&lt;br /&gt;
            else:&lt;br /&gt;
               # This file would overshoot: flush the batch, then merge the file alone&lt;br /&gt;
               merge_files(files_to_merge)&lt;br /&gt;
               merge_files([file])&lt;br /&gt;
            cur_mols = 0&lt;br /&gt;
            files_to_merge = []&lt;br /&gt;
         else:&lt;br /&gt;
            cur_mols += mol&lt;br /&gt;
            files_to_merge.append(file)&lt;br /&gt;
   &lt;br /&gt;
   # Merge whatever is left over&lt;br /&gt;
   if files_to_merge:&lt;br /&gt;
      merge_files(files_to_merge)&lt;br /&gt;
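&lt;br /&gt;
Counting lines in pure Python can be slow for files with hundreds of millions of lines. A possible speedup (a sketch, not what the script above does) is to shell out to wc -l instead:&lt;br /&gt;
   import subprocess&lt;br /&gt;
   from os.path import join&lt;br /&gt;
   &lt;br /&gt;
   def count_lines(path):&lt;br /&gt;
      # wc -l prints &amp;quot;&amp;lt;count&amp;gt; &amp;lt;filename&amp;gt;&amp;quot;; take the first field&lt;br /&gt;
      out = subprocess.check_output([&amp;quot;wc&amp;quot;, &amp;quot;-l&amp;quot;, path])&lt;br /&gt;
      return int(out.split()[0])&lt;br /&gt;
   &lt;br /&gt;
   mol = count_lines(join(mypath, file))   # drop-in for the sum(1 for ...) line above&lt;br /&gt;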
&lt;br /&gt;
==Building Arthor Indexes==&lt;br /&gt;
Once you&#039;ve merged the .smi files together, it&#039;s time to start building the databases themselves. To do this, we use the command:&lt;br /&gt;
&lt;br /&gt;
   smi2atdb -j 0 -p &amp;lt;The .smi file&amp;gt; &amp;lt;The .atdb&amp;gt; &lt;br /&gt;
&lt;br /&gt;
The flag &amp;quot;-j 0&amp;quot; enables parallel generation and utilizes all available processors to generate the .atdb file. The &amp;quot;-p&amp;quot; flag stores the offset positions in the ATDB file. Since we&#039;re building indexes for the Web Application, you must use the &amp;quot;-p&amp;quot; flag when building indexes. Please note that the base name of the .smi file should also be the base name of the .atdb file, so that the Web Application knows to use these files together and correctly display the required images. Refer to pages 33-34 in the Arthor documentation for more information.&lt;br /&gt;
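&lt;br /&gt;
For example, a hypothetical invocation for one of the ZINC slices listed in the tables below would be:&lt;br /&gt;
   smi2atdb -j 0 -p zinc_21Q1_H01.smi zinc_21Q1_H01.atdb&lt;br /&gt;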
&lt;br /&gt;
If there are too many large .smi files and you do not want to build each .atdb file manually, you can use this Python script, which takes all of the .smi files in the given directory and converts them to .atdb files. Make sure to set mypath to the directory containing the .smi files. You can set the variable &amp;quot;create_fp&amp;quot; to False if you don&#039;t want to create .atdb.fp files (refer to page 9 in the Arthor documentation).&lt;br /&gt;
&lt;br /&gt;
   import subprocess&lt;br /&gt;
   &lt;br /&gt;
   from os import listdir&lt;br /&gt;
   from os.path import isfile, join&lt;br /&gt;
   &lt;br /&gt;
   mypath = &amp;quot;&amp;lt;Path containing the .smi files&amp;gt;&amp;quot;&lt;br /&gt;
   onlyfiles = [f for f in listdir(mypath) if isfile(join(mypath, f))]&lt;br /&gt;
   &lt;br /&gt;
   create_fp = True   # also build .atdb.fp fingerprint files (see page 9)&lt;br /&gt;
   bindir = &amp;quot;/nfs/ex9/work/xyz/psql/arthor-3.3-centos7/bin&amp;quot;&lt;br /&gt;
   &lt;br /&gt;
   for file in onlyfiles:&lt;br /&gt;
      if file.split(&amp;quot;.&amp;quot;)[-1] == &amp;quot;smi&amp;quot;:&lt;br /&gt;
         base = file.split(&amp;quot;.&amp;quot;)[0]&lt;br /&gt;
         # -j 0: use all processors; -p: store offsets (required for the Web Application)&lt;br /&gt;
         process = subprocess.Popen(&amp;quot;{0}/smi2atdb -j 0 -p {1} {2}.atdb&amp;quot;.format(bindir, join(mypath, file), base), shell=True)&lt;br /&gt;
         process.wait()&lt;br /&gt;
         print(&amp;quot;SUCCESS! {0}.atdb file was created!&amp;quot;.format(base))&lt;br /&gt;
   &lt;br /&gt;
         if create_fp:&lt;br /&gt;
            process = subprocess.Popen(&amp;quot;{0}/atdb2fp -j 0 {1}.atdb&amp;quot;.format(bindir, base), shell=True)&lt;br /&gt;
            process.wait()&lt;br /&gt;
            print(&amp;quot;SUCCESS! {0}.atdb.fp file was created!&amp;quot;.format(base))&lt;br /&gt;
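&lt;br /&gt;
Index builds over hundreds of millions of molecules take a long time, so it is worth running the script under nohup so it survives a lost ssh session (the script name here is hypothetical):&lt;br /&gt;
   nohup python build_atdb.py &amp;gt; build_atdb.log 2&amp;gt;&amp;amp;1 &amp;amp;&lt;br /&gt;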
&lt;br /&gt;
==Uploading Indexes to the Web Application==&lt;br /&gt;
One can upload indexes to the Web Application by changing the &amp;quot;DataDir&amp;quot; variable in the arthor.cfg file to the directory holding the .atdb files. This is already set up on n-1-136 and n-5-34.&lt;br /&gt;
   &lt;br /&gt;
==Virtual Memory==&lt;br /&gt;
In addition to modifying the arthor.cfg file, virtual memory settings can also be used to make queries faster. More information can be found on pages 10-16 of the Arthor documentation.&lt;br /&gt;
&lt;br /&gt;
==Setting up Round Table==&lt;br /&gt;
This is a new feature in Arthor 3.0 and is currently beta (January 2020). See section 2.4 in the manual.&lt;br /&gt;
As explained in the manual, &amp;quot;Round Table allows you to serve and split chemical searches across multiple host machines.  The implementation provides a lightweight proxy that forwards requests to other Arthor host servers that do the actual search.  Communication is done using the existing Web APIs.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Since Arthor requires CentOS 7, as of January 2020 we have 6 servers that are capable of running Arthor with Round Table.&lt;br /&gt;
&lt;br /&gt;
===Setting up Host Server===&lt;br /&gt;
If we want to add machines to the Round Table, for example &#039;nun&#039; and &#039;samekh&#039;, we need to edit their arthor.cfg files so that when our local machine forwards requests, these secondary servers know to perform the search they are given.&lt;br /&gt;
&lt;br /&gt;
   $ cat arthor.cfg&lt;br /&gt;
   MaxThreadsPerSearch=4 &lt;br /&gt;
   AutomaticIndex=false &lt;br /&gt;
   DataDir=&amp;lt;Directory where the SMILES files are located&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We then run the jar server, on any available port, on each of these host machines containing data:&lt;br /&gt;
&lt;br /&gt;
   java -jar /nfs/ex9/work/xyz/psql/arthor-3.3-centos7/java/arthor.jar --httpPort &amp;lt;port&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For our local machine, the arthor.cfg file will look different.&lt;br /&gt;
&lt;br /&gt;
   $ cat arthor.cfg&lt;br /&gt;
   [RoundTable] &lt;br /&gt;
   RemoteClient=http://skynet:&amp;lt;port number where jar server is running&amp;gt;/ &lt;br /&gt;
   RemoteClient=http://hal:&amp;lt;port number where jar server is running&amp;gt;/&lt;br /&gt;
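&lt;br /&gt;
For example, a head configuration pointing at the two node servers from the tables below (ports as listed there; a sketch) might read:&lt;br /&gt;
   [RoundTable]&lt;br /&gt;
   RemoteClient=http://nun:8008/&lt;br /&gt;
   RemoteClient=http://samekh:8008/&lt;br /&gt;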
&lt;br /&gt;
Please refer to Section 2 in the RoundTable Documentation file (pages 6-8) for more useful information on configuration.&lt;br /&gt;
&lt;br /&gt;
Then run the following command on n-1-136:&lt;br /&gt;
 &lt;br /&gt;
   java -jar /nfs/ex9/work/xyz/psql/arthor-3.3-centos7/java/arthor.jar --httpPort &amp;lt;port&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===&#039;&#039;&#039;***Arthor configs and frontend code are located in /nfs/exc/arthor_configs/***&#039;&#039;&#039;===&lt;br /&gt;
&lt;br /&gt;
===Public Arthor===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! CentOS 7 Machine&lt;br /&gt;
! Address (IP:port)&lt;br /&gt;
! Total File Size&lt;br /&gt;
! Round Table Data Directory&lt;br /&gt;
! Active&lt;br /&gt;
|-&lt;br /&gt;
| samekh&lt;br /&gt;
| 10.20.0.41:8000&lt;br /&gt;
| 2.4TB&lt;br /&gt;
| /local2/public_arthor/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| nun&lt;br /&gt;
| 10.20.0.40:8000&lt;br /&gt;
| 2.4TB&lt;br /&gt;
| /local2/public_arthor/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| n-9-22&lt;br /&gt;
| 10.20.9.22:8000&lt;br /&gt;
| 2.4TB&lt;br /&gt;
| /export/db4/public_arthor/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Arthor Round Table Head===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! CentOS 7 Machine&lt;br /&gt;
! Address (IP:port)&lt;br /&gt;
! Database&lt;br /&gt;
! Round Table Data Directory&lt;br /&gt;
! Active&lt;br /&gt;
|-&lt;br /&gt;
| samekh&lt;br /&gt;
| 10.20.0.41:8080&lt;br /&gt;
| REAL-Space-21Q1(private)&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| nun&lt;br /&gt;
| 10.20.0.40:8080&lt;br /&gt;
| REAL_Space_21Q2(super private)&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Arthor Round Table Nodes===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! CentOS 7 Machine&lt;br /&gt;
! Address (IP:port)&lt;br /&gt;
! Database&lt;br /&gt;
! Total File Size&lt;br /&gt;
! Round Table Data Directory&lt;br /&gt;
! Active&lt;br /&gt;
|-&lt;br /&gt;
| samekh&lt;br /&gt;
| 10.20.0.41:8008&lt;br /&gt;
| REAL-Space-21Q1-M, REAL-Space-21Q1-S, carboxylates_21Q2&lt;br /&gt;
| 4.0TB&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| nun&lt;br /&gt;
| 10.20.0.40:8008&lt;br /&gt;
| 345_acids_20Q4, REAL_Space_21Q2-M, REAL_Space_21Q2-S, acids_21Q2, zinc_21Q1_H01~H25&lt;br /&gt;
| 3.1TB&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| n-1-17&lt;br /&gt;
| 10.20.1.17:8008&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| n-5-32&lt;br /&gt;
| 10.20.5.32:8008&lt;br /&gt;
| zinc_21Q1_H26~H30&lt;br /&gt;
| 6.6TB&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| n-5-33&lt;br /&gt;
| 10.20.5.33:8008&lt;br /&gt;
| acids_21Q2&lt;br /&gt;
| 9.1GB&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Arthor Local 8081 (Datasets all local to samekh/nun)===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! CentOS 7 Machine&lt;br /&gt;
! Address (IP:port)&lt;br /&gt;
! Database&lt;br /&gt;
! Total File Size&lt;br /&gt;
! Round Table Data Directory&lt;br /&gt;
! Active&lt;br /&gt;
|-&lt;br /&gt;
| samekh&lt;br /&gt;
| 10.20.0.41:8081&lt;br /&gt;
| (TBA)&lt;br /&gt;
| (TBA)&lt;br /&gt;
| /local2/arthor_local_8081/&lt;br /&gt;
| not active&lt;br /&gt;
|-&lt;br /&gt;
| nun&lt;br /&gt;
| 10.20.0.40:8081&lt;br /&gt;
| (TBA)&lt;br /&gt;
| (TBA)&lt;br /&gt;
| /local2/arthor_local_8081/&lt;br /&gt;
| not active&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Customizing Arthor Frontend to our needs ==&lt;br /&gt;
The frontend Arthor code is located at &#039;&#039;&#039;/nfs/exc/arthor_configs/*&#039;&#039;&#039;, where the &#039;&#039;&#039;*&#039;&#039;&#039; depends on the currently running version.&lt;br /&gt;
=== Add Arthor Download Options ===&lt;br /&gt;
==== For Arthor 3.4: ====&lt;br /&gt;
1. vim .extract/webapps/ROOT/WEB-INF/static/index.html&lt;br /&gt;
&lt;br /&gt;
2. search: &#039;&#039;&#039;arthor_tsv_link&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
3. in the div with class=&amp;quot;dropdown-content&amp;quot;, add these link options and change the numbers accordingly:&lt;br /&gt;
&lt;br /&gt;
               &amp;lt;a id=&amp;quot;arthor_tsv_link&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt; TSV-500&amp;lt;/a&amp;gt;&lt;br /&gt;
               &amp;lt;a id=&amp;quot;arthor_tsv_link_5000&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt; TSV-5,000&amp;lt;/a&amp;gt;&lt;br /&gt;
               &amp;lt;a id=&amp;quot;arthor_tsv_link_50000&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt; TSV-50,000&amp;lt;/a&amp;gt;&lt;br /&gt;
               &amp;lt;a id=&amp;quot;arthor_tsv_link_100000&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt; TSV-100,000&amp;lt;/a&amp;gt;&lt;br /&gt;
               &amp;lt;a id=&amp;quot;arthor_tsv_link_max&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt; TSV-max&amp;lt;/a&amp;gt;&lt;br /&gt;
               &amp;lt;a id=&amp;quot;arthor_csv_link&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt; CSV-500&amp;lt;/a&amp;gt;&lt;br /&gt;
               &amp;lt;a id=&amp;quot;arthor_csv_link_5000&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt; CSV-5,000&amp;lt;/a&amp;gt;&lt;br /&gt;
               &amp;lt;a id=&amp;quot;arthor_csv_link_50000&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt; CSV-50,000&amp;lt;/a&amp;gt;&lt;br /&gt;
               &amp;lt;a id=&amp;quot;arthor_csv_link_100000&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt; CSV-100,000&amp;lt;/a&amp;gt;&lt;br /&gt;
               &amp;lt;a id=&amp;quot;arthor_csv_link_max&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt; CSV-max&amp;lt;/a&amp;gt;&lt;br /&gt;
               &amp;lt;a id=&amp;quot;arthor_sdf_link&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt; SDF-500&amp;lt;/a&amp;gt;&lt;br /&gt;
               &amp;lt;a id=&amp;quot;arthor_sdf_link_5000&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt; SDF-5,000&amp;lt;/a&amp;gt;&lt;br /&gt;
               &amp;lt;a id=&amp;quot;arthor_sdf_link_50000&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt; SDF-50,000&amp;lt;/a&amp;gt;&lt;br /&gt;
               &amp;lt;a id=&amp;quot;arthor_sdf_link_100000&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt; SDF-100,000&amp;lt;/a&amp;gt;&lt;br /&gt;
               &amp;lt;a id=&amp;quot;arthor_sdf_link_max&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt; SDF-max&amp;lt;/a&amp;gt;&lt;br /&gt;
&lt;br /&gt;
4. then vim .extract/webapps/ROOT/WEB-INF/static/js/index.js&lt;br /&gt;
&lt;br /&gt;
5. search: &#039;&#039;&#039;function $(t){&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
6. in the function $(t), add these lines:&lt;br /&gt;
&lt;br /&gt;
 if(document.getElementById(&amp;quot;arthor_tsv_link&amp;quot;)) {&lt;br /&gt;
        var e=i.a.param({query:s.b.query,type:s.b.type,draw:0,start:0,length:t,flags:s.b.flags}),n=s.b.url+&amp;quot;/dt/&amp;quot;+E(s.b.table)+&amp;quot;/search&amp;quot;;i()(&amp;quot;#arthor_sdf_link&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.sdf?&amp;quot;+e),i()(&amp;quot;#arthor_tsv_link&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.tsv?&amp;quot;+e),i()(&amp;quot;#arthor_csv_link&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.csv?&amp;quot;+e)&lt;br /&gt;
 }&lt;br /&gt;
 if (document.getElementById(&amp;quot;arthor_tsv_link_5000&amp;quot;)) {&lt;br /&gt;
        var e=i.a.param({query:s.b.query,type:s.b.type,draw:0,start:0,length:5000,flags:s.b.flags}),n=s.b.url+&amp;quot;/dt/&amp;quot;+E(s.b.table)+&amp;quot;/search&amp;quot;;i()(&amp;quot;#arthor_sdf_link_5000&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.sdf?&amp;quot;+e),i()(&amp;quot;#arthor_tsv_link_5000&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.tsv?&amp;quot;+e),i()(&amp;quot;#arthor_csv_link_5000&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.csv?&amp;quot;+e)&lt;br /&gt;
 }&lt;br /&gt;
 if (document.getElementById(&amp;quot;arthor_tsv_link_50000&amp;quot;)) {&lt;br /&gt;
        var e=i.a.param({query:s.b.query,type:s.b.type,draw:0,start:0,length:50000,flags:s.b.flags}),n=s.b.url+&amp;quot;/dt/&amp;quot;+E(s.b.table)+&amp;quot;/search&amp;quot;;i()(&amp;quot;#arthor_sdf_link_50000&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.sdf?&amp;quot;+e),i()(&amp;quot;#arthor_tsv_link_50000&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.tsv?&amp;quot;+e),i()(&amp;quot;#arthor_csv_link_50000&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.csv?&amp;quot;+e)&lt;br /&gt;
 }&lt;br /&gt;
 if (document.getElementById(&amp;quot;arthor_tsv_link_100000&amp;quot;)) {&lt;br /&gt;
        var e=i.a.param({query:s.b.query,type:s.b.type,draw:0,start:0,length:100000,flags:s.b.flags}),n=s.b.url+&amp;quot;/dt/&amp;quot;+E(s.b.table)+&amp;quot;/search&amp;quot;;i()(&amp;quot;#arthor_sdf_link_100000&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.sdf?&amp;quot;+e),i()(&amp;quot;#arthor_tsv_link_100000&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.tsv?&amp;quot;+e),i()(&amp;quot;#arthor_csv_link_100000&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.csv?&amp;quot;+e)&lt;br /&gt;
 }&lt;br /&gt;
 if (document.getElementById(&amp;quot;arthor_tsv_link_max&amp;quot;)) {&lt;br /&gt;
        var e=i.a.param({query:s.b.query,type:s.b.type,draw:0,start:0,length:1000000000,flags:s.b.flags}),n=s.b.url+&amp;quot;/dt/&amp;quot;+E(s.b.table)+&amp;quot;/search&amp;quot;;i()(&amp;quot;#arthor_sdf_link_max&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.sdf?&amp;quot;+e),i()(&amp;quot;#arthor_tsv_link_max&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.tsv?&amp;quot;+e),i()(&amp;quot;#arthor_csv_link_max&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.csv?&amp;quot;+e)&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
=== Take out Similarity Button ===&lt;br /&gt;
 vim .extract/webapps/ROOT/WEB-INF/static/index.html&lt;br /&gt;
 search: &#039;&#039;&#039;Similarity&#039;&#039;&#039;&lt;br /&gt;
 Comment out this line: &#039;&#039;&#039;&amp;lt; li value=&amp;quot;Similarity&amp;quot; onclick=&amp;quot;setSearchType(this)&amp;quot; class=&amp;quot;first&amp;quot;&amp;gt; Similarity &amp;lt;/li &amp;gt;&#039;&#039;&#039; (spaces were added inside the tags to prevent the wiki from rendering them)&lt;br /&gt;
 Then add &amp;quot;first&amp;quot; to Substructure&#039;s class.&lt;br /&gt;
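The result would look something like this (standard HTML comment syntax; the Substructure line is a sketch that assumes it mirrors the Similarity entry):&lt;br /&gt;
    &amp;lt;!-- &amp;lt;li value=&amp;quot;Similarity&amp;quot; onclick=&amp;quot;setSearchType(this)&amp;quot; class=&amp;quot;first&amp;quot;&amp;gt;Similarity&amp;lt;/li&amp;gt; --&amp;gt;&lt;br /&gt;
    &amp;lt;li value=&amp;quot;Substructure&amp;quot; onclick=&amp;quot;setSearchType(this)&amp;quot; class=&amp;quot;first&amp;quot;&amp;gt;Substructure&amp;lt;/li&amp;gt;&lt;br /&gt;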
=== Hyperlink to zinc20 ===&lt;br /&gt;
 vim .extract/webapps/ROOT/WEB-INF/static/js/index.js&lt;br /&gt;
 search: &#039;&#039;&#039;table_name&#039;&#039;&#039;&lt;br /&gt;
 *find this line: &amp;quot;&amp;lt; b&amp;gt;&amp;quot; + d + &amp;quot;&amp;lt; /b&amp;gt;&amp;quot;&lt;br /&gt;
 *replace it with: &#039;&#039;&#039;&amp;quot;&amp;lt; b&amp;gt;&amp;lt;a target=&#039;_blank&#039; href=&#039;https://zinc20.docking.org/substances/&amp;quot;+d+&amp;quot;&#039;&amp;gt;&amp;quot; + d + &amp;quot;&amp;lt;/a&amp;gt;&amp;lt;/b &amp;gt;&amp;quot;&#039;&#039;&#039; (spaces were added inside the tags to prevent the wiki from rendering them)&lt;br /&gt;
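Without the wiki-protecting spaces, the replacement line reads:&lt;br /&gt;
    &amp;quot;&amp;lt;b&amp;gt;&amp;lt;a target=&#039;_blank&#039; href=&#039;https://zinc20.docking.org/substances/&amp;quot;+d+&amp;quot;&#039;&amp;gt;&amp;quot; + d + &amp;quot;&amp;lt;/a&amp;gt;&amp;lt;/b&amp;gt;&amp;quot;&lt;br /&gt;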
&lt;br /&gt;
=== Make Input Box Work ===&lt;br /&gt;
 At the end of the Arthor config file add this:&lt;br /&gt;
    Resolver=https://sw.docking.org/util/smi2mol?smi=%s&lt;br /&gt;
 To copy smiles in the input box:&lt;br /&gt;
    vim .extract/webapps/ROOT/WEB-INF/static/js/index.js&lt;br /&gt;
    search this: &amp;quot;var e=t.src.smiles()&amp;quot;&lt;br /&gt;
    add this after the semicolon:&lt;br /&gt;
        document.getElementById(&amp;quot;ar_text_input&amp;quot;).value = e;&lt;/div&gt;</summary>
		<author><name>Jgutie11</name></author>
	</entry>
	<entry>
		<id>http://wiki.docking.org/index.php?title=Arthor_Documentation_for_Future_Developer&amp;diff=13523</id>
		<title>Arthor Documentation for Future Developer</title>
		<link rel="alternate" type="text/html" href="http://wiki.docking.org/index.php?title=Arthor_Documentation_for_Future_Developer&amp;diff=13523"/>
		<updated>2021-05-04T17:06:12Z</updated>

		<summary type="html">&lt;p&gt;Jgutie11: /* Arthor Round Table Nodes */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Written by Jennifer Young on December 16, 2019. Last edited January 05, 2021&lt;br /&gt;
&lt;br /&gt;
==Install and Set Up on TomCat (Method 1)==&lt;br /&gt;
Arthor ran on n-1-136, which runs CentOS Linux release 7.7.1908 (Core).  You can check the version of CentOS with the following command&lt;br /&gt;
     cat /etc/centos-release&lt;br /&gt;
&lt;br /&gt;
Check your current version of Java with the following command:&lt;br /&gt;
    java -version&lt;br /&gt;
&lt;br /&gt;
On n-1-136 we are running openjdk version &amp;quot;1.8.0_222&amp;quot;, OpenJDK Runtime Environment (build 1.8.0_222-b10), and OpenJDK 64-Bit Server VM (build 25.222-b10, mixed mode)&lt;br /&gt;
If Java is not installed, install it using yum&lt;br /&gt;
&lt;br /&gt;
==See this wiki page for more detailed information about installing Tomcat on our cluster==&lt;br /&gt;
http://wiki.docking.org/index.php/Tomcat_Installation&lt;br /&gt;
&lt;br /&gt;
==Open port for Arthor==&lt;br /&gt;
In order for Arthor to be usable in the browser, the port you wish to run it on must be opened.&lt;br /&gt;
https://www.thegeekdiary.com/how-to-open-a-ports-in-centos-rhel-7/&lt;br /&gt;
&lt;br /&gt;
===Step 1: Check Port Status===&lt;br /&gt;
Check that the port is not open and that Apache is not showing that port. &lt;br /&gt;
    netstat -na | grep &amp;lt;port number you are checking&amp;gt;&lt;br /&gt;
&lt;br /&gt;
    lsof -i -P |grep http&lt;br /&gt;
&lt;br /&gt;
===Step 2: Check Port Status in IP Tables===&lt;br /&gt;
    iptables-save | grep &amp;lt;port number you are checking&amp;gt;&lt;br /&gt;
&lt;br /&gt;
I skipped Step 3 from the guide, because there was a lot of information in the /etc/services file and I didn&#039;t want to edit it and break something.&lt;br /&gt;
&lt;br /&gt;
===Step 4: Open Firewall Ports===&lt;br /&gt;
I did not include the zone=public section because the stand-alone servers are usually used for private instances of Arthor and SmallWorld.&lt;br /&gt;
Run as root.&lt;br /&gt;
    firewall-cmd --add-port=&amp;lt;port number you are adding&amp;gt;/tcp --permanent&lt;br /&gt;
&lt;br /&gt;
You need to reload the firewall after a change is made.&lt;br /&gt;
    firewall-cmd --reload&lt;br /&gt;
&lt;br /&gt;
===Step 5: Check that port is working===&lt;br /&gt;
To check that the port is active, run.&lt;br /&gt;
    iptables -nL&lt;br /&gt;
&lt;br /&gt;
You should see something along the lines of: &lt;br /&gt;
    ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:&amp;lt;port number you&#039;re adding&amp;gt; ctstate NEW,UNTRACKED&lt;br /&gt;
&lt;br /&gt;
==How to run standalone Arthor instance==&lt;br /&gt;
&lt;br /&gt;
===Step 1: Use or start a bash shell===&lt;br /&gt;
You can check your default shell using&lt;br /&gt;
    echo $SHELL&lt;br /&gt;
&lt;br /&gt;
If your default shell is csh, use &lt;br /&gt;
    bash&lt;br /&gt;
to start a new bash shell in the current terminal window.  Note that echo $SHELL will show you your default shell regardless of the current shell.&lt;br /&gt;
&lt;br /&gt;
===Step 2: Set your environment variables===&lt;br /&gt;
    export ARTHOR_DIR=/opt/nextmove/arthor/arthor-3.3-centos7&lt;br /&gt;
    export PATH=$ARTHOR_DIR/bin/:$PATH&lt;br /&gt;
&lt;br /&gt;
Make sure the ARTHOR_DIR variable is set to the directory for the latest version of Arthor or whichever version you would like to test.&lt;br /&gt;
The PATH environment variable is needed if you wish to use the Arthor tools from the command line&lt;br /&gt;
&lt;br /&gt;
===Step 3: Run the arthor-server.jar===&lt;br /&gt;
    java -jar /opt/nextmove/arthor/arthor-3.3-centos7/java/arthor.jar --httpPort &amp;lt;your httpPort&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Setting environment variables for an Arthor Server==&lt;br /&gt;
Set the environment variables in the setenv.sh file.  Note: Be sure to edit the file in the directory corresponding to the latest version of TomCat.  As of December 2019, we are running 9.0.27 on n-1-136.&lt;br /&gt;
&lt;br /&gt;
   vim  /opt/tomcat/apache-tomcat-9.0.27/bin/setenv.sh&lt;br /&gt;
&lt;br /&gt;
Add the line below to the setenv.sh file above, or substitute the path to wherever you currently store the arthor.cfg file&lt;br /&gt;
   export ARTHOR_CONFIG=/usr/local/tomcat/arthor.cfg&lt;br /&gt;
&lt;br /&gt;
Here is an example of the arthor.cfg file:&lt;br /&gt;
   BinDir=/opt/nextmove/arthor/arthor-3.3-centos7/bin&lt;br /&gt;
   DataDir=/local2/arthor_local_8081/&lt;br /&gt;
   MaxConcurrentSearches=6&lt;br /&gt;
   MaxThreadsPerSearch=8&lt;br /&gt;
   AutomaticIndex=false&lt;br /&gt;
   AsyncHitCountMax=1000000&lt;br /&gt;
   Resolver=https://sw.docking.org/util/smi2mol?smi=%s&lt;br /&gt;
&lt;br /&gt;
=== Configuration Details ===&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;BinDir&#039;&#039;&#039;: is the location of the Arthor command line binaries.  These are used to generate the Arthor index files and to perform searches directly on n-1-136.  An example of this would be using atdbgrep for substructure search. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;DataDir&#039;&#039;&#039;: This is the directory where the Arthor data files live.  Location where the index files will be created and loaded from.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;MaxConcurrentSearches&#039;&#039;&#039;: Controls the maximum number of searches that can be run concurrently by setting the database pool size. When switching between a large number of databases it can be useful to have a larger pool size, the only trade off is keeping file pointers open.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;MaxThreadsPerSearch&#039;&#039;&#039;: The number of threads to use for both ATDB and ATFP searches&lt;br /&gt;
&lt;br /&gt;
*Set &#039;&#039;&#039;AutomaticIndex&#039;&#039;&#039; to false if you don&#039;t want new smiles files added to the data directory to be indexed automatically&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;AsyncHitCountMax&#039;&#039;&#039;: The upper-bound for the number of hits to retrieve in background searches.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Resolver&#039;&#039;&#039;: Using Smallworld API, allows input box to take in a SMILE format and automatically draw on the board.&lt;br /&gt;
&lt;br /&gt;
Check Arthor manual for more configuration options.&lt;br /&gt;
&lt;br /&gt;
==Background==&lt;br /&gt;
Before working with Arthor, it is recommended that you familiarize yourself with the Arthor documentation. Some useful pages to look at include 3-5, 22-25 and 33-39. Of course, reading everything would be the best!&lt;br /&gt;
&lt;br /&gt;
==Checking Disk Space Usage==&lt;br /&gt;
Before building arthor indexes, it&#039;s always a good thing to check what percent of the memory is being used. Try to be cautious with how much memory you have left, and make sure to check while building indexes to make sure that you have enough space. To check, run the following command:&lt;br /&gt;
&lt;br /&gt;
   df -h /&amp;lt;directory with disc&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Building Large Databases==&lt;br /&gt;
At the moment, we are building databases of size 500M molecules by merging smile files. There are multiple methods of trying to create large databases, one being merging based off of the same H?? prefix and stopping once the database reaches &amp;gt; 500M molecules (or whatever upperbound you want to use). Here is some python code that simulates this merging process. Essentially the program takes all of the .smi files within an input directory, sorts them lexiographically, and begins merging these .smi files together in order until the size reaches &amp;gt; 500M molecules.&lt;br /&gt;
   &lt;br /&gt;
Feel free to modify it if you think a better method exists.&lt;br /&gt;
   &lt;br /&gt;
   import subprocess&lt;br /&gt;
   import sys&lt;br /&gt;
   import os                                                                                                                                                                           &lt;br /&gt;
   &lt;br /&gt;
   from os import listdir&lt;br /&gt;
   from os.path import isfile, join&lt;br /&gt;
   &lt;br /&gt;
   mypath = &amp;quot;&amp;lt;Path to directory holding .smi files&amp;gt;&amp;quot;&lt;br /&gt;
   onlyfiles = [f for f in listdir(mypath) if isfile(join(mypath, f))]&lt;br /&gt;
   onlyfiles.sort()&lt;br /&gt;
   &lt;br /&gt;
   create_fp = True&lt;br /&gt;
   cur_mols = 0&lt;br /&gt;
   lower_bound = 500000000&lt;br /&gt;
   upper_bound = 600000000&lt;br /&gt;
   files_to_merge = []&lt;br /&gt;
   &lt;br /&gt;
   def merge_files(f_t_m):&lt;br /&gt;
      arr = f_t_m[0].split(&amp;quot;.&amp;quot;)&lt;br /&gt;
      arr2 = f_t_m[len(f_t_m) - 1].split(&amp;quot;.&amp;quot;)&lt;br /&gt;
      file_name_merge = (arr[0] + &amp;quot;_&amp;quot; + arr2[0] + &amp;quot;.smi&amp;quot;)&lt;br /&gt;
      print (&amp;quot;File being created: &amp;quot; + file_name_merge)&lt;br /&gt;
   &lt;br /&gt;
      for file in f_t_m:&lt;br /&gt;
         tmp = file.split(&amp;quot;.&amp;quot;)&lt;br /&gt;
         process = subprocess.Popen(&amp;quot;cat &amp;quot; + join(mypath, file) + &amp;quot; &amp;gt;&amp;gt; &amp;quot; + file_name_merge, shell=True)&lt;br /&gt;
         process.wait()&lt;br /&gt;
   &lt;br /&gt;
   for file in onlyfiles:&lt;br /&gt;
      arr = file.split(&amp;quot;.&amp;quot;)&lt;br /&gt;
   &lt;br /&gt;
      if (arr[len(arr) - 1] == &amp;quot;smi&amp;quot;):&lt;br /&gt;
         print(&amp;quot;Working with &amp;quot; + file)&lt;br /&gt;
         mol = sum(1 for line in open(join(mypath, file)))&lt;br /&gt;
         print(file, mol, cur_mols)&lt;br /&gt;
   &lt;br /&gt;
         if (cur_mols + mol &amp;gt; lower_bound):&lt;br /&gt;
            if (cur_mols + mol &amp;lt; upper_bound):&lt;br /&gt;
               files_to_merge.append(file)&lt;br /&gt;
               merge_files(files_to_merge)&lt;br /&gt;
               cur_mols = 0&lt;br /&gt;
               files_to_merge.clear()&lt;br /&gt;
            else:&lt;br /&gt;
               merge_files(files_to_merge)&lt;br /&gt;
               files_to_merge.clear()&lt;br /&gt;
               files_to_merge.append(file)&lt;br /&gt;
               merge_files(files_to_merge)&lt;br /&gt;
               cur_mols = 0&lt;br /&gt;
               files_to_merge.clear()&lt;br /&gt;
         else:&lt;br /&gt;
            cur_mols += mol&lt;br /&gt;
            files_to_merge.append(file)&lt;br /&gt;
   &lt;br /&gt;
   if (len(files_to_merge) != 0):&lt;br /&gt;
      merge_files(files_to_merge)&lt;br /&gt;
&lt;br /&gt;
==Building Arthor Indexes==&lt;br /&gt;
Once you&#039;ve merged the .smi files together, it&#039;s time to start building the databases themselves. To do this we use the command &lt;br /&gt;
&lt;br /&gt;
   smi2atdb -j 0 -p &amp;lt;The .smi file&amp;gt; &amp;lt;The .atdb&amp;gt; &lt;br /&gt;
&lt;br /&gt;
The flag &amp;quot;-j 0&amp;quot; enables parallel generation and utilizes all available processors to generate the .atdb file. The &amp;quot;-p&amp;quot; flag stores the offset position in the ATDB file. Since we&#039;re building indexes for the Web Application, you must use the &amp;quot;-p&amp;quot; flag when building indexes. Please note that the name of the .smi file should also be the name of the .atdb file. That way, the Web Application knows to use these files together and correctly display the required images. Refer to pages 33-34 in the Arthor documentation for more information.&lt;br /&gt;
&lt;br /&gt;
If there are too many large .smi files and you do not want to manually build each .atdb file, you can use this python script which takes all of the .smi files in the current directory and converts them to .atdb files. Make sure to modify mypath to the directory containing the .smi files. You can change the variable &amp;quot;create_fp&amp;quot; to false if you don&#039;t want to create .atdb.fp files (refer to page 9 in the Arthor documentation).&lt;br /&gt;
&lt;br /&gt;
   import subprocess&lt;br /&gt;
   import sys&lt;br /&gt;
   import os&lt;br /&gt;
   &lt;br /&gt;
   from os import listdir&lt;br /&gt;
   from os.path import isfile, join&lt;br /&gt;
   &lt;br /&gt;
   mypath = &amp;quot;&amp;lt;Path containing the .smi files&amp;quot;&lt;br /&gt;
   onlyfiles = [f for f in listdir(mypath) if isfile(join(mypath, f))]&lt;br /&gt;
   &lt;br /&gt;
   create_fp = True&lt;br /&gt;
   &lt;br /&gt;
   for file in onlyfiles:&lt;br /&gt;
      arr = file.split(&amp;quot;.&amp;quot;)&lt;br /&gt;
   &lt;br /&gt;
      if (arr[len(arr) - 1] == &amp;quot;smi&amp;quot;):&lt;br /&gt;
         process = subprocess.Popen(&amp;quot;/nfs/ex9/work/xyz/psql/arthor-3.3-centos7/bin/smi2atdb -j 0 -p {0} {1}.atdb&amp;quot;.format(join(mypath, file), arr[0]), shell=True)&lt;br /&gt;
         process.wait()&lt;br /&gt;
   &lt;br /&gt;
         print(&amp;quot;SUCCESS! {0}.atdb file was created!&amp;quot;.format(arr[0]))&lt;br /&gt;
   &lt;br /&gt;
         if (create_fp):&lt;br /&gt;
            process = subprocess.Popen(&amp;quot;/nfs/ex9/work/xyz/psql/arthor-3.3-centos7/bin/atdb2fp -j 0 {0}.atdb&amp;quot;.format(arr[0]), shell=True)&lt;br /&gt;
            process.wait()&lt;br /&gt;
      &lt;br /&gt;
            print(&amp;quot;SUCCESS! {0}.atdb.fp file was created!&amp;quot;.format(arr[0]))&lt;br /&gt;
&lt;br /&gt;
==Uploading Indexes to the Web Application==&lt;br /&gt;
One can upload indexes to the Web Application by changing the &amp;quot;DATADIR&amp;quot; variable in the arthor.cfg file to the directory holding the .atdb files. This is already set up on n-1-136 and n-5-34.&lt;br /&gt;
   &lt;br /&gt;
==Virtual Memory==&lt;br /&gt;
In addition to modifying the arthor.cfg file, virtual memory can also be used to make queries faster. There can still be More information can be found in pages 10-16 in the Arthor Documentation.&lt;br /&gt;
&lt;br /&gt;
==Setting up Round Table==&lt;br /&gt;
This is a new feature in Arthor 3.0 and is currently beta (January 2020). See section 2.4 in the manual&lt;br /&gt;
As explained in the manual, &amp;quot;Round Table allows you to serve and split chemical searches across multiple host machines.  The implementation provides a lightweight proxy that forwards requests to other Arthor host servers that do the actual search.  Communication is done using the existing Web APIs.&lt;br /&gt;
&lt;br /&gt;
Since Arthor requires CentOS 7, as of January 2020 we have 6 servers that are capable of running Arthor with Round Table.&lt;br /&gt;
&lt;br /&gt;
===Setting up Host Server===&lt;br /&gt;
If we want to add machines to the Round Table, for example &#039;nun&#039; and &#039;samekh&#039;, we need to edit their arthor.cfg file so that when our Local Machine passes commands these secondary servers know to perform the search they are given.&lt;br /&gt;
&lt;br /&gt;
   $ cat arthor.cfg&lt;br /&gt;
   MaxThreadsPerSearch=4 &lt;br /&gt;
   AutomaticIndex=false &lt;br /&gt;
   DATADIR=&amp;lt;Directory where smiles are located&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We then run the jar server on each of these host machines containing data on any available port. &lt;br /&gt;
&lt;br /&gt;
   java -jar /nfs/ex9/work/xyz/psql/arthor-3.3-centos7/java/arthor.jar --httpPort &amp;lt;port&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For our local machine, the arthor.cfg file will look different.&lt;br /&gt;
&lt;br /&gt;
   $ cat arthor.cfg&lt;br /&gt;
   [RoundTable] &lt;br /&gt;
   RemoteClient=http://skynet:&amp;lt;port number where jar server is running&amp;gt;/ &lt;br /&gt;
   RemoteClient=http://hal:&amp;lt;port number where jar server is running&amp;gt;/&lt;br /&gt;
&lt;br /&gt;
Please refer to Section 2 in the RoundTable Documentation file (pages 6-8) for more useful information on configuration.&lt;br /&gt;
&lt;br /&gt;
Then run the following command on n-1-136:&lt;br /&gt;
 &lt;br /&gt;
   java -jar /nfs/ex9/work/xyz/psql/arthor-3.3-centos7/java/arthor.jar --httpPort &amp;lt;port&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===&#039;&#039;&#039;***Arthor configs and frontend code are located in /nfs/exc/arthor_configs/***&#039;&#039;&#039;===&lt;br /&gt;
&lt;br /&gt;
===Public Arthor===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! CentOS 7 Machine&lt;br /&gt;
! Port&lt;br /&gt;
! Total Files Size&lt;br /&gt;
! Round Table Data Directory&lt;br /&gt;
! Active&lt;br /&gt;
|-&lt;br /&gt;
| samekh&lt;br /&gt;
| 10.20.0.41:8000&lt;br /&gt;
| 2.4TB&lt;br /&gt;
| /local2/public_arthor/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| nun&lt;br /&gt;
| 10.20.0.40:8000&lt;br /&gt;
| 2.4TB&lt;br /&gt;
| /local2/public_arthor/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| n-9-22&lt;br /&gt;
| 10.20.9.22:8000&lt;br /&gt;
| 2.4TB&lt;br /&gt;
| /export/db4/public_arthor/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Arthor Round Table Head===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! CentOS 7 Machine&lt;br /&gt;
! Port&lt;br /&gt;
! Database&lt;br /&gt;
! Round Table Data Directory&lt;br /&gt;
! Active&lt;br /&gt;
|-&lt;br /&gt;
| samekh&lt;br /&gt;
| 10.20.0.41:8080&lt;br /&gt;
| REAL-Space-21Q1(private)&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| nun&lt;br /&gt;
| 10.20.0.40:8080&lt;br /&gt;
| REAL_Space_21Q2(super private)&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Arthor Round Table Nodes===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! CentOS 7 Machine&lt;br /&gt;
! Port&lt;br /&gt;
! Database&lt;br /&gt;
! Total Files Size&lt;br /&gt;
! Round Table Data Directory&lt;br /&gt;
!Active&lt;br /&gt;
|-&lt;br /&gt;
| samekh&lt;br /&gt;
| 10.20.0.41:8008&lt;br /&gt;
| REAL-Space-21Q1-M, REAL-Space-21Q1-S, carboxylates_21Q2&lt;br /&gt;
| 4.0TB&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| nun&lt;br /&gt;
| 10.20.0.40:8008&lt;br /&gt;
| 345_acids_20Q4, REAL_Space_21Q2-M, REAL_Space_21Q2-S, acids_21Q2, zinc_21Q1_H01~H25&lt;br /&gt;
| 3.1TB&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| n-1-17&lt;br /&gt;
| 10.20.1.17:8008&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| n-5-32&lt;br /&gt;
| 10.20.5.32:8008&lt;br /&gt;
| zinc_21Q1_H26~H30&lt;br /&gt;
| 6.6TB&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| n-5-33&lt;br /&gt;
| 10.20.5.33:8008&lt;br /&gt;
| acids_21Q2&lt;br /&gt;
| 9.1GB&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Arthor Local 8081 (Datasets all local to samekh/nun)===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! CentOS 7 Machine&lt;br /&gt;
! Port&lt;br /&gt;
! Database&lt;br /&gt;
! Total Files Size&lt;br /&gt;
! Round Table Data Directory&lt;br /&gt;
!Active&lt;br /&gt;
|-&lt;br /&gt;
| samekh&lt;br /&gt;
| 10.20.0.41:8081&lt;br /&gt;
| (TBA)&lt;br /&gt;
| (TBA)&lt;br /&gt;
| /local2/arthor_local_8081/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| nun&lt;br /&gt;
| 10.20.0.40:8081&lt;br /&gt;
| (TBA)&lt;br /&gt;
| (TBA)&lt;br /&gt;
| /local2/arthor_local_8081/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Customizing Arthor Frontend to our needs ==&lt;br /&gt;
The frontend Arthor code is located at &#039;&#039;&#039;/nfs/exc/arthor_configs/*&#039;&#039;&#039; and the &#039;&#039;&#039;*&#039;&#039;&#039; is based on current running version.&lt;br /&gt;
=== Add Arthor Download Options ===&lt;br /&gt;
==== For Arthor 3.4: ====&lt;br /&gt;
1. vim .extract/webapps/ROOT/WEB-INF/static/index.html&lt;br /&gt;
&lt;br /&gt;
2. search: &#039;&#039;&#039;arthor_tsv_link&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
3. in the div with the class=”dropdown-content”, add these link options and change the number accordingly:&lt;br /&gt;
&lt;br /&gt;
               &amp;lt;a id=&amp;quot;arthor_tsv_link&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt; TSV-500&amp;lt;/a&amp;gt;&lt;br /&gt;
               &amp;lt;a id=&amp;quot;arthor_tsv_link_5000&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt; TSV-5,000&amp;lt;/a&amp;gt;&lt;br /&gt;
               &amp;lt;a id=&amp;quot;arthor_tsv_link_50000&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt; TSV-50,000&amp;lt;/a&amp;gt;&lt;br /&gt;
               &amp;lt;a id=&amp;quot;arthor_tsv_link_100000&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt; TSV-100,000&amp;lt;/a&amp;gt;&lt;br /&gt;
               &amp;lt;a id=&amp;quot;arthor_tsv_link_max&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt; TSV-max&amp;lt;/a&amp;gt;&lt;br /&gt;
               &amp;lt;a id=&amp;quot;arthor_csv_link&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt; CSV-500&amp;lt;/a&amp;gt;&lt;br /&gt;
               &amp;lt;a id=&amp;quot;arthor_csv_link_5000&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt; CSV-5,000&amp;lt;/a&amp;gt;&lt;br /&gt;
               &amp;lt;a id=&amp;quot;arthor_csv_link_50000&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt; CSV-50,000&amp;lt;/a&amp;gt;&lt;br /&gt;
               &amp;lt;a id=&amp;quot;arthor_csv_link_100000&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt; CSV-100,000&amp;lt;/a&amp;gt;&lt;br /&gt;
               &amp;lt;a id=&amp;quot;arthor_csv_link_max&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt; CSV-max&amp;lt;/a&amp;gt;&lt;br /&gt;
               &amp;lt;a id=&amp;quot;arthor_sdf_link&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt; SDF-500&amp;lt;/a&amp;gt;&lt;br /&gt;
               &amp;lt;a id=&amp;quot;arthor_sdf_link_5000&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt; SDF-5,000&amp;lt;/a&amp;gt;&lt;br /&gt;
               &amp;lt;a id=&amp;quot;arthor_sdf_link_50000&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt; SDF-50,000&amp;lt;/a&amp;gt;&lt;br /&gt;
               &amp;lt;a id=&amp;quot;arthor_sdf_link_100000&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt; SDF-100,000&amp;lt;/a&amp;gt;&lt;br /&gt;
               &amp;lt;a id=&amp;quot;arthor_sdf_link_max&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt; SDF-max&amp;lt;/a&amp;gt;&lt;br /&gt;
&lt;br /&gt;
4. then vim .extract/webapps/ROOT/WEB-INF/static/js/index.js&lt;br /&gt;
&lt;br /&gt;
5. search: &#039;&#039;&#039;function $(t){&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
6. in the function $(t), add these lines:&lt;br /&gt;
&lt;br /&gt;
 if(document.getElementById(&amp;quot;arthor_tsv_link&amp;quot;)) {&lt;br /&gt;
        var e=i.a.param({query:s.b.query,type:s.b.type,draw:0,start:0,length:t,flags:s.b.flags}),n=s.b.url+&amp;quot;/dt/&amp;quot;+E(s.b.table)+&amp;quot;/search&amp;quot;;i()(&amp;quot;#arthor_sdf_link&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.sdf?&amp;quot;+e),i() (&amp;quot;#arthor_tsv_link&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.tsv?&amp;quot;+e),i()(&amp;quot;#arthor_csv_link&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.csv?&amp;quot;+e)&lt;br /&gt;
 }&lt;br /&gt;
 if (document.getElementById(&amp;quot;arthor_tsv_link_5000&amp;quot;)) {&lt;br /&gt;
        var e=i.a.param({query:s.b.query,type:s.b.type,draw:0,start:0,length:5000,flags:s.b.flags}),n=s.b.url+&amp;quot;/dt/&amp;quot;+E(s.b.table)+&amp;quot;/search&amp;quot;;i()(&amp;quot;#arthor_sdf_link_5000&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.sdf?&amp;quot;+e),i()(&amp;quot;#arthor_tsv_link_5000&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.tsv?&amp;quot;+e),i()(&amp;quot;#arthor_csv_link_5000&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.csv?&amp;quot;+e)&lt;br /&gt;
 }&lt;br /&gt;
 if (document.getElementById(&amp;quot;arthor_tsv_link_50000&amp;quot;)) {&lt;br /&gt;
        var e=i.a.param({query:s.b.query,type:s.b.type,draw:0,start:0,length:50000,flags:s.b.flags}),n=s.b.url+&amp;quot;/dt/&amp;quot;+E(s.b.table)+&amp;quot;/search&amp;quot;;i()(&amp;quot;#arthor_sdf_link_50000&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.sdf?&amp;quot;+e),i()(&amp;quot;#arthor_tsv_link_50000&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.tsv?&amp;quot;+e),i()(&amp;quot;#arthor_csv_link_50000&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.csv?&amp;quot;+e)&lt;br /&gt;
 }&lt;br /&gt;
 if (document.getElementById(&amp;quot;arthor_tsv_link_100000&amp;quot;)) {&lt;br /&gt;
        var e=i.a.param({query:s.b.query,type:s.b.type,draw:0,start:0,length:100000,flags:s.b.flags}),n=s.b.url+&amp;quot;/dt/&amp;quot;+E(s.b.table)+&amp;quot;/search&amp;quot;;i()(&amp;quot;#arthor_sdf_link_100000&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.sdf?&amp;quot;+e),i()(&amp;quot;#arthor_tsv_link_100000&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.tsv?&amp;quot;+e),i()(&amp;quot;#arthor_csv_link_100000&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.csv?&amp;quot;+e)&lt;br /&gt;
 }&lt;br /&gt;
 if (document.getElementById(&amp;quot;arthor_tsv_link_max&amp;quot;)) {&lt;br /&gt;
        var e=i.a.param({query:s.b.query,type:s.b.type,draw:0,start:0,length:1000000000,flags:s.b.flags}),n=s.b.url+&amp;quot;/dt/&amp;quot;+E(s.b.table)+&amp;quot;/search&amp;quot;;i()(&amp;quot;#arthor_sdf_link_max&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.sdf?&amp;quot;+e),i()(&amp;quot;#arthor_tsv_link_max&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.tsv?&amp;quot;+e),i()(&amp;quot;#arthor_csv_link_max&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.csv?&amp;quot;+e)&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
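These handlers just point the download links at Arthor&#039;s DataTables-style search endpoint with a capped &#039;&#039;&#039;length&#039;&#039;&#039; parameter. As a sanity check, here is a minimal sketch of fetching one of these exports outside the browser; the host, port, and table name are placeholders, not our actual deployment:&lt;br /&gt;
&lt;br /&gt;
    import urllib.parse&lt;br /&gt;
    import urllib.request&lt;br /&gt;
    &lt;br /&gt;
    base = &amp;quot;http://localhost:8080&amp;quot;      # placeholder Arthor server&lt;br /&gt;
    table = &amp;quot;example.smi&amp;quot;                # placeholder table name&lt;br /&gt;
    params = urllib.parse.urlencode({&lt;br /&gt;
        &amp;quot;query&amp;quot;: &amp;quot;c1ccccc1&amp;quot;,             # benzene as an example substructure query&lt;br /&gt;
        &amp;quot;type&amp;quot;: &amp;quot;Substructure&amp;quot;,&lt;br /&gt;
        &amp;quot;draw&amp;quot;: 0,&lt;br /&gt;
        &amp;quot;start&amp;quot;: 0,&lt;br /&gt;
        &amp;quot;length&amp;quot;: 5000,                   # same cap the TSV-5,000 link uses&lt;br /&gt;
    })&lt;br /&gt;
    url = base + &amp;quot;/dt/&amp;quot; + urllib.parse.quote(table) + &amp;quot;/search.tsv?&amp;quot; + params&lt;br /&gt;
    with urllib.request.urlopen(url) as response:&lt;br /&gt;
        open(&amp;quot;hits.tsv&amp;quot;, &amp;quot;wb&amp;quot;).write(response.read())&lt;br /&gt;
&lt;br /&gt;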
=== Take out Similarity Button ===&lt;br /&gt;
 vim .extract/webapps/ROOT/WEB-INF/static/index.html&lt;br /&gt;
 search: &#039;&#039;&#039;Similarity&#039;&#039;&#039;&lt;br /&gt;
 Comment out this line: &#039;&#039;&#039;&amp;lt; li value=&amp;quot;Similarity&amp;quot; onclick=&amp;quot;setSearchType(this)&amp;quot; class=&amp;quot;first&amp;quot;&amp;gt; Similarity &amp;lt;/li &amp;gt;&#039;&#039;&#039; (spaces were added inside the tags here to prevent the wiki from rendering them)&lt;br /&gt;
 Then add &amp;quot;first&amp;quot; to the Substructure item&#039;s class.&lt;br /&gt;
=== Hyperlink to zinc20 ===&lt;br /&gt;
 vim .extract/webapps/ROOT/WEB-INF/static/js/index.js&lt;br /&gt;
 search: &#039;&#039;&#039;table_name&#039;&#039;&#039;&lt;br /&gt;
 *find this line: &amp;quot;&amp;lt; b&amp;gt;&amp;quot; + d + &amp;quot;&amp;lt; /b&amp;gt;&amp;quot;&lt;br /&gt;
 *replace it with &#039;&#039;&#039;&amp;quot;&amp;lt; b&amp;gt;&amp;lt;a target=&#039;_blank&#039; href=&#039;https://zinc20.docking.org/substances/&amp;quot;+d+&amp;quot;&#039;&amp;gt;&amp;quot; + d + &amp;quot;&amp;lt;/a&amp;gt;&amp;lt;/b &amp;gt;&amp;quot;&#039;&#039;&#039; (spaces were added inside the tags here to prevent the wiki from rendering them)&lt;br /&gt;
&lt;br /&gt;
=== Make Input Box Work ===&lt;br /&gt;
 At the end of the Arthor config file add this:&lt;br /&gt;
    Resolver=https://sw.docking.org/util/smi2mol?smi=%s&lt;br /&gt;
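 The %s in the Resolver URL is replaced with the input SMILES, and the service returns a molfile for the sketcher. A quick way to sanity-check the resolver by hand (benzene used as an arbitrary example input):&lt;br /&gt;
    import urllib.parse&lt;br /&gt;
    import urllib.request&lt;br /&gt;
    &lt;br /&gt;
    smi = &amp;quot;c1ccccc1&amp;quot;&lt;br /&gt;
    url = &amp;quot;https://sw.docking.org/util/smi2mol?smi=&amp;quot; + urllib.parse.quote(smi)&lt;br /&gt;
    print(urllib.request.urlopen(url).read().decode())   # should print a molfile&lt;br /&gt;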
 To copy the SMILES into the input box:&lt;br /&gt;
    vim .extract/webapps/ROOT/WEB-INF/static/js/index.js&lt;br /&gt;
    search this: &amp;quot;var e=t.src.smiles()&amp;quot;&lt;br /&gt;
    add this after the semicolon:&lt;br /&gt;
        document.getElementById(&amp;quot;ar_text_input&amp;quot;).value = e;&lt;/div&gt;</summary>
		<author><name>Jgutie11</name></author>
	</entry>
	<entry>
		<id>http://wiki.docking.org/index.php?title=Arthor_Documentation_for_Future_Developer&amp;diff=13522</id>
		<title>Arthor Documentation for Future Developer</title>
		<link rel="alternate" type="text/html" href="http://wiki.docking.org/index.php?title=Arthor_Documentation_for_Future_Developer&amp;diff=13522"/>
		<updated>2021-05-04T00:21:02Z</updated>

		<summary type="html">&lt;p&gt;Jgutie11: /* Arthor Local 8081 (Datasets all local to samekh/nun) */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Written by Jennifer Young on December 16, 2019. Last edited January 05, 2021&lt;br /&gt;
&lt;br /&gt;
==Install and Set Up on Tomcat (Method 1)==&lt;br /&gt;
Arthor ran on n-1-136, which runs CentOS Linux release 7.7.1908 (Core).  You can check the CentOS version with the following command:&lt;br /&gt;
     cat /etc/centos-release&lt;br /&gt;
&lt;br /&gt;
Check your current version of Java with the following command:&lt;br /&gt;
    java -version&lt;br /&gt;
&lt;br /&gt;
On n-1-136 we are running openjdk version &amp;quot;1.8.0_222&amp;quot;, OpenJDK Runtime Environment (build 1.8.0_222-b10), and OpenJDK 64-Bit Server VM (build 25.222-b10, mixed mode).&lt;br /&gt;
If Java is not installed, install it using yum.&lt;br /&gt;
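For example, to install the stock OpenJDK 8 on CentOS 7 (run as root; the package name assumes the standard CentOS repositories):&lt;br /&gt;
    yum install java-1.8.0-openjdk&lt;br /&gt;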
&lt;br /&gt;
==Installing Tomcat on our cluster==&lt;br /&gt;
See this wiki page for more detailed information: http://wiki.docking.org/index.php/Tomcat_Installation&lt;br /&gt;
&lt;br /&gt;
==Open port for Arthor==&lt;br /&gt;
In order for Arthor to be usable in the browser, the port you wish to run it on must be opened.&lt;br /&gt;
https://www.thegeekdiary.com/how-to-open-a-ports-in-centos-rhel-7/&lt;br /&gt;
&lt;br /&gt;
===Step 1: Check Port Status===&lt;br /&gt;
Check that the port is not already in use and that Apache is not already listening on it.&lt;br /&gt;
    netstat -na | grep &amp;lt;port number you are checking&amp;gt;&lt;br /&gt;
&lt;br /&gt;
    lsof -i -P | grep http&lt;br /&gt;
&lt;br /&gt;
===Step 2: Check Port Status in IP Tables===&lt;br /&gt;
    iptables-save | grep &amp;lt;port number you are checking&amp;gt;&lt;br /&gt;
&lt;br /&gt;
I skipped Step 3 from the guide because /etc/services contains a lot of information and I didn&#039;t want to edit it and break something.&lt;br /&gt;
&lt;br /&gt;
===Step 4: Open Firewall Ports===&lt;br /&gt;
I did not include the zone=public section because the stand-alone servers are usually used for private instances of Arthor and SmallWorld.&lt;br /&gt;
Run as root.&lt;br /&gt;
    firewall-cmd --add-port=&amp;lt;port number you are adding&amp;gt;/tcp --permanent&lt;br /&gt;
&lt;br /&gt;
You need to reload the firewall after a change is made.&lt;br /&gt;
    firewall-cmd --reload&lt;br /&gt;
&lt;br /&gt;
===Step 5: Check that port is working===&lt;br /&gt;
To check that the port is active, run:&lt;br /&gt;
    iptables -nL&lt;br /&gt;
&lt;br /&gt;
You should see something along the lines of: &lt;br /&gt;
    ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:&amp;lt;port number you&#039;re adding&amp;gt; ctstate NEW,UNTRACKED&lt;br /&gt;
&lt;br /&gt;
==How to run standalone Arthor instance==&lt;br /&gt;
&lt;br /&gt;
===Step 1: Use or start a bash shell===&lt;br /&gt;
You can check your default shell using&lt;br /&gt;
    echo $SHELL&lt;br /&gt;
&lt;br /&gt;
If your default shell is csh, use &lt;br /&gt;
    bash&lt;br /&gt;
to start a new bash shell in the current terminal window.  Note that echo $SHELL will show you your default shell regardless of the current shell.&lt;br /&gt;
&lt;br /&gt;
===Step 2: Set your environment variables===&lt;br /&gt;
    export ARTHOR_DIR=/opt/nextmove/arthor/arthor-3.3-centos7&lt;br /&gt;
    export PATH=$ARTHOR_DIR/bin/:$PATH&lt;br /&gt;
&lt;br /&gt;
Make sure the ARTHOR_DIR variable is set to the directory for the latest version of Arthor or whichever version you would like to test.&lt;br /&gt;
The PATH environment variable is needed if you wish to use the Arthor tools from the command line.&lt;br /&gt;
&lt;br /&gt;
===Step 3: Run arthor.jar===&lt;br /&gt;
    java -jar /opt/nextmove/arthor/arthor-3.3-centos7/java/arthor.jar --httpPort &amp;lt;your httpPort&amp;gt;&lt;br /&gt;
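With the ARTHOR_DIR variable from Step 2, the same command can be written as follows (port 8080 is just an arbitrary example):&lt;br /&gt;
    java -jar $ARTHOR_DIR/java/arthor.jar --httpPort 8080&lt;br /&gt;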
&lt;br /&gt;
==Setting environment variables for an Arthor Server==&lt;br /&gt;
Set the environment variables in the setenv.sh file.  Note: Be sure to edit the file in the directory corresponding to the latest version of Tomcat.  As of December 2019, we are running 9.0.27 on n-1-136.&lt;br /&gt;
&lt;br /&gt;
   vim  /opt/tomcat/apache-tomcat-9.0.27/bin/setenv.sh&lt;br /&gt;
&lt;br /&gt;
Add the line below to the setenv.sh file above, substituting the path to wherever you currently store the arthor.cfg file:&lt;br /&gt;
   export ARTHOR_CONFIG=/usr/local/tomcat/arthor.cfg&lt;br /&gt;
&lt;br /&gt;
Here is an example of the arthor.cfg file:&lt;br /&gt;
   BinDir=/opt/nextmove/arthor/arthor-3.3-centos7/bin&lt;br /&gt;
   DataDir=/local2/arthor_local_8081/&lt;br /&gt;
   MaxConcurrentSearches=6&lt;br /&gt;
   MaxThreadsPerSearch=8&lt;br /&gt;
   AutomaticIndex=false&lt;br /&gt;
   AsyncHitCountMax=1000000&lt;br /&gt;
   Resolver=https://sw.docking.org/util/smi2mol?smi=%s&lt;br /&gt;
&lt;br /&gt;
=== Configuration Details ===&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;BinDir&#039;&#039;&#039;: The location of the Arthor command-line binaries.  These are used to generate the Arthor index files and to perform searches directly on n-1-136, for example using atdbgrep for substructure search.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;DataDir&#039;&#039;&#039;: The directory where the Arthor data files live; index files are created in and loaded from this location.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;MaxConcurrentSearches&#039;&#039;&#039;: Controls the maximum number of searches that can be run concurrently by setting the database pool size. When switching between a large number of databases it can be useful to have a larger pool size; the only trade-off is keeping file pointers open.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;MaxThreadsPerSearch&#039;&#039;&#039;: The number of threads to use for both ATDB and ATFP searches.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;AutomaticIndex&#039;&#039;&#039;: Set this to false if you don&#039;t want new SMILES files added to the data directory to be indexed automatically.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;AsyncHitCountMax&#039;&#039;&#039;: The upper bound for the number of hits to retrieve in background searches.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Resolver&#039;&#039;&#039;: Uses the SmallWorld API to let the input box accept a SMILES string and automatically draw it on the sketcher.&lt;br /&gt;
&lt;br /&gt;
Check the Arthor manual for more configuration options.&lt;br /&gt;
&lt;br /&gt;
==Background==&lt;br /&gt;
Before working with Arthor, it is recommended that you familiarize yourself with the Arthor documentation. Some useful pages to look at include 3-5, 22-25, and 33-39. Of course, reading everything would be best!&lt;br /&gt;
&lt;br /&gt;
==Checking Disk Space Usage==&lt;br /&gt;
Before building Arthor indexes, it&#039;s always a good idea to check how much disk space is in use. Be cautious about how much space you have left, and keep checking while building indexes to make sure you don&#039;t run out. To check, run the following command:&lt;br /&gt;
&lt;br /&gt;
   df -h &amp;lt;directory to check&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Building Large Databases==&lt;br /&gt;
At the moment, we are building databases of roughly 500M molecules by merging SMILES files. There are multiple ways to create large databases, one being to merge files sharing the same H?? prefix and stop once the database exceeds 500M molecules (or whatever upper bound you want to use). Here is some Python code that performs this merging. Essentially, the program takes all of the .smi files within an input directory, sorts them lexicographically, and merges them together in order until a batch exceeds 500M molecules.&lt;br /&gt;
   &lt;br /&gt;
Feel free to modify it if you think a better method exists.&lt;br /&gt;
   &lt;br /&gt;
   import subprocess&lt;br /&gt;
   &lt;br /&gt;
   from os import listdir&lt;br /&gt;
   from os.path import isfile, join&lt;br /&gt;
   &lt;br /&gt;
   mypath = &amp;quot;&amp;lt;Path to directory holding .smi files&amp;gt;&amp;quot;&lt;br /&gt;
   # Collect only the .smi files, sorted lexicographically so merges are deterministic&lt;br /&gt;
   smi_files = sorted(f for f in listdir(mypath) if isfile(join(mypath, f)) and f.endswith(&amp;quot;.smi&amp;quot;))&lt;br /&gt;
   &lt;br /&gt;
   cur_mols = 0                  # molecules accumulated in the current batch&lt;br /&gt;
   lower_bound = 500000000       # target batch size (500M molecules)&lt;br /&gt;
   upper_bound = 600000000       # hard cap for a single batch&lt;br /&gt;
   files_to_merge = []&lt;br /&gt;
   &lt;br /&gt;
   def merge_files(f_t_m):&lt;br /&gt;
       # Name the merged file after its first and last members, e.g. H01_H05.smi&lt;br /&gt;
       first = f_t_m[0].split(&amp;quot;.&amp;quot;)[0]&lt;br /&gt;
       last = f_t_m[-1].split(&amp;quot;.&amp;quot;)[0]&lt;br /&gt;
       file_name_merge = first + &amp;quot;_&amp;quot; + last + &amp;quot;.smi&amp;quot;&lt;br /&gt;
       print(&amp;quot;File being created: &amp;quot; + file_name_merge)&lt;br /&gt;
       for fname in f_t_m:&lt;br /&gt;
           # Append each member file onto the merged file&lt;br /&gt;
           process = subprocess.Popen(&amp;quot;cat &amp;quot; + join(mypath, fname) + &amp;quot; &amp;gt;&amp;gt; &amp;quot; + file_name_merge, shell=True)&lt;br /&gt;
           process.wait()&lt;br /&gt;
   &lt;br /&gt;
   for fname in smi_files:&lt;br /&gt;
       mol = sum(1 for line in open(join(mypath, fname)))  # one molecule per SMILES line&lt;br /&gt;
       print(fname, mol, cur_mols)&lt;br /&gt;
       if cur_mols + mol &amp;gt; lower_bound:&lt;br /&gt;
           if cur_mols + mol &amp;lt; upper_bound:&lt;br /&gt;
               # This file lands the batch between the bounds: merge it in&lt;br /&gt;
               files_to_merge.append(fname)&lt;br /&gt;
               merge_files(files_to_merge)&lt;br /&gt;
           else:&lt;br /&gt;
               # This file would overshoot the cap: flush the batch, then merge the file alone&lt;br /&gt;
               merge_files(files_to_merge)&lt;br /&gt;
               merge_files([fname])&lt;br /&gt;
           cur_mols = 0&lt;br /&gt;
           files_to_merge = []&lt;br /&gt;
       else:&lt;br /&gt;
           cur_mols += mol&lt;br /&gt;
           files_to_merge.append(fname)&lt;br /&gt;
   &lt;br /&gt;
   if files_to_merge:&lt;br /&gt;
       merge_files(files_to_merge)   # flush whatever is left over&lt;br /&gt;
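&lt;br /&gt;
Note that the merged files are written to the directory you run the script from, since file_name_merge is a relative path; run it from wherever you want the merged .smi files to end up.&lt;br /&gt;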
&lt;br /&gt;
==Building Arthor Indexes==&lt;br /&gt;
Once you&#039;ve merged the .smi files together, it&#039;s time to start building the databases themselves. To do this we use the command &lt;br /&gt;
&lt;br /&gt;
   smi2atdb -j 0 -p &amp;lt;The .smi file&amp;gt; &amp;lt;The .atdb&amp;gt; &lt;br /&gt;
&lt;br /&gt;
The flag &amp;quot;-j 0&amp;quot; enables parallel generation and utilizes all available processors to generate the .atdb file. The &amp;quot;-p&amp;quot; flag stores the offset position in the ATDB file. Since we&#039;re building indexes for the Web Application, you must use the &amp;quot;-p&amp;quot; flag when building indexes. Please note that the name of the .smi file should also be the name of the .atdb file. That way, the Web Application knows to use these files together and correctly display the required images. Refer to pages 33-34 in the Arthor documentation for more information.&lt;br /&gt;
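&lt;br /&gt;
For example, a merged file named H01_H05.smi (a hypothetical name) would be indexed as&lt;br /&gt;
&lt;br /&gt;
   smi2atdb -j 0 -p H01_H05.smi H01_H05.atdb&lt;br /&gt;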
&lt;br /&gt;
If there are many large .smi files and you do not want to build each .atdb file manually, you can use this Python script, which takes all of the .smi files in a directory and converts them to .atdb files. Make sure to set mypath to the directory containing the .smi files. You can set the variable &amp;quot;create_fp&amp;quot; to False if you don&#039;t want to create .atdb.fp files (refer to page 9 in the Arthor documentation).&lt;br /&gt;
&lt;br /&gt;
   import subprocess&lt;br /&gt;
   &lt;br /&gt;
   from os import listdir&lt;br /&gt;
   from os.path import isfile, join&lt;br /&gt;
   &lt;br /&gt;
   ARTHOR_BIN = &amp;quot;/nfs/ex9/work/xyz/psql/arthor-3.3-centos7/bin&amp;quot;&lt;br /&gt;
   mypath = &amp;quot;&amp;lt;Path containing the .smi files&amp;gt;&amp;quot;&lt;br /&gt;
   create_fp = True   # set to False to skip building .atdb.fp files&lt;br /&gt;
   &lt;br /&gt;
   for fname in listdir(mypath):&lt;br /&gt;
       if not (isfile(join(mypath, fname)) and fname.endswith(&amp;quot;.smi&amp;quot;)):&lt;br /&gt;
           continue&lt;br /&gt;
       base = fname.rsplit(&amp;quot;.&amp;quot;, 1)[0]&lt;br /&gt;
       # Build the .atdb index; -j 0 uses all processors, -p stores offsets (required for the web app)&lt;br /&gt;
       process = subprocess.Popen(&amp;quot;{0}/smi2atdb -j 0 -p {1} {2}.atdb&amp;quot;.format(ARTHOR_BIN, join(mypath, fname), base), shell=True)&lt;br /&gt;
       process.wait()&lt;br /&gt;
       print(&amp;quot;SUCCESS! {0}.atdb file was created!&amp;quot;.format(base))&lt;br /&gt;
       &lt;br /&gt;
       if create_fp:&lt;br /&gt;
           # Build the .atdb.fp fingerprint file (see page 9 of the Arthor documentation)&lt;br /&gt;
           process = subprocess.Popen(&amp;quot;{0}/atdb2fp -j 0 {1}.atdb&amp;quot;.format(ARTHOR_BIN, base), shell=True)&lt;br /&gt;
           process.wait()&lt;br /&gt;
           print(&amp;quot;SUCCESS! {0}.atdb.fp file was created!&amp;quot;.format(base))&lt;br /&gt;
&lt;br /&gt;
==Uploading Indexes to the Web Application==&lt;br /&gt;
You can expose indexes to the Web Application by pointing the &amp;quot;DataDir&amp;quot; variable in the arthor.cfg file at the directory holding the .atdb files. This is already set up on n-1-136 and n-5-34.&lt;br /&gt;
   &lt;br /&gt;
==Virtual Memory==&lt;br /&gt;
In addition to modifying the arthor.cfg file, virtual memory can also be used to make queries faster. More information can be found on pages 10-16 of the Arthor documentation.&lt;br /&gt;
&lt;br /&gt;
==Setting up Round Table==&lt;br /&gt;
This is a new feature in Arthor 3.0 and is currently in beta (January 2020). See section 2.4 in the manual.&lt;br /&gt;
As explained in the manual, &amp;quot;Round Table allows you to serve and split chemical searches across multiple host machines.  The implementation provides a lightweight proxy that forwards requests to other Arthor host servers that do the actual search.  Communication is done using the existing Web APIs.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Since Arthor requires CentOS 7, as of January 2020 we have 6 servers that are capable of running Arthor with Round Table.&lt;br /&gt;
&lt;br /&gt;
===Setting up Host Server===&lt;br /&gt;
If we want to add machines to the Round Table, for example &#039;nun&#039; and &#039;samekh&#039;, we need to edit their arthor.cfg files so that when the head machine forwards a query, these secondary servers know to perform the search they are given.&lt;br /&gt;
&lt;br /&gt;
   $ cat arthor.cfg&lt;br /&gt;
   MaxThreadsPerSearch=4 &lt;br /&gt;
   AutomaticIndex=false &lt;br /&gt;
   DataDir=&amp;lt;Directory where the SMILES files are located&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We then run the jar server on each of these data-hosting machines, on any available port.&lt;br /&gt;
&lt;br /&gt;
   java -jar /nfs/ex9/work/xyz/psql/arthor-3.3-centos7/java/arthor.jar --httpPort &amp;lt;port&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For our local (head) machine, the arthor.cfg file will look different:&lt;br /&gt;
&lt;br /&gt;
   $ cat arthor.cfg&lt;br /&gt;
   [RoundTable] &lt;br /&gt;
   RemoteClient=http://skynet:&amp;lt;port number where jar server is running&amp;gt;/ &lt;br /&gt;
   RemoteClient=http://hal:&amp;lt;port number where jar server is running&amp;gt;/&lt;br /&gt;
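&lt;br /&gt;
Putting this together for the &#039;nun&#039;/&#039;samekh&#039; example above, the head&#039;s arthor.cfg might look like the following sketch (assuming the node jar servers are listening on port 8008, as in the tables below):&lt;br /&gt;
&lt;br /&gt;
   $ cat arthor.cfg&lt;br /&gt;
   [RoundTable]&lt;br /&gt;
   RemoteClient=http://nun:8008/&lt;br /&gt;
   RemoteClient=http://samekh:8008/&lt;br /&gt;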
&lt;br /&gt;
Please refer to Section 2 in the RoundTable Documentation file (pages 6-8) for more useful information on configuration.&lt;br /&gt;
&lt;br /&gt;
Then run the following command on n-1-136:&lt;br /&gt;
 &lt;br /&gt;
   java -jar /nfs/ex9/work/xyz/psql/arthor-3.3-centos7/java/arthor.jar --httpPort &amp;lt;port&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===&#039;&#039;&#039;***Arthor configs and frontend code are located in /nfs/exc/arthor_configs/***&#039;&#039;&#039;===&lt;br /&gt;
&lt;br /&gt;
===Public Arthor===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! CentOS 7 Machine&lt;br /&gt;
! Port&lt;br /&gt;
! Total File Size&lt;br /&gt;
! Round Table Data Directory&lt;br /&gt;
! Active&lt;br /&gt;
|-&lt;br /&gt;
| samekh&lt;br /&gt;
| 10.20.0.41:8000&lt;br /&gt;
| 2.4TB&lt;br /&gt;
| /local2/public_arthor/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| nun&lt;br /&gt;
| 10.20.0.40:8000&lt;br /&gt;
| 2.4TB&lt;br /&gt;
| /local2/public_arthor/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| n-9-22&lt;br /&gt;
| 10.20.9.22:8000&lt;br /&gt;
| 2.4TB&lt;br /&gt;
| /export/db4/public_arthor/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Arthor Round Table Head===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! CentOS 7 Machine&lt;br /&gt;
! Port&lt;br /&gt;
! Database&lt;br /&gt;
! Round Table Data Directory&lt;br /&gt;
! Active&lt;br /&gt;
|-&lt;br /&gt;
| samekh&lt;br /&gt;
| 10.20.0.41:8080&lt;br /&gt;
| REAL-Space-21Q1(private)&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| nun&lt;br /&gt;
| 10.20.0.40:8080&lt;br /&gt;
| REAL_Space_21Q2(super private)&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Arthor Round Table Nodes===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! CentOS 7 Machine&lt;br /&gt;
! Port&lt;br /&gt;
! Database&lt;br /&gt;
! Total File Size&lt;br /&gt;
! Round Table Data Directory&lt;br /&gt;
! Active&lt;br /&gt;
|-&lt;br /&gt;
| samekh&lt;br /&gt;
| 10.20.0.41:8008&lt;br /&gt;
| REAL-Space-21Q1-M, REAL-Space-21Q1-S, carboxylates_21Q2&lt;br /&gt;
| 4.0TB&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| nun&lt;br /&gt;
| 10.20.0.40:8008&lt;br /&gt;
| 345_acids_20Q4, REAL_Space_21Q2-M, REAL_Space_21Q2-S, acids_21Q2, zinc_21Q1_H01~H25&lt;br /&gt;
| 3.1TB&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| n-1-17&lt;br /&gt;
| 10.20.1.17:8008&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| n-5-32&lt;br /&gt;
| 10.20.5.32:8008&lt;br /&gt;
| zinc_21Q1_H26~H30&lt;br /&gt;
| (TBA)&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| n-5-33&lt;br /&gt;
| 10.20.5.33:8008&lt;br /&gt;
| acids_21Q2&lt;br /&gt;
| 9.1GB&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Arthor Local 8081 (Datasets all local to samekh/nun)===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! CentOS 7 Machine&lt;br /&gt;
! Port&lt;br /&gt;
! Database&lt;br /&gt;
! Total File Size&lt;br /&gt;
! Round Table Data Directory&lt;br /&gt;
! Active&lt;br /&gt;
|-&lt;br /&gt;
| samekh&lt;br /&gt;
| 10.20.0.41:8081&lt;br /&gt;
| (TBA)&lt;br /&gt;
| (TBA)&lt;br /&gt;
| /local2/arthor_local_8081/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| nun&lt;br /&gt;
| 10.20.0.40:8081&lt;br /&gt;
| (TBA)&lt;br /&gt;
| (TBA)&lt;br /&gt;
| /local2/arthor_local_8081/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Customizing Arthor Frontend to our needs ==&lt;br /&gt;
The Arthor frontend code is located at &#039;&#039;&#039;/nfs/exc/arthor_configs/*&#039;&#039;&#039;, where the &#039;&#039;&#039;*&#039;&#039;&#039; corresponds to the currently running version.&lt;br /&gt;
=== Add Arthor Download Options ===&lt;br /&gt;
==== For Arthor 3.4: ====&lt;br /&gt;
1. vim .extract/webapps/ROOT/WEB-INF/static/index.html&lt;br /&gt;
&lt;br /&gt;
2. search: &#039;&#039;&#039;arthor_tsv_link&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
3. In the div with class=&amp;quot;dropdown-content&amp;quot;, add these link options, changing the numbers as needed:&lt;br /&gt;
&lt;br /&gt;
               &amp;lt;a id=&amp;quot;arthor_tsv_link&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt; TSV-500&amp;lt;/a&amp;gt;&lt;br /&gt;
               &amp;lt;a id=&amp;quot;arthor_tsv_link_5000&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt; TSV-5,000&amp;lt;/a&amp;gt;&lt;br /&gt;
               &amp;lt;a id=&amp;quot;arthor_tsv_link_50000&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt; TSV-50,000&amp;lt;/a&amp;gt;&lt;br /&gt;
               &amp;lt;a id=&amp;quot;arthor_tsv_link_100000&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt; TSV-100,000&amp;lt;/a&amp;gt;&lt;br /&gt;
               &amp;lt;a id=&amp;quot;arthor_tsv_link_max&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt; TSV-max&amp;lt;/a&amp;gt;&lt;br /&gt;
               &amp;lt;a id=&amp;quot;arthor_csv_link&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt; CSV-500&amp;lt;/a&amp;gt;&lt;br /&gt;
               &amp;lt;a id=&amp;quot;arthor_csv_link_5000&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt; CSV-5,000&amp;lt;/a&amp;gt;&lt;br /&gt;
               &amp;lt;a id=&amp;quot;arthor_csv_link_50000&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt; CSV-50,000&amp;lt;/a&amp;gt;&lt;br /&gt;
               &amp;lt;a id=&amp;quot;arthor_csv_link_100000&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt; CSV-100,000&amp;lt;/a&amp;gt;&lt;br /&gt;
               &amp;lt;a id=&amp;quot;arthor_csv_link_max&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt; CSV-max&amp;lt;/a&amp;gt;&lt;br /&gt;
               &amp;lt;a id=&amp;quot;arthor_sdf_link&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt; SDF-500&amp;lt;/a&amp;gt;&lt;br /&gt;
               &amp;lt;a id=&amp;quot;arthor_sdf_link_5000&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt; SDF-5,000&amp;lt;/a&amp;gt;&lt;br /&gt;
               &amp;lt;a id=&amp;quot;arthor_sdf_link_50000&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt; SDF-50,000&amp;lt;/a&amp;gt;&lt;br /&gt;
               &amp;lt;a id=&amp;quot;arthor_sdf_link_100000&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt; SDF-100,000&amp;lt;/a&amp;gt;&lt;br /&gt;
               &amp;lt;a id=&amp;quot;arthor_sdf_link_max&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt; SDF-max&amp;lt;/a&amp;gt;&lt;br /&gt;
&lt;br /&gt;
4. Then vim .extract/webapps/ROOT/WEB-INF/static/js/index.js&lt;br /&gt;
&lt;br /&gt;
5. search: &#039;&#039;&#039;function $(t){&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
6. In the function $(t), add these lines:&lt;br /&gt;
&lt;br /&gt;
 if(document.getElementById(&amp;quot;arthor_tsv_link&amp;quot;)) {&lt;br /&gt;
        var e=i.a.param({query:s.b.query,type:s.b.type,draw:0,start:0,length:t,flags:s.b.flags}),n=s.b.url+&amp;quot;/dt/&amp;quot;+E(s.b.table)+&amp;quot;/search&amp;quot;;i()(&amp;quot;#arthor_sdf_link&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.sdf?&amp;quot;+e),i() (&amp;quot;#arthor_tsv_link&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.tsv?&amp;quot;+e),i()(&amp;quot;#arthor_csv_link&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.csv?&amp;quot;+e)&lt;br /&gt;
 }&lt;br /&gt;
 if (document.getElementById(&amp;quot;arthor_tsv_link_5000&amp;quot;)) {&lt;br /&gt;
        var e=i.a.param({query:s.b.query,type:s.b.type,draw:0,start:0,length:5000,flags:s.b.flags}),n=s.b.url+&amp;quot;/dt/&amp;quot;+E(s.b.table)+&amp;quot;/search&amp;quot;;i()(&amp;quot;#arthor_sdf_link_5000&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.sdf?&amp;quot;+e),i()(&amp;quot;#arthor_tsv_link_5000&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.tsv?&amp;quot;+e),i()(&amp;quot;#arthor_csv_link_5000&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.csv?&amp;quot;+e)&lt;br /&gt;
 }&lt;br /&gt;
 if (document.getElementById(&amp;quot;arthor_tsv_link_50000&amp;quot;)) {&lt;br /&gt;
        var e=i.a.param({query:s.b.query,type:s.b.type,draw:0,start:0,length:50000,flags:s.b.flags}),n=s.b.url+&amp;quot;/dt/&amp;quot;+E(s.b.table)+&amp;quot;/search&amp;quot;;i()(&amp;quot;#arthor_sdf_link_50000&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.sdf?&amp;quot;+e),i()(&amp;quot;#arthor_tsv_link_50000&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.tsv?&amp;quot;+e),i()(&amp;quot;#arthor_csv_link_50000&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.csv?&amp;quot;+e)&lt;br /&gt;
 }&lt;br /&gt;
 if (document.getElementById(&amp;quot;arthor_tsv_link_100000&amp;quot;)) {&lt;br /&gt;
        var e=i.a.param({query:s.b.query,type:s.b.type,draw:0,start:0,length:100000,flags:s.b.flags}),n=s.b.url+&amp;quot;/dt/&amp;quot;+E(s.b.table)+&amp;quot;/search&amp;quot;;i()(&amp;quot;#arthor_sdf_link_100000&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.sdf?&amp;quot;+e),i()(&amp;quot;#arthor_tsv_link_100000&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.tsv?&amp;quot;+e),i()(&amp;quot;#arthor_csv_link_100000&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.csv?&amp;quot;+e)&lt;br /&gt;
 }&lt;br /&gt;
 if (document.getElementById(&amp;quot;arthor_tsv_link_max&amp;quot;)) {&lt;br /&gt;
        var e=i.a.param({query:s.b.query,type:s.b.type,draw:0,start:0,length:1000000000,flags:s.b.flags}),n=s.b.url+&amp;quot;/dt/&amp;quot;+E(s.b.table)+&amp;quot;/search&amp;quot;;i()(&amp;quot;#arthor_sdf_link_max&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.sdf?&amp;quot;+e),i()(&amp;quot;#arthor_tsv_link_max&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.tsv?&amp;quot;+e),i()(&amp;quot;#arthor_csv_link_max&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.csv?&amp;quot;+e)&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
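These handlers just point the download links at Arthor&#039;s DataTables-style search endpoint with a capped &#039;&#039;&#039;length&#039;&#039;&#039; parameter. As a sanity check, here is a minimal sketch of fetching one of these exports outside the browser; the host, port, and table name are placeholders, not our actual deployment:&lt;br /&gt;
&lt;br /&gt;
    import urllib.parse&lt;br /&gt;
    import urllib.request&lt;br /&gt;
    &lt;br /&gt;
    base = &amp;quot;http://localhost:8080&amp;quot;      # placeholder Arthor server&lt;br /&gt;
    table = &amp;quot;example.smi&amp;quot;                # placeholder table name&lt;br /&gt;
    params = urllib.parse.urlencode({&lt;br /&gt;
        &amp;quot;query&amp;quot;: &amp;quot;c1ccccc1&amp;quot;,             # benzene as an example substructure query&lt;br /&gt;
        &amp;quot;type&amp;quot;: &amp;quot;Substructure&amp;quot;,&lt;br /&gt;
        &amp;quot;draw&amp;quot;: 0,&lt;br /&gt;
        &amp;quot;start&amp;quot;: 0,&lt;br /&gt;
        &amp;quot;length&amp;quot;: 5000,                   # same cap the TSV-5,000 link uses&lt;br /&gt;
    })&lt;br /&gt;
    url = base + &amp;quot;/dt/&amp;quot; + urllib.parse.quote(table) + &amp;quot;/search.tsv?&amp;quot; + params&lt;br /&gt;
    with urllib.request.urlopen(url) as response:&lt;br /&gt;
        open(&amp;quot;hits.tsv&amp;quot;, &amp;quot;wb&amp;quot;).write(response.read())&lt;br /&gt;
&lt;br /&gt;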
=== Take out Similarity Button ===&lt;br /&gt;
 vim .extract/webapps/ROOT/WEB-INF/static/index.html&lt;br /&gt;
 search: &#039;&#039;&#039;Similarity&#039;&#039;&#039;&lt;br /&gt;
 Comment out this line: &#039;&#039;&#039;&amp;lt; li value=&amp;quot;Similarity&amp;quot; onclick=&amp;quot;setSearchType(this)&amp;quot; class=&amp;quot;first&amp;quot;&amp;gt; Similarity &amp;lt;/li &amp;gt;&#039;&#039;&#039; (spaces were added inside the tags here to prevent the wiki from rendering them)&lt;br /&gt;
 Then add &amp;quot;first&amp;quot; to the Substructure item&#039;s class.&lt;br /&gt;
=== Hyperlink to zinc20 ===&lt;br /&gt;
 vim .extract/webapps/ROOT/WEB-INF/static/js/index.js&lt;br /&gt;
 search: &#039;&#039;&#039;table_name&#039;&#039;&#039;&lt;br /&gt;
 *find this line: &amp;quot;&amp;lt; b&amp;gt;&amp;quot; + d + &amp;quot;&amp;lt; /b&amp;gt;&amp;quot;&lt;br /&gt;
 *replace it with &#039;&#039;&#039;&amp;quot;&amp;lt; b&amp;gt;&amp;lt;a target=&#039;_blank&#039; href=&#039;https://zinc20.docking.org/substances/&amp;quot;+d+&amp;quot;&#039;&amp;gt;&amp;quot; + d + &amp;quot;&amp;lt;/a&amp;gt;&amp;lt;/b &amp;gt;&amp;quot;&#039;&#039;&#039; (spaces were added inside the tags here to prevent the wiki from rendering them)&lt;br /&gt;
&lt;br /&gt;
=== Make Input Box Work ===&lt;br /&gt;
 At the end of the Arthor config file add this:&lt;br /&gt;
    Resolver=https://sw.docking.org/util/smi2mol?smi=%s&lt;br /&gt;
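 The %s in the Resolver URL is replaced with the input SMILES, and the service returns a molfile for the sketcher. A quick way to sanity-check the resolver by hand (benzene used as an arbitrary example input):&lt;br /&gt;
    import urllib.parse&lt;br /&gt;
    import urllib.request&lt;br /&gt;
    &lt;br /&gt;
    smi = &amp;quot;c1ccccc1&amp;quot;&lt;br /&gt;
    url = &amp;quot;https://sw.docking.org/util/smi2mol?smi=&amp;quot; + urllib.parse.quote(smi)&lt;br /&gt;
    print(urllib.request.urlopen(url).read().decode())   # should print a molfile&lt;br /&gt;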
 To copy the SMILES into the input box:&lt;br /&gt;
    vim .extract/webapps/ROOT/WEB-INF/static/js/index.js&lt;br /&gt;
    search this: &amp;quot;var e=t.src.smiles()&amp;quot;&lt;br /&gt;
    add this after the semicolon:&lt;br /&gt;
        document.getElementById(&amp;quot;ar_text_input&amp;quot;).value = e;&lt;/div&gt;</summary>
		<author><name>Jgutie11</name></author>
	</entry>
	<entry>
		<id>http://wiki.docking.org/index.php?title=Arthor_Documentation_for_Future_Developer&amp;diff=13521</id>
		<title>Arthor Documentation for Future Developer</title>
		<link rel="alternate" type="text/html" href="http://wiki.docking.org/index.php?title=Arthor_Documentation_for_Future_Developer&amp;diff=13521"/>
		<updated>2021-05-04T00:19:41Z</updated>

		<summary type="html">&lt;p&gt;Jgutie11: /* Setting up Round Table */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Written by Jennifer Young on December 16, 2019. Last edited January 05, 2021&lt;br /&gt;
&lt;br /&gt;
==Install and Set Up on TomCat (Method 1)==&lt;br /&gt;
Arthor ran on n-1-136, which runs CentOS Linux release 7.7.1908 (Core).  You can check the version of CentOS with the following command&lt;br /&gt;
     cat /etc/centos-release&lt;br /&gt;
&lt;br /&gt;
Check your current version of Java with the following command:&lt;br /&gt;
    java -version&lt;br /&gt;
&lt;br /&gt;
On n-1-136 we are running openjdk version &amp;quot;1.8.0_222&amp;quot;, OpenJDK Runtime Environment (build 1.8.0_222-b10), and OpenJDK 64-Bit Server VM (build 25.222-b10, mixed mode)&lt;br /&gt;
If Java is not installed, install it using yum&lt;br /&gt;
&lt;br /&gt;
==See this wiki page for more detailed information about installing Tomcat on our cluster==&lt;br /&gt;
http://wiki.docking.org/index.php/Tomcat_Installation&lt;br /&gt;
&lt;br /&gt;
==Open port for Arthor==&lt;br /&gt;
In order for Arthor to be usable in the browser, the port you wish to run it on must be opened.&lt;br /&gt;
https://www.thegeekdiary.com/how-to-open-a-ports-in-centos-rhel-7/&lt;br /&gt;
&lt;br /&gt;
===Step 1: Check Port Status===&lt;br /&gt;
Check that the port is not open and that Apache is not showing that port. &lt;br /&gt;
    netstat -na | grep &amp;lt;port number you are checking&amp;gt;&lt;br /&gt;
&lt;br /&gt;
    lsof -i -P |grep http&lt;br /&gt;
&lt;br /&gt;
===Step 2: Check Port Status in IP Tables===&lt;br /&gt;
    iptables-save | grep &amp;lt;port number you are checking&amp;gt;&lt;br /&gt;
&lt;br /&gt;
I skipped Step 3 from the guide, because there was a lot of information in the /etc/services file and I didn&#039;t want to edit it and break something.&lt;br /&gt;
&lt;br /&gt;
===Step 4: Open Firewall Ports===&lt;br /&gt;
I did not include the zone=public section because the stand-alone servers are usually used for private instances of Arthor and SmallWorld.&lt;br /&gt;
Run as root.&lt;br /&gt;
    firewall-cmd --add-port=&amp;lt;port number you are adding&amp;gt;/tcp --permanent&lt;br /&gt;
&lt;br /&gt;
You need to reload the firewall after a change is made.&lt;br /&gt;
    firewall-cmd --reload&lt;br /&gt;
&lt;br /&gt;
===Step 5: Check that port is working===&lt;br /&gt;
To check that the port is active, run.&lt;br /&gt;
    iptables -nL&lt;br /&gt;
&lt;br /&gt;
You should see something along the lines of: &lt;br /&gt;
    ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:&amp;lt;port number you&#039;re adding&amp;gt; ctstate NEW,UNTRACKED&lt;br /&gt;
&lt;br /&gt;
==How to run standalone Arthor instance==&lt;br /&gt;
&lt;br /&gt;
===Step 1: Use or start a bash shell===&lt;br /&gt;
You can check your default shell using&lt;br /&gt;
    echo $SHELL&lt;br /&gt;
&lt;br /&gt;
If your default shell is csh, use &lt;br /&gt;
    bash&lt;br /&gt;
to start a new bash shell in the current terminal window.  Note that echo $SHELL will show you your default shell regardless of the current shell.&lt;br /&gt;
&lt;br /&gt;
===Step 2: Set your environment variables===&lt;br /&gt;
    export ARTHOR_DIR=/opt/nextmove/arthor/arthor-3.3-centos7&lt;br /&gt;
    export PATH=$ARTHOR_DIR/bin/:$PATH&lt;br /&gt;
&lt;br /&gt;
Make sure the ARTHOR_DIR variable is set to the directory for the latest version of Arthor or whichever version you would like to test.&lt;br /&gt;
The PATH environment variable is needed if you wish to use the Arthor tools from the command line&lt;br /&gt;
&lt;br /&gt;
===Step 3: Run the arthor-server.jar===&lt;br /&gt;
    java -jar /opt/nextmove/arthor/arthor-3.3-centos7/java/arthor.jar --httpPort &amp;lt;your httpPort&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Setting environment variables for an Arthor Server==&lt;br /&gt;
Set the environment variables in the setenv.sh file.  Note: Be sure to edit the file in the directory corresponding to the latest version of TomCat.  As of December 2019, we are running 9.0.27 on n-1-136.&lt;br /&gt;
&lt;br /&gt;
   vim  /opt/tomcat/apache-tomcat-9.0.27/bin/setenv.sh&lt;br /&gt;
&lt;br /&gt;
Add the line below to the setenv.sh file above, or substitute the path to wherever you currently store the arthor.cfg file&lt;br /&gt;
   export ARTHOR_CONFIG=/usr/local/tomcat/arthor.cfg&lt;br /&gt;
&lt;br /&gt;
Here is an example of the arthor.cfg file:&lt;br /&gt;
   BinDir=/opt/nextmove/arthor/arthor-3.3-centos7/bin&lt;br /&gt;
   DataDir=/local2/arthor_local_8081/&lt;br /&gt;
   MaxConcurrentSearches=6&lt;br /&gt;
   MaxThreadsPerSearch=8&lt;br /&gt;
   AutomaticIndex=false&lt;br /&gt;
   AsyncHitCountMax=1000000&lt;br /&gt;
   Resolver=https://sw.docking.org/util/smi2mol?smi=%s&lt;br /&gt;
&lt;br /&gt;
=== Configuration Details ===&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;BinDir&#039;&#039;&#039;: is the location of the Arthor command line binaries.  These are used to generate the Arthor index files and to perform searches directly on n-1-136.  An example of this would be using atdbgrep for substructure search. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;DataDir&#039;&#039;&#039;: This is the directory where the Arthor data files live.  Location where the index files will be created and loaded from.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;MaxConcurrentSearches&#039;&#039;&#039;: Controls the maximum number of searches that can be run concurrently by setting the database pool size. When switching between a large number of databases it can be useful to have a larger pool size, the only trade off is keeping file pointers open.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;MaxThreadsPerSearch&#039;&#039;&#039;: The number of threads to use for both ATDB and ATFP searches&lt;br /&gt;
&lt;br /&gt;
*Set &#039;&#039;&#039;AutomaticIndex&#039;&#039;&#039; to false if you don&#039;t want new smiles files added to the data directory to be indexed automatically&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;AsyncHitCountMax&#039;&#039;&#039;: The upper-bound for the number of hits to retrieve in background searches.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Resolver&#039;&#039;&#039;: Using Smallworld API, allows input box to take in a SMILE format and automatically draw on the board.&lt;br /&gt;
&lt;br /&gt;
Check Arthor manual for more configuration options.&lt;br /&gt;
&lt;br /&gt;
==Background==&lt;br /&gt;
Before working with Arthor, it is recommended that you familiarize yourself with the Arthor documentation. Some useful pages to look at include 3-5, 22-25 and 33-39. Of course, reading everything would be the best!&lt;br /&gt;
&lt;br /&gt;
==Checking Disk Space Usage==&lt;br /&gt;
Before building arthor indexes, it&#039;s always a good thing to check what percent of the memory is being used. Try to be cautious with how much memory you have left, and make sure to check while building indexes to make sure that you have enough space. To check, run the following command:&lt;br /&gt;
&lt;br /&gt;
   df -h /&amp;lt;directory with disc&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Building Large Databases==&lt;br /&gt;
At the moment, we are building databases of size 500M molecules by merging smile files. There are multiple methods of trying to create large databases, one being merging based off of the same H?? prefix and stopping once the database reaches &amp;gt; 500M molecules (or whatever upperbound you want to use). Here is some python code that simulates this merging process. Essentially the program takes all of the .smi files within an input directory, sorts them lexiographically, and begins merging these .smi files together in order until the size reaches &amp;gt; 500M molecules.&lt;br /&gt;
   &lt;br /&gt;
Feel free to modify it if you think a better method exists.&lt;br /&gt;
   &lt;br /&gt;
   import subprocess&lt;br /&gt;
   import sys&lt;br /&gt;
   import os                                                                                                                                                                           &lt;br /&gt;
   &lt;br /&gt;
   from os import listdir&lt;br /&gt;
   from os.path import isfile, join&lt;br /&gt;
   &lt;br /&gt;
   mypath = &amp;quot;&amp;lt;Path to directory holding .smi files&amp;gt;&amp;quot;&lt;br /&gt;
   onlyfiles = [f for f in listdir(mypath) if isfile(join(mypath, f))]&lt;br /&gt;
   onlyfiles.sort()&lt;br /&gt;
   &lt;br /&gt;
   create_fp = True&lt;br /&gt;
   cur_mols = 0&lt;br /&gt;
   lower_bound = 500000000&lt;br /&gt;
   upper_bound = 600000000&lt;br /&gt;
   files_to_merge = []&lt;br /&gt;
   &lt;br /&gt;
   def merge_files(f_t_m):&lt;br /&gt;
      arr = f_t_m[0].split(&amp;quot;.&amp;quot;)&lt;br /&gt;
      arr2 = f_t_m[len(f_t_m) - 1].split(&amp;quot;.&amp;quot;)&lt;br /&gt;
      file_name_merge = (arr[0] + &amp;quot;_&amp;quot; + arr2[0] + &amp;quot;.smi&amp;quot;)&lt;br /&gt;
      print (&amp;quot;File being created: &amp;quot; + file_name_merge)&lt;br /&gt;
   &lt;br /&gt;
      for file in f_t_m:&lt;br /&gt;
         tmp = file.split(&amp;quot;.&amp;quot;)&lt;br /&gt;
         process = subprocess.Popen(&amp;quot;cat &amp;quot; + join(mypath, file) + &amp;quot; &amp;gt;&amp;gt; &amp;quot; + file_name_merge, shell=True)&lt;br /&gt;
         process.wait()&lt;br /&gt;
   &lt;br /&gt;
   for file in onlyfiles:&lt;br /&gt;
      arr = file.split(&amp;quot;.&amp;quot;)&lt;br /&gt;
   &lt;br /&gt;
      if (arr[len(arr) - 1] == &amp;quot;smi&amp;quot;):&lt;br /&gt;
         print(&amp;quot;Working with &amp;quot; + file)&lt;br /&gt;
         mol = sum(1 for line in open(join(mypath, file)))&lt;br /&gt;
         print(file, mol, cur_mols)&lt;br /&gt;
   &lt;br /&gt;
         if (cur_mols + mol &amp;gt; lower_bound):&lt;br /&gt;
            if (cur_mols + mol &amp;lt; upper_bound):&lt;br /&gt;
               files_to_merge.append(file)&lt;br /&gt;
               merge_files(files_to_merge)&lt;br /&gt;
               cur_mols = 0&lt;br /&gt;
               files_to_merge.clear()&lt;br /&gt;
            else:&lt;br /&gt;
               merge_files(files_to_merge)&lt;br /&gt;
               files_to_merge.clear()&lt;br /&gt;
               files_to_merge.append(file)&lt;br /&gt;
               merge_files(files_to_merge)&lt;br /&gt;
               cur_mols = 0&lt;br /&gt;
               files_to_merge.clear()&lt;br /&gt;
         else:&lt;br /&gt;
            cur_mols += mol&lt;br /&gt;
            files_to_merge.append(file)&lt;br /&gt;
   &lt;br /&gt;
   if (len(files_to_merge) != 0):&lt;br /&gt;
      merge_files(files_to_merge)&lt;br /&gt;
&lt;br /&gt;
==Building Arthor Indexes==&lt;br /&gt;
Once you&#039;ve merged the .smi files together, it&#039;s time to start building the databases themselves. To do this we use the command &lt;br /&gt;
&lt;br /&gt;
   smi2atdb -j 0 -p &amp;lt;The .smi file&amp;gt; &amp;lt;The .atdb&amp;gt; &lt;br /&gt;
&lt;br /&gt;
The flag &amp;quot;-j 0&amp;quot; enables parallel generation and utilizes all available processors to generate the .atdb file. The &amp;quot;-p&amp;quot; flag stores the offset position in the ATDB file. Since we&#039;re building indexes for the Web Application, you must use the &amp;quot;-p&amp;quot; flag when building indexes. Please note that the name of the .smi file should also be the name of the .atdb file. That way, the Web Application knows to use these files together and correctly display the required images. Refer to pages 33-34 in the Arthor documentation for more information.&lt;br /&gt;
&lt;br /&gt;
If there are too many large .smi files and you do not want to manually build each .atdb file, you can use this python script which takes all of the .smi files in the current directory and converts them to .atdb files. Make sure to modify mypath to the directory containing the .smi files. You can change the variable &amp;quot;create_fp&amp;quot; to false if you don&#039;t want to create .atdb.fp files (refer to page 9 in the Arthor documentation).&lt;br /&gt;
&lt;br /&gt;
   import subprocess&lt;br /&gt;
   import sys&lt;br /&gt;
   import os&lt;br /&gt;
   &lt;br /&gt;
   from os import listdir&lt;br /&gt;
   from os.path import isfile, join&lt;br /&gt;
   &lt;br /&gt;
   mypath = &amp;quot;&amp;lt;Path containing the .smi files&amp;quot;&lt;br /&gt;
   onlyfiles = [f for f in listdir(mypath) if isfile(join(mypath, f))]&lt;br /&gt;
   &lt;br /&gt;
   create_fp = True&lt;br /&gt;
   &lt;br /&gt;
   for file in onlyfiles:&lt;br /&gt;
      arr = file.split(&amp;quot;.&amp;quot;)&lt;br /&gt;
   &lt;br /&gt;
      if (arr[len(arr) - 1] == &amp;quot;smi&amp;quot;):&lt;br /&gt;
         process = subprocess.Popen(&amp;quot;/nfs/ex9/work/xyz/psql/arthor-3.3-centos7/bin/smi2atdb -j 0 -p {0} {1}.atdb&amp;quot;.format(join(mypath, file), arr[0]), shell=True)&lt;br /&gt;
         process.wait()&lt;br /&gt;
   &lt;br /&gt;
         print(&amp;quot;SUCCESS! {0}.atdb file was created!&amp;quot;.format(arr[0]))&lt;br /&gt;
   &lt;br /&gt;
         if (create_fp):&lt;br /&gt;
            process = subprocess.Popen(&amp;quot;/nfs/ex9/work/xyz/psql/arthor-3.3-centos7/bin/atdb2fp -j 0 {0}.atdb&amp;quot;.format(arr[0]), shell=True)&lt;br /&gt;
            process.wait()&lt;br /&gt;
      &lt;br /&gt;
            print(&amp;quot;SUCCESS! {0}.atdb.fp file was created!&amp;quot;.format(arr[0]))&lt;br /&gt;
&lt;br /&gt;
==Uploading Indexes to the Web Application==&lt;br /&gt;
One can upload indexes to the Web Application by changing the &amp;quot;DATADIR&amp;quot; variable in the arthor.cfg file to the directory holding the .atdb files. This is already set up on n-1-136 and n-5-34.&lt;br /&gt;
   &lt;br /&gt;
==Virtual Memory==&lt;br /&gt;
In addition to modifying the arthor.cfg file, virtual memory can also be used to make queries faster. There can still be More information can be found in pages 10-16 in the Arthor Documentation.&lt;br /&gt;
&lt;br /&gt;
==Setting up Round Table==&lt;br /&gt;
This is a new feature in Arthor 3.0 and is currently beta (January 2020). See section 2.4 in the manual&lt;br /&gt;
As explained in the manual, &amp;quot;Round Table allows you to serve and split chemical searches across multiple host machines.  The implementation provides a lightweight proxy that forwards requests to other Arthor host servers that do the actual search.  Communication is done using the existing Web APIs.&lt;br /&gt;
&lt;br /&gt;
Since Arthor requires CentOS 7, as of January 2020 we have 6 servers that are capable of running Arthor with Round Table.&lt;br /&gt;
&lt;br /&gt;
===Setting up Host Server===&lt;br /&gt;
If we want to add machines to the Round Table, for example &#039;nun&#039; and &#039;samekh&#039;, we need to edit their arthor.cfg file so that when our Local Machine passes commands these secondary servers know to perform the search they are given.&lt;br /&gt;
&lt;br /&gt;
   $ cat arthor.cfg&lt;br /&gt;
   MaxThreadsPerSearch=4 &lt;br /&gt;
   AutomaticIndex=false &lt;br /&gt;
   DATADIR=&amp;lt;Directory where smiles are located&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We then run the jar server on each of these host machines containing data on any available port. &lt;br /&gt;
&lt;br /&gt;
   java -jar /nfs/ex9/work/xyz/psql/arthor-3.3-centos7/java/arthor.jar --httpPort &amp;lt;port&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For our local machine, the arthor.cfg file will look different.&lt;br /&gt;
&lt;br /&gt;
   $ cat arthor.cfg&lt;br /&gt;
   [RoundTable] &lt;br /&gt;
   RemoteClient=http://skynet:&amp;lt;port number where jar server is running&amp;gt;/ &lt;br /&gt;
   RemoteClient=http://hal:&amp;lt;port number where jar server is running&amp;gt;/&lt;br /&gt;
&lt;br /&gt;
Please refer to Section 2 in the RoundTable Documentation file (pages 6-8) for more useful information on configuration.&lt;br /&gt;
&lt;br /&gt;
Then run the following command on n-1-136:&lt;br /&gt;
 &lt;br /&gt;
   java -jar /nfs/ex9/work/xyz/psql/arthor-3.3-centos7/java/arthor.jar --httpPort &amp;lt;port&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===&#039;&#039;&#039;***Arthor configs and frontend code are located in /nfs/exc/arthor_configs/***&#039;&#039;&#039;===&lt;br /&gt;
&lt;br /&gt;
===Public Arthor===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! CentOS 7 Machine&lt;br /&gt;
! Port&lt;br /&gt;
! Total Files Size&lt;br /&gt;
! Round Table Data Directory&lt;br /&gt;
! Active&lt;br /&gt;
|-&lt;br /&gt;
| samekh&lt;br /&gt;
| 10.20.0.41:8000&lt;br /&gt;
| 2.4TB&lt;br /&gt;
| /local2/public_arthor/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| nun&lt;br /&gt;
| 10.20.0.40:8000&lt;br /&gt;
| 2.4TB&lt;br /&gt;
| /local2/public_arthor/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| n-9-22&lt;br /&gt;
| 10.20.9.22:8000&lt;br /&gt;
| 2.4TB&lt;br /&gt;
| /export/db4/public_arthor/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Arthor Round Table Head===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! CentOS 7 Machine&lt;br /&gt;
! Port&lt;br /&gt;
! Database&lt;br /&gt;
! Round Table Data Directory&lt;br /&gt;
! Active&lt;br /&gt;
|-&lt;br /&gt;
| samekh&lt;br /&gt;
| 10.20.0.41:8080&lt;br /&gt;
| REAL-Space-21Q1(private)&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| nun&lt;br /&gt;
| 10.20.0.40:8080&lt;br /&gt;
| REAL_Space_21Q2(super private)&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Arthor Round Table Nodes===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! CentOS 7 Machine&lt;br /&gt;
! Port&lt;br /&gt;
! Database&lt;br /&gt;
! Total Files Size&lt;br /&gt;
! Round Table Data Directory&lt;br /&gt;
!Active&lt;br /&gt;
|-&lt;br /&gt;
| samekh&lt;br /&gt;
| 10.20.0.41:8008&lt;br /&gt;
| REAL-Space-21Q1-M, REAL-Space-21Q1-S, carboxylates_21Q2&lt;br /&gt;
| 4.0TB&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| nun&lt;br /&gt;
| 10.20.0.40:8008&lt;br /&gt;
| 345_acids_20Q4, REAL_Space_21Q2-M, REAL_Space_21Q2-S, acids_21Q2, zinc_21Q1_H01~H25&lt;br /&gt;
| 3.1TB&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| n-1-17&lt;br /&gt;
| 10.20.1.17:8008&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| n-5-32&lt;br /&gt;
| 10.20.5.32:8008&lt;br /&gt;
| zinc_21Q1_H26~H30&lt;br /&gt;
| (TBA)&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| n-5-33&lt;br /&gt;
| 10.20.5.33:8008&lt;br /&gt;
| acids_21Q2&lt;br /&gt;
| 9.1GB&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Arthor Local 8081 (Datasets all local to samekh/nun)===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! CentOS 7 Machine&lt;br /&gt;
! Port&lt;br /&gt;
! Database&lt;br /&gt;
! Total Files Size&lt;br /&gt;
! Round Table Data Directory&lt;br /&gt;
!Active&lt;br /&gt;
|-&lt;br /&gt;
| samekh&lt;br /&gt;
| 10.20.0.41:8081&lt;br /&gt;
| Enamine_REAL_Q2-2020-All-13B (26 slices)&lt;br /&gt;
| 4.5TB&lt;br /&gt;
| /local2/arthor_local_8081/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| nun&lt;br /&gt;
| 10.20.0.40:8081&lt;br /&gt;
| Enamine_REAL_Space_June_2020_S41B (aa-ae, 5 slices), Enamine_REAL_Space_June_2020_M41B (aa-an, 14 slices)&lt;br /&gt;
| 4.3TB&lt;br /&gt;
| /local2/arthor_local_8081/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Customizing Arthor Frontend to our needs ==&lt;br /&gt;
The frontend Arthor code is located at &#039;&#039;&#039;/nfs/exc/arthor_configs/*&#039;&#039;&#039; and the &#039;&#039;&#039;*&#039;&#039;&#039; is based on current running version.&lt;br /&gt;
=== Add Arthor Download Options ===&lt;br /&gt;
==== For Arthor 3.4: ====&lt;br /&gt;
1. vim .extract/webapps/ROOT/WEB-INF/static/index.html&lt;br /&gt;
&lt;br /&gt;
2. search: &#039;&#039;&#039;arthor_tsv_link&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
3. in the div with the class=”dropdown-content”, add these link options and change the number accordingly:&lt;br /&gt;
&lt;br /&gt;
               &amp;lt;a id=&amp;quot;arthor_tsv_link&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt; TSV-500&amp;lt;/a&amp;gt;&lt;br /&gt;
               &amp;lt;a id=&amp;quot;arthor_tsv_link_5000&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt; TSV-5,000&amp;lt;/a&amp;gt;&lt;br /&gt;
               &amp;lt;a id=&amp;quot;arthor_tsv_link_50000&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt; TSV-50,000&amp;lt;/a&amp;gt;&lt;br /&gt;
               &amp;lt;a id=&amp;quot;arthor_tsv_link_100000&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt; TSV-100,000&amp;lt;/a&amp;gt;&lt;br /&gt;
               &amp;lt;a id=&amp;quot;arthor_tsv_link_max&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt; TSV-max&amp;lt;/a&amp;gt;&lt;br /&gt;
               &amp;lt;a id=&amp;quot;arthor_csv_link&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt; CSV-500&amp;lt;/a&amp;gt;&lt;br /&gt;
               &amp;lt;a id=&amp;quot;arthor_csv_link_5000&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt; CSV-5,000&amp;lt;/a&amp;gt;&lt;br /&gt;
               &amp;lt;a id=&amp;quot;arthor_csv_link_50000&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt; CSV-50,000&amp;lt;/a&amp;gt;&lt;br /&gt;
               &amp;lt;a id=&amp;quot;arthor_csv_link_100000&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt; CSV-100,000&amp;lt;/a&amp;gt;&lt;br /&gt;
               &amp;lt;a id=&amp;quot;arthor_csv_link_max&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt; CSV-max&amp;lt;/a&amp;gt;&lt;br /&gt;
               &amp;lt;a id=&amp;quot;arthor_sdf_link&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt; SDF-500&amp;lt;/a&amp;gt;&lt;br /&gt;
               &amp;lt;a id=&amp;quot;arthor_sdf_link_5000&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt; SDF-5,000&amp;lt;/a&amp;gt;&lt;br /&gt;
               &amp;lt;a id=&amp;quot;arthor_sdf_link_50000&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt; SDF-50,000&amp;lt;/a&amp;gt;&lt;br /&gt;
               &amp;lt;a id=&amp;quot;arthor_sdf_link_100000&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt; SDF-100,000&amp;lt;/a&amp;gt;&lt;br /&gt;
               &amp;lt;a id=&amp;quot;arthor_sdf_link_max&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt; SDF-max&amp;lt;/a&amp;gt;&lt;br /&gt;
&lt;br /&gt;
4. then vim .extract/webapps/ROOT/WEB-INF/static/js/index.js&lt;br /&gt;
&lt;br /&gt;
5. search: &#039;&#039;&#039;function $(t){&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
6. in the function $(t), add these lines:&lt;br /&gt;
&lt;br /&gt;
 if(document.getElementById(&amp;quot;arthor_tsv_link&amp;quot;)) {&lt;br /&gt;
        var e=i.a.param({query:s.b.query,type:s.b.type,draw:0,start:0,length:t,flags:s.b.flags}),n=s.b.url+&amp;quot;/dt/&amp;quot;+E(s.b.table)+&amp;quot;/search&amp;quot;;i()(&amp;quot;#arthor_sdf_link&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.sdf?&amp;quot;+e),i() (&amp;quot;#arthor_tsv_link&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.tsv?&amp;quot;+e),i()(&amp;quot;#arthor_csv_link&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.csv?&amp;quot;+e)&lt;br /&gt;
 }&lt;br /&gt;
 if (document.getElementById(&amp;quot;arthor_tsv_link_5000&amp;quot;)) {&lt;br /&gt;
        var e=i.a.param({query:s.b.query,type:s.b.type,draw:0,start:0,length:5000,flags:s.b.flags}),n=s.b.url+&amp;quot;/dt/&amp;quot;+E(s.b.table)+&amp;quot;/search&amp;quot;;i()(&amp;quot;#arthor_sdf_link_5000&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.sdf?&amp;quot;+e),i()(&amp;quot;#arthor_tsv_link_5000&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.tsv?&amp;quot;+e),i()(&amp;quot;#arthor_csv_link_5000&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.csv?&amp;quot;+e)&lt;br /&gt;
 }&lt;br /&gt;
 if (document.getElementById(&amp;quot;arthor_tsv_link_50000&amp;quot;)) {&lt;br /&gt;
        var e=i.a.param({query:s.b.query,type:s.b.type,draw:0,start:0,length:50000,flags:s.b.flags}),n=s.b.url+&amp;quot;/dt/&amp;quot;+E(s.b.table)+&amp;quot;/search&amp;quot;;i()(&amp;quot;#arthor_sdf_link_50000&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.sdf?&amp;quot;+e),i()(&amp;quot;#arthor_tsv_link_50000&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.tsv?&amp;quot;+e),i()(&amp;quot;#arthor_csv_link_50000&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.csv?&amp;quot;+e)&lt;br /&gt;
 }&lt;br /&gt;
 if (document.getElementById(&amp;quot;arthor_tsv_link_100000&amp;quot;)) {&lt;br /&gt;
        var e=i.a.param({query:s.b.query,type:s.b.type,draw:0,start:0,length:100000,flags:s.b.flags}),n=s.b.url+&amp;quot;/dt/&amp;quot;+E(s.b.table)+&amp;quot;/search&amp;quot;;i()(&amp;quot;#arthor_sdf_link_100000&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.sdf?&amp;quot;+e),i()(&amp;quot;#arthor_tsv_link_100000&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.tsv?&amp;quot;+e),i()(&amp;quot;#arthor_csv_link_100000&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.csv?&amp;quot;+e)&lt;br /&gt;
 }&lt;br /&gt;
 if (document.getElementById(&amp;quot;arthor_tsv_link_max&amp;quot;)) {&lt;br /&gt;
        var e=i.a.param({query:s.b.query,type:s.b.type,draw:0,start:0,length:1000000000,flags:s.b.flags}),n=s.b.url+&amp;quot;/dt/&amp;quot;+E(s.b.table)+&amp;quot;/search&amp;quot;;i()(&amp;quot;#arthor_sdf_link_max&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.sdf?&amp;quot;+e),i()(&amp;quot;#arthor_tsv_link_max&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.tsv?&amp;quot;+e),i()(&amp;quot;#arthor_csv_link_max&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.csv?&amp;quot;+e)&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
=== Take out Similarity Button ===&lt;br /&gt;
 vim .extract/webapps/ROOT/WEB-INF/static/index.html&lt;br /&gt;
 search: &#039;&#039;&#039;Similarity&#039;&#039;&#039;&lt;br /&gt;
 Comment out this line &#039;&#039;&#039;&amp;lt; li value=&amp;quot;Similarity&amp;quot; onclick=&amp;quot;setSearchType(this)&amp;quot; class=&amp;quot;first&amp;quot;&amp;gt; Similarity &amp;lt;/li &amp;gt;&#039;&#039;&#039; // spaces added at the beginning and end to prevent the wiki from converting it&lt;br /&gt;
 Then add &amp;quot;first&amp;quot; to Substructure&#039;s class&lt;br /&gt;
=== Hyperlink to zinc20 ===&lt;br /&gt;
 vim .extract/webapps/ROOT/WEB-INF/static/js/index.js&lt;br /&gt;
 search: &#039;&#039;&#039;table_name&#039;&#039;&#039;&lt;br /&gt;
 *find this line &amp;quot;&amp;lt; b&amp;gt;&amp;quot; + d + &amp;quot;&amp;lt; /b&amp;gt;&amp;quot;&lt;br /&gt;
 *replace with &#039;&#039;&#039;&amp;quot;&amp;lt; b&amp;gt;&amp;lt;a target=&#039;_blank&#039; href=&#039;https://zinc20.docking.org/substances/&amp;quot;+d+&amp;quot;&#039;&amp;gt;&amp;quot; + d + &amp;quot;&amp;lt;/a&amp;gt;&amp;lt;/b &amp;gt;&amp;quot;&#039;&#039;&#039; // spaces added at the beginning and end to prevent the wiki from converting it&lt;br /&gt;
&lt;br /&gt;
=== Make Input Box Work ===&lt;br /&gt;
 At the end of the Arthor config file add this:&lt;br /&gt;
    Resolver=https://sw.docking.org/util/smi2mol?smi=%s&lt;br /&gt;
 To copy the SMILES into the input box:&lt;br /&gt;
    vim .extract/webapps/ROOT/WEB-INF/static/js/index.js&lt;br /&gt;
    search this: “var e=t.src.smiles()”&lt;br /&gt;
    add this after the semicolon:&lt;br /&gt;
        document.getElementById(&amp;quot;ar_text_input&amp;quot;).value = e;&lt;/div&gt;</summary>
		<author><name>Jgutie11</name></author>
	</entry>
	<entry>
		<id>http://wiki.docking.org/index.php?title=Arthor_Documentation_for_Future_Developer&amp;diff=13520</id>
		<title>Arthor Documentation for Future Developer</title>
		<link rel="alternate" type="text/html" href="http://wiki.docking.org/index.php?title=Arthor_Documentation_for_Future_Developer&amp;diff=13520"/>
		<updated>2021-05-03T23:58:15Z</updated>

		<summary type="html">&lt;p&gt;Jgutie11: /* Public Arthor */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Written by Jennifer Young on December 16, 2019. Last edited January 05, 2021&lt;br /&gt;
&lt;br /&gt;
==Install and Set Up on Tomcat (Method 1)==&lt;br /&gt;
Arthor ran on n-1-136, which runs CentOS Linux release 7.7.1908 (Core).  You can check the version of CentOS with the following command&lt;br /&gt;
     cat /etc/centos-release&lt;br /&gt;
&lt;br /&gt;
Check your current version of Java with the following command:&lt;br /&gt;
    java -version&lt;br /&gt;
&lt;br /&gt;
On n-1-136 we are running openjdk version &amp;quot;1.8.0_222&amp;quot;, OpenJDK Runtime Environment (build 1.8.0_222-b10), and OpenJDK 64-Bit Server VM (build 25.222-b10, mixed mode).&lt;br /&gt;
If Java is not installed, install it using yum.&lt;br /&gt;
&lt;br /&gt;
==See this wiki page for more detailed information about installing Tomcat on our cluster==&lt;br /&gt;
http://wiki.docking.org/index.php/Tomcat_Installation&lt;br /&gt;
&lt;br /&gt;
==Open port for Arthor==&lt;br /&gt;
In order for Arthor to be usable in the browser, the port you wish to run it on must be opened.&lt;br /&gt;
https://www.thegeekdiary.com/how-to-open-a-ports-in-centos-rhel-7/&lt;br /&gt;
&lt;br /&gt;
===Step 1: Check Port Status===&lt;br /&gt;
Check that the port is not already in use and that Apache is not listening on it. &lt;br /&gt;
    netstat -na | grep &amp;lt;port number you are checking&amp;gt;&lt;br /&gt;
&lt;br /&gt;
    lsof -i -P |grep http&lt;br /&gt;
&lt;br /&gt;
===Step 2: Check Port Status in IP Tables===&lt;br /&gt;
    iptables-save | grep &amp;lt;port number you are checking&amp;gt;&lt;br /&gt;
&lt;br /&gt;
I skipped Step 3 from the guide, because there was a lot of information in the /etc/services file and I didn&#039;t want to edit it and break something.&lt;br /&gt;
&lt;br /&gt;
===Step 4: Open Firewall Ports===&lt;br /&gt;
I did not include the zone=public section because the stand-alone servers are usually used for private instances of Arthor and SmallWorld.&lt;br /&gt;
Run as root.&lt;br /&gt;
    firewall-cmd --add-port=&amp;lt;port number you are adding&amp;gt;/tcp --permanent&lt;br /&gt;
&lt;br /&gt;
You need to reload the firewall after a change is made.&lt;br /&gt;
    firewall-cmd --reload&lt;br /&gt;
&lt;br /&gt;
===Step 5: Check that port is working===&lt;br /&gt;
To check that the port is active, run:&lt;br /&gt;
    iptables -nL&lt;br /&gt;
&lt;br /&gt;
You should see something along the lines of: &lt;br /&gt;
    ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:&amp;lt;port number you&#039;re adding&amp;gt; ctstate NEW,UNTRACKED&lt;br /&gt;
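&lt;br /&gt;
If you prefer to test from Python, here is a minimal sketch that simply attempts a TCP connection to the opened port; the host name and port below are placeholders for your own server, and the connection will only succeed once a service is actually listening on that port:&lt;br /&gt;
&lt;br /&gt;
   import socket&lt;br /&gt;
   &lt;br /&gt;
   # Placeholders: substitute your own server and port.&lt;br /&gt;
   host, port = &amp;quot;n-1-136&amp;quot;, 8080&lt;br /&gt;
   &lt;br /&gt;
   s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)&lt;br /&gt;
   s.settimeout(5)&lt;br /&gt;
   try:&lt;br /&gt;
       s.connect((host, port))&lt;br /&gt;
       print(&amp;quot;Port {0} is accepting connections&amp;quot;.format(port))&lt;br /&gt;
   except (socket.error, socket.timeout) as exc:&lt;br /&gt;
       print(&amp;quot;Port {0} is not reachable: {1}&amp;quot;.format(port, exc))&lt;br /&gt;
   finally:&lt;br /&gt;
       s.close()&lt;br /&gt;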
&lt;br /&gt;
==How to run standalone Arthor instance==&lt;br /&gt;
&lt;br /&gt;
===Step 1: Use or start a bash shell===&lt;br /&gt;
You can check your default shell using&lt;br /&gt;
    echo $SHELL&lt;br /&gt;
&lt;br /&gt;
If your default shell is csh, use &lt;br /&gt;
    bash&lt;br /&gt;
to start a new bash shell in the current terminal window.  Note that echo $SHELL will show you your default shell regardless of the current shell.&lt;br /&gt;
&lt;br /&gt;
===Step 2: Set your environment variables===&lt;br /&gt;
    export ARTHOR_DIR=/opt/nextmove/arthor/arthor-3.3-centos7&lt;br /&gt;
    export PATH=$ARTHOR_DIR/bin/:$PATH&lt;br /&gt;
&lt;br /&gt;
Make sure the ARTHOR_DIR variable is set to the directory of the latest version of Arthor, or whichever version you would like to test.&lt;br /&gt;
The PATH environment variable is needed if you wish to use the Arthor tools from the command line.&lt;br /&gt;
&lt;br /&gt;
===Step 3: Run arthor.jar===&lt;br /&gt;
    java -jar /opt/nextmove/arthor/arthor-3.3-centos7/java/arthor.jar --httpPort &amp;lt;your httpPort&amp;gt;&lt;br /&gt;
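&lt;br /&gt;
The same launch can be scripted. Below is a minimal Python sketch (the install directory and port are placeholders) that sets the environment from Step 2 and starts the server as in Step 3:&lt;br /&gt;
&lt;br /&gt;
   import os&lt;br /&gt;
   import subprocess&lt;br /&gt;
   &lt;br /&gt;
   # Placeholders: adjust the install directory and port for your setup.&lt;br /&gt;
   env = dict(os.environ)&lt;br /&gt;
   env[&amp;quot;ARTHOR_DIR&amp;quot;] = &amp;quot;/opt/nextmove/arthor/arthor-3.3-centos7&amp;quot;&lt;br /&gt;
   env[&amp;quot;PATH&amp;quot;] = env[&amp;quot;ARTHOR_DIR&amp;quot;] + &amp;quot;/bin:&amp;quot; + env[&amp;quot;PATH&amp;quot;]&lt;br /&gt;
   &lt;br /&gt;
   subprocess.call([&amp;quot;java&amp;quot;, &amp;quot;-jar&amp;quot;,&lt;br /&gt;
                    env[&amp;quot;ARTHOR_DIR&amp;quot;] + &amp;quot;/java/arthor.jar&amp;quot;,&lt;br /&gt;
                    &amp;quot;--httpPort&amp;quot;, &amp;quot;8080&amp;quot;], env=env)&lt;br /&gt;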
&lt;br /&gt;
==Setting environment variables for an Arthor Server==&lt;br /&gt;
Set the environment variables in the setenv.sh file.  Note: be sure to edit the file in the directory corresponding to the latest version of Tomcat.  As of December 2019, we are running 9.0.27 on n-1-136.&lt;br /&gt;
&lt;br /&gt;
   vim  /opt/tomcat/apache-tomcat-9.0.27/bin/setenv.sh&lt;br /&gt;
&lt;br /&gt;
Add the line below to the setenv.sh file above, adjusting the path to wherever you currently store the arthor.cfg file:&lt;br /&gt;
   export ARTHOR_CONFIG=/usr/local/tomcat/arthor.cfg&lt;br /&gt;
&lt;br /&gt;
Here is an example of the arthor.cfg file:&lt;br /&gt;
   BinDir=/opt/nextmove/arthor/arthor-3.3-centos7/bin&lt;br /&gt;
   DataDir=/local2/arthor_local_8081/&lt;br /&gt;
   MaxConcurrentSearches=6&lt;br /&gt;
   MaxThreadsPerSearch=8&lt;br /&gt;
   AutomaticIndex=false&lt;br /&gt;
   AsyncHitCountMax=1000000&lt;br /&gt;
   Resolver=https://sw.docking.org/util/smi2mol?smi=%s&lt;br /&gt;
&lt;br /&gt;
=== Configuration Details ===&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;BinDir&#039;&#039;&#039;: the location of the Arthor command-line binaries.  These are used to generate the Arthor index files and to perform searches directly on n-1-136, for example using atdbgrep for substructure search. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;DataDir&#039;&#039;&#039;: the directory where the Arthor data files live; this is where the index files are created and loaded from.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;MaxConcurrentSearches&#039;&#039;&#039;: controls the maximum number of searches that can run concurrently by setting the database pool size. When switching between a large number of databases it can be useful to have a larger pool size; the only trade-off is keeping file pointers open.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;MaxThreadsPerSearch&#039;&#039;&#039;: the number of threads to use for both ATDB and ATFP searches.&lt;br /&gt;
&lt;br /&gt;
*Set &#039;&#039;&#039;AutomaticIndex&#039;&#039;&#039; to false if you don&#039;t want new SMILES files added to the data directory to be indexed automatically.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;AsyncHitCountMax&#039;&#039;&#039;: the upper bound on the number of hits to retrieve in background searches.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Resolver&#039;&#039;&#039;: uses the SmallWorld API to let the input box accept a SMILES string and automatically draw the molecule on the sketcher.&lt;br /&gt;
&lt;br /&gt;
Check the Arthor manual for more configuration options.&lt;br /&gt;
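&lt;br /&gt;
Since the file is a simple Key=Value list (plus optional [Section] headers), a short Python sketch can load it and sanity-check the configured directories; the config path below is a placeholder:&lt;br /&gt;
&lt;br /&gt;
   import os&lt;br /&gt;
   &lt;br /&gt;
   config = {}&lt;br /&gt;
   with open(&amp;quot;/usr/local/tomcat/arthor.cfg&amp;quot;) as fh:   # placeholder path&lt;br /&gt;
       for line in fh:&lt;br /&gt;
           line = line.strip()&lt;br /&gt;
           # Skip blanks and [Section] headers; keep Key=Value pairs.&lt;br /&gt;
           if line and not line.startswith(&amp;quot;[&amp;quot;) and &amp;quot;=&amp;quot; in line:&lt;br /&gt;
               key, value = line.split(&amp;quot;=&amp;quot;, 1)&lt;br /&gt;
               config[key] = value&lt;br /&gt;
   &lt;br /&gt;
   for key in (&amp;quot;BinDir&amp;quot;, &amp;quot;DataDir&amp;quot;):&lt;br /&gt;
       if key in config and not os.path.isdir(config[key]):&lt;br /&gt;
           print(&amp;quot;Warning: {0}={1} does not exist&amp;quot;.format(key, config[key]))&lt;br /&gt;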
&lt;br /&gt;
==Background==&lt;br /&gt;
Before working with Arthor, it is recommended that you familiarize yourself with the Arthor documentation. Useful pages include 3-5, 22-25, and 33-39; of course, reading everything would be best!&lt;br /&gt;
&lt;br /&gt;
==Checking Disk Space Usage==&lt;br /&gt;
Before building Arthor indexes, it&#039;s always a good idea to check what percentage of the disk is in use. Be cautious with how much space you have left, and keep checking while the indexes build to make sure you still have enough. To check, run the following command:&lt;br /&gt;
&lt;br /&gt;
   df -h /&amp;lt;directory on the disk&amp;gt;&lt;br /&gt;
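&lt;br /&gt;
The same check can be done from Python (3.3 or newer), for example to make an indexing script stop early when a disk is nearly full; the directory below is a placeholder:&lt;br /&gt;
&lt;br /&gt;
   import shutil&lt;br /&gt;
   &lt;br /&gt;
   usage = shutil.disk_usage(&amp;quot;/local2&amp;quot;)   # placeholder directory&lt;br /&gt;
   percent_used = 100.0 * usage.used / usage.total&lt;br /&gt;
   print(&amp;quot;{0:.1f}% used, {1:.1f} GB free&amp;quot;.format(percent_used, usage.free / 1e9))&lt;br /&gt;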
&lt;br /&gt;
==Building Large Databases==&lt;br /&gt;
At the moment, we are building databases of roughly 500M molecules by merging SMILES files. There are multiple ways to create a large database; one is to merge files that share the same H?? prefix, stopping once the database exceeds 500M molecules (or whatever upper bound you want to use). Here is some Python code that implements this merging process: it takes all of the .smi files within an input directory, sorts them lexicographically, and merges them together in order until the size exceeds 500M molecules.&lt;br /&gt;
   &lt;br /&gt;
Feel free to modify it if you think a better method exists.&lt;br /&gt;
   &lt;br /&gt;
   # Merge the .smi files in mypath, in lexicographic order, into chunks&lt;br /&gt;
   # of roughly lower_bound to upper_bound molecules each.  The merged&lt;br /&gt;
   # files are written to the current working directory.&lt;br /&gt;
   import subprocess&lt;br /&gt;
   &lt;br /&gt;
   from os import listdir&lt;br /&gt;
   from os.path import isfile, join&lt;br /&gt;
   &lt;br /&gt;
   mypath = &amp;quot;&amp;lt;Path to directory holding .smi files&amp;gt;&amp;quot;&lt;br /&gt;
   onlyfiles = [f for f in listdir(mypath) if isfile(join(mypath, f))]&lt;br /&gt;
   onlyfiles.sort()&lt;br /&gt;
   &lt;br /&gt;
   cur_mols = 0                # molecules accumulated so far&lt;br /&gt;
   lower_bound = 500000000     # merge once a chunk would exceed this&lt;br /&gt;
   upper_bound = 600000000     # never let a single chunk exceed this&lt;br /&gt;
   files_to_merge = []&lt;br /&gt;
   &lt;br /&gt;
   def merge_files(f_t_m):&lt;br /&gt;
      # Name the merged file after the first and last inputs, then&lt;br /&gt;
      # concatenate every input into it.&lt;br /&gt;
      arr = f_t_m[0].split(&amp;quot;.&amp;quot;)&lt;br /&gt;
      arr2 = f_t_m[len(f_t_m) - 1].split(&amp;quot;.&amp;quot;)&lt;br /&gt;
      file_name_merge = (arr[0] + &amp;quot;_&amp;quot; + arr2[0] + &amp;quot;.smi&amp;quot;)&lt;br /&gt;
      print(&amp;quot;File being created: &amp;quot; + file_name_merge)&lt;br /&gt;
   &lt;br /&gt;
      for file in f_t_m:&lt;br /&gt;
         process = subprocess.Popen(&amp;quot;cat &amp;quot; + join(mypath, file) + &amp;quot; &amp;gt;&amp;gt; &amp;quot; + file_name_merge, shell=True)&lt;br /&gt;
         process.wait()&lt;br /&gt;
   &lt;br /&gt;
   for file in onlyfiles:&lt;br /&gt;
      arr = file.split(&amp;quot;.&amp;quot;)&lt;br /&gt;
   &lt;br /&gt;
      if (arr[len(arr) - 1] == &amp;quot;smi&amp;quot;):&lt;br /&gt;
         print(&amp;quot;Working with &amp;quot; + file)&lt;br /&gt;
         # One molecule per line, so counting lines counts molecules.&lt;br /&gt;
         mol = sum(1 for line in open(join(mypath, file)))&lt;br /&gt;
         print(file, mol, cur_mols)&lt;br /&gt;
   &lt;br /&gt;
         if (cur_mols + mol &amp;gt; lower_bound):&lt;br /&gt;
            if (cur_mols + mol &amp;lt; upper_bound):&lt;br /&gt;
               # This file lands the chunk in range: merge it in.&lt;br /&gt;
               files_to_merge.append(file)&lt;br /&gt;
               merge_files(files_to_merge)&lt;br /&gt;
               cur_mols = 0&lt;br /&gt;
               files_to_merge = []&lt;br /&gt;
            else:&lt;br /&gt;
               # This file would overshoot: merge what we have so far,&lt;br /&gt;
               # then merge this (large) file on its own.&lt;br /&gt;
               merge_files(files_to_merge)&lt;br /&gt;
               files_to_merge = [file]&lt;br /&gt;
               merge_files(files_to_merge)&lt;br /&gt;
               cur_mols = 0&lt;br /&gt;
               files_to_merge = []&lt;br /&gt;
         else:&lt;br /&gt;
            cur_mols += mol&lt;br /&gt;
            files_to_merge.append(file)&lt;br /&gt;
   &lt;br /&gt;
   # Merge whatever is left over at the end.&lt;br /&gt;
   if (len(files_to_merge) != 0):&lt;br /&gt;
      merge_files(files_to_merge)&lt;br /&gt;
&lt;br /&gt;
==Building Arthor Indexes==&lt;br /&gt;
Once you&#039;ve merged the .smi files together, it&#039;s time to start building the databases themselves. To do this we use the command &lt;br /&gt;
&lt;br /&gt;
   smi2atdb -j 0 -p &amp;lt;The .smi file&amp;gt; &amp;lt;The .atdb&amp;gt; &lt;br /&gt;
&lt;br /&gt;
The flag &amp;quot;-j 0&amp;quot; enables parallel generation and utilizes all available processors to generate the .atdb file. The &amp;quot;-p&amp;quot; flag stores the offset position in the ATDB file. Since we&#039;re building indexes for the Web Application, you must use the &amp;quot;-p&amp;quot; flag when building indexes. Please note that the name of the .smi file should also be the name of the .atdb file. That way, the Web Application knows to use these files together and correctly display the required images. Refer to pages 33-34 in the Arthor documentation for more information.&lt;br /&gt;
&lt;br /&gt;
If there are too many large .smi files and you do not want to build each .atdb file manually, you can use this Python script, which takes all of the .smi files in a directory and converts them to .atdb files. Make sure to set mypath to the directory containing the .smi files. You can change the variable &amp;quot;create_fp&amp;quot; to False if you don&#039;t want to create .atdb.fp files (refer to page 9 in the Arthor documentation).&lt;br /&gt;
&lt;br /&gt;
   # Convert every .smi file in mypath into an Arthor .atdb index&lt;br /&gt;
   # (and, optionally, a matching .atdb.fp fingerprint file).&lt;br /&gt;
   import subprocess&lt;br /&gt;
   &lt;br /&gt;
   from os import listdir&lt;br /&gt;
   from os.path import isfile, join&lt;br /&gt;
   &lt;br /&gt;
   mypath = &amp;quot;&amp;lt;Path containing the .smi files&amp;gt;&amp;quot;&lt;br /&gt;
   onlyfiles = [f for f in listdir(mypath) if isfile(join(mypath, f))]&lt;br /&gt;
   &lt;br /&gt;
   create_fp = True   # set to False to skip the .atdb.fp files&lt;br /&gt;
   &lt;br /&gt;
   for file in onlyfiles:&lt;br /&gt;
      arr = file.split(&amp;quot;.&amp;quot;)&lt;br /&gt;
   &lt;br /&gt;
      if (arr[len(arr) - 1] == &amp;quot;smi&amp;quot;):&lt;br /&gt;
         # -j 0 uses all processors; -p stores offset positions, which&lt;br /&gt;
         # the Web Application requires.  The .atdb is named after the .smi.&lt;br /&gt;
         process = subprocess.Popen(&amp;quot;/nfs/ex9/work/xyz/psql/arthor-3.3-centos7/bin/smi2atdb -j 0 -p {0} {1}.atdb&amp;quot;.format(join(mypath, file), arr[0]), shell=True)&lt;br /&gt;
         process.wait()&lt;br /&gt;
   &lt;br /&gt;
         print(&amp;quot;SUCCESS! {0}.atdb file was created!&amp;quot;.format(arr[0]))&lt;br /&gt;
   &lt;br /&gt;
         if (create_fp):&lt;br /&gt;
            process = subprocess.Popen(&amp;quot;/nfs/ex9/work/xyz/psql/arthor-3.3-centos7/bin/atdb2fp -j 0 {0}.atdb&amp;quot;.format(arr[0]), shell=True)&lt;br /&gt;
            process.wait()&lt;br /&gt;
   &lt;br /&gt;
            print(&amp;quot;SUCCESS! {0}.atdb.fp file was created!&amp;quot;.format(arr[0]))&lt;br /&gt;
&lt;br /&gt;
==Uploading Indexes to the Web Application==&lt;br /&gt;
One can upload indexes to the Web Application by pointing the &amp;quot;DataDir&amp;quot; variable in the arthor.cfg file at the directory holding the .atdb files. This is already set up on n-1-136 and n-5-34.&lt;br /&gt;
   &lt;br /&gt;
==Virtual Memory==&lt;br /&gt;
In addition to modifying the arthor.cfg file, virtual memory can also be used to make queries faster. More information can be found on pages 10-16 of the Arthor documentation.&lt;br /&gt;
&lt;br /&gt;
==Setting up Round Table==&lt;br /&gt;
This is a new feature in Arthor 3.0 and is currently in beta (January 2020). See section 2.4 in the manual.&lt;br /&gt;
As explained in the manual, &amp;quot;Round Table allows you to serve and split chemical searches across multiple host machines.  The implementation provides a lightweight proxy that forwards requests to other Arthor host servers that do the actual search.  Communication is done using the existing Web APIs.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Since Arthor requires CentOS 7, as of January 2020 we have 6 servers capable of running Arthor with Round Table.&lt;br /&gt;
&lt;br /&gt;
===Setting up Host Server===&lt;br /&gt;
If we want to add machines to the Round Table, for example &#039;nun&#039; and &#039;samekh&#039;, we need to edit their arthor.cfg files so that when our local machine forwards requests, these secondary servers know to perform the searches they are given.&lt;br /&gt;
&lt;br /&gt;
   $ cat arthor.cfg&lt;br /&gt;
   MaxThreadsPerSearch=4 &lt;br /&gt;
   AutomaticIndex=false &lt;br /&gt;
   DataDir=&amp;lt;Directory where the SMILES files are located&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We then run the jar server on any available port on each of the host machines that hold data. &lt;br /&gt;
&lt;br /&gt;
   java -jar /nfs/ex9/work/xyz/psql/arthor-3.3-centos7/java/arthor.jar --httpPort &amp;lt;port&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For our local machine, the arthor.cfg file will look different.&lt;br /&gt;
&lt;br /&gt;
   $ cat arthor.cfg&lt;br /&gt;
   [RoundTable] &lt;br /&gt;
   RemoteClient=http://nun:&amp;lt;port number where the jar server is running&amp;gt;/ &lt;br /&gt;
   RemoteClient=http://samekh:&amp;lt;port number where the jar server is running&amp;gt;/&lt;br /&gt;
&lt;br /&gt;
Please refer to Section 2 in the RoundTable Documentation file (pages 6-8) for more useful information on configuration.&lt;br /&gt;
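&lt;br /&gt;
If the list of host machines changes often, the head-node config can be generated instead of edited by hand. Here is a minimal Python sketch (host names and ports are placeholders) that writes the [RoundTable] section shown above:&lt;br /&gt;
&lt;br /&gt;
   # Placeholders: list each Arthor host and the port its jar server uses.&lt;br /&gt;
   hosts = [&amp;quot;nun:8008&amp;quot;, &amp;quot;samekh:8008&amp;quot;]&lt;br /&gt;
   &lt;br /&gt;
   with open(&amp;quot;arthor.cfg&amp;quot;, &amp;quot;w&amp;quot;) as fh:&lt;br /&gt;
       fh.write(&amp;quot;[RoundTable]\n&amp;quot;)&lt;br /&gt;
       for h in hosts:&lt;br /&gt;
           fh.write(&amp;quot;RemoteClient=http://{0}/\n&amp;quot;.format(h))&lt;br /&gt;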
&lt;br /&gt;
Then run the following command on n-1-136:&lt;br /&gt;
 &lt;br /&gt;
   java -jar /nfs/ex9/work/xyz/psql/arthor-3.3-centos7/java/arthor.jar --httpPort &amp;lt;port&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Public Arthor===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! CentOS 7 Machine&lt;br /&gt;
! Port&lt;br /&gt;
! Total Files Size&lt;br /&gt;
! Round Table Data Directory&lt;br /&gt;
! Active&lt;br /&gt;
|-&lt;br /&gt;
| samekh&lt;br /&gt;
| 10.20.0.41:8000&lt;br /&gt;
| 2.4TB&lt;br /&gt;
| /local2/public_arthor/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| nun&lt;br /&gt;
| 10.20.0.40:8000&lt;br /&gt;
| 2.4TB&lt;br /&gt;
| /local2/public_arthor/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| n-9-22&lt;br /&gt;
| 10.20.9.22:8000&lt;br /&gt;
| 2.4TB&lt;br /&gt;
| /export/db4/public_arthor/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Arthor Round Table Head===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! CentOS 7 Machine&lt;br /&gt;
! Port&lt;br /&gt;
! Database&lt;br /&gt;
! Arthor Install Location&lt;br /&gt;
! Round Table Data Directory&lt;br /&gt;
! Active&lt;br /&gt;
|-&lt;br /&gt;
| samekh&lt;br /&gt;
| 10.20.0.41:8080&lt;br /&gt;
| Enamine_REAL_Q2-2020-All-13B&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| nun&lt;br /&gt;
| 10.20.0.40:8080&lt;br /&gt;
| Enamine_REAL_Q2-2020-All-41B&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Arthor Round Table Nodes===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! CentOS 7 Machine&lt;br /&gt;
! Port&lt;br /&gt;
! Database&lt;br /&gt;
! Total Files Size&lt;br /&gt;
! Arthor Install Location&lt;br /&gt;
! Round Table Data Directory&lt;br /&gt;
!Active&lt;br /&gt;
|-&lt;br /&gt;
| samekh&lt;br /&gt;
| 10.20.0.41:8008&lt;br /&gt;
| Enamine_REAL_Q2-2020-All-13B (26 slices)&lt;br /&gt;
| 4.5TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| nun&lt;br /&gt;
| 10.20.0.40:8008&lt;br /&gt;
| Enamine_REAL_Space_June_2020_S41B (aa-ae, 5 slices), Enamine_REAL_Space_June_2020_M41B (af-am, 8 slices), zinc22_2d (H04~H25, 22 slices)&lt;br /&gt;
| 5.6TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| n-1-17&lt;br /&gt;
| 10.20.1.17:8008&lt;br /&gt;
| Enamine_REAL_Space_June_2020_M41B (aa-ae, 5 slices), zinc22_2d (H25~H29, 4 slices)&lt;br /&gt;
| 3.7TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| n-5-32&lt;br /&gt;
| 10.20.5.32:8008&lt;br /&gt;
| Enamine_REAL_Space_June_2020_M41B (an~az, 13 slices), zinc22_2d (H30, 1 slice)&lt;br /&gt;
| 5.6TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| n-5-33&lt;br /&gt;
| 10.20.5.33:8008&lt;br /&gt;
| Enamine_REAL_Space_June_2020_M41B (ba-bl, 12 slices)&lt;br /&gt;
| 5.3TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Arthor Local 8081 (Datasets all local to samekh/nun)===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! CentOS 7 Machine&lt;br /&gt;
! Port&lt;br /&gt;
! Database&lt;br /&gt;
! Total Files Size&lt;br /&gt;
! Arthor Install Location&lt;br /&gt;
! Round Table Data Directory&lt;br /&gt;
!Active&lt;br /&gt;
|-&lt;br /&gt;
| samekh&lt;br /&gt;
| 10.20.0.41:8081&lt;br /&gt;
| Enamine_REAL_Q2-2020-All-13B (26 slices)&lt;br /&gt;
| 4.5TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_local_8081/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| nun&lt;br /&gt;
| 10.20.0.40:8081&lt;br /&gt;
| Enamine_REAL_Space_June_2020_S41B (aa-ae, 5 slices), Enamine_REAL_Space_June_2020_M41B (aa-an, 14 slices)&lt;br /&gt;
| 4.3TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_local_8081/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Customizing Arthor Frontend to our needs ==&lt;br /&gt;
The frontend Arthor code is located at &#039;&#039;&#039;/nfs/exc/arthor_configs/*&#039;&#039;&#039;, where the &#039;&#039;&#039;*&#039;&#039;&#039; corresponds to the currently running version.&lt;br /&gt;
=== Add Arthor Download Options ===&lt;br /&gt;
==== For Arthor 3.4: ====&lt;br /&gt;
1. vim .extract/webapps/ROOT/WEB-INF/static/index.html&lt;br /&gt;
&lt;br /&gt;
2. search: &#039;&#039;&#039;arthor_tsv_link&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
3. in the div with class=&amp;quot;dropdown-content&amp;quot;, add these link options, changing the numbers accordingly:&lt;br /&gt;
&lt;br /&gt;
               &amp;lt;a id=&amp;quot;arthor_tsv_link&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt; TSV-500&amp;lt;/a&amp;gt;&lt;br /&gt;
               &amp;lt;a id=&amp;quot;arthor_tsv_link_5000&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt; TSV-5,000&amp;lt;/a&amp;gt;&lt;br /&gt;
               &amp;lt;a id=&amp;quot;arthor_tsv_link_50000&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt; TSV-50,000&amp;lt;/a&amp;gt;&lt;br /&gt;
               &amp;lt;a id=&amp;quot;arthor_tsv_link_100000&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt; TSV-100,000&amp;lt;/a&amp;gt;&lt;br /&gt;
               &amp;lt;a id=&amp;quot;arthor_tsv_link_max&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt; TSV-max&amp;lt;/a&amp;gt;&lt;br /&gt;
               &amp;lt;a id=&amp;quot;arthor_csv_link&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt; CSV-500&amp;lt;/a&amp;gt;&lt;br /&gt;
               &amp;lt;a id=&amp;quot;arthor_csv_link_5000&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt; CSV-5,000&amp;lt;/a&amp;gt;&lt;br /&gt;
               &amp;lt;a id=&amp;quot;arthor_csv_link_50000&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt; CSV-50,000&amp;lt;/a&amp;gt;&lt;br /&gt;
               &amp;lt;a id=&amp;quot;arthor_csv_link_100000&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt; CSV-100,000&amp;lt;/a&amp;gt;&lt;br /&gt;
               &amp;lt;a id=&amp;quot;arthor_csv_link_max&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt; CSV-max&amp;lt;/a&amp;gt;&lt;br /&gt;
               &amp;lt;a id=&amp;quot;arthor_sdf_link&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt; SDF-500&amp;lt;/a&amp;gt;&lt;br /&gt;
               &amp;lt;a id=&amp;quot;arthor_sdf_link_5000&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt; SDF-5,000&amp;lt;/a&amp;gt;&lt;br /&gt;
               &amp;lt;a id=&amp;quot;arthor_sdf_link_50000&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt; SDF-50,000&amp;lt;/a&amp;gt;&lt;br /&gt;
               &amp;lt;a id=&amp;quot;arthor_sdf_link_100000&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt; SDF-100,000&amp;lt;/a&amp;gt;&lt;br /&gt;
               &amp;lt;a id=&amp;quot;arthor_sdf_link_max&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt; SDF-max&amp;lt;/a&amp;gt;&lt;br /&gt;
&lt;br /&gt;
4. then vim .extract/webapps/ROOT/WEB-INF/static/js/index.js&lt;br /&gt;
&lt;br /&gt;
5. search: &#039;&#039;&#039;function $(t){&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
6. in the function $(t), add these lines:&lt;br /&gt;
&lt;br /&gt;
 if(document.getElementById(&amp;quot;arthor_tsv_link&amp;quot;)) {&lt;br /&gt;
        var e=i.a.param({query:s.b.query,type:s.b.type,draw:0,start:0,length:t,flags:s.b.flags}),n=s.b.url+&amp;quot;/dt/&amp;quot;+E(s.b.table)+&amp;quot;/search&amp;quot;;i()(&amp;quot;#arthor_sdf_link&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.sdf?&amp;quot;+e),i() (&amp;quot;#arthor_tsv_link&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.tsv?&amp;quot;+e),i()(&amp;quot;#arthor_csv_link&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.csv?&amp;quot;+e)&lt;br /&gt;
 }&lt;br /&gt;
 if (document.getElementById(&amp;quot;arthor_tsv_link_5000&amp;quot;)) {&lt;br /&gt;
        var e=i.a.param({query:s.b.query,type:s.b.type,draw:0,start:0,length:5000,flags:s.b.flags}),n=s.b.url+&amp;quot;/dt/&amp;quot;+E(s.b.table)+&amp;quot;/search&amp;quot;;i()(&amp;quot;#arthor_sdf_link_5000&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.sdf?&amp;quot;+e),i()(&amp;quot;#arthor_tsv_link_5000&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.tsv?&amp;quot;+e),i()(&amp;quot;#arthor_csv_link_5000&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.csv?&amp;quot;+e)&lt;br /&gt;
 }&lt;br /&gt;
 if (document.getElementById(&amp;quot;arthor_tsv_link_50000&amp;quot;)) {&lt;br /&gt;
        var e=i.a.param({query:s.b.query,type:s.b.type,draw:0,start:0,length:50000,flags:s.b.flags}),n=s.b.url+&amp;quot;/dt/&amp;quot;+E(s.b.table)+&amp;quot;/search&amp;quot;;i()(&amp;quot;#arthor_sdf_link_50000&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.sdf?&amp;quot;+e),i()(&amp;quot;#arthor_tsv_link_50000&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.tsv?&amp;quot;+e),i()(&amp;quot;#arthor_csv_link_50000&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.csv?&amp;quot;+e)&lt;br /&gt;
 }&lt;br /&gt;
 if (document.getElementById(&amp;quot;arthor_tsv_link_100000&amp;quot;)) {&lt;br /&gt;
        var e=i.a.param({query:s.b.query,type:s.b.type,draw:0,start:0,length:100000,flags:s.b.flags}),n=s.b.url+&amp;quot;/dt/&amp;quot;+E(s.b.table)+&amp;quot;/search&amp;quot;;i()(&amp;quot;#arthor_sdf_link_100000&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.sdf?&amp;quot;+e),i()(&amp;quot;#arthor_tsv_link_100000&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.tsv?&amp;quot;+e),i()(&amp;quot;#arthor_csv_link_100000&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.csv?&amp;quot;+e)&lt;br /&gt;
 }&lt;br /&gt;
 if (document.getElementById(&amp;quot;arthor_tsv_link_max&amp;quot;)) {&lt;br /&gt;
        var e=i.a.param({query:s.b.query,type:s.b.type,draw:0,start:0,length:1000000000,flags:s.b.flags}),n=s.b.url+&amp;quot;/dt/&amp;quot;+E(s.b.table)+&amp;quot;/search&amp;quot;;i()(&amp;quot;#arthor_sdf_link_max&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.sdf?&amp;quot;+e),i()(&amp;quot;#arthor_tsv_link_max&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.tsv?&amp;quot;+e),i()(&amp;quot;#arthor_csv_link_max&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.csv?&amp;quot;+e)&lt;br /&gt;
 }&lt;br /&gt;
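&lt;br /&gt;
The download endpoints these links point at can also be called directly. Here is a minimal Python sketch using the third-party requests library, based on the URL pattern visible in the snippet above; the host, table name, query, and search type are placeholders for your own instance:&lt;br /&gt;
&lt;br /&gt;
   import requests&lt;br /&gt;
   &lt;br /&gt;
   # Placeholders: your Arthor host, table name, query, and search type.&lt;br /&gt;
   base = &amp;quot;http://n-1-136:8080/dt/&amp;lt;table&amp;gt;/search&amp;quot;&lt;br /&gt;
   params = {&amp;quot;query&amp;quot;: &amp;quot;c1ccccc1&amp;quot;, &amp;quot;type&amp;quot;: &amp;quot;Substructure&amp;quot;,&lt;br /&gt;
             &amp;quot;draw&amp;quot;: 0, &amp;quot;start&amp;quot;: 0, &amp;quot;length&amp;quot;: 500}&lt;br /&gt;
   &lt;br /&gt;
   r = requests.get(base + &amp;quot;.tsv&amp;quot;, params=params)&lt;br /&gt;
   r.raise_for_status()&lt;br /&gt;
   with open(&amp;quot;hits.tsv&amp;quot;, &amp;quot;wb&amp;quot;) as fh:&lt;br /&gt;
       fh.write(r.content)&lt;br /&gt;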
&lt;br /&gt;
=== Take out Similarity Button ===&lt;br /&gt;
 vim .extract/webapps/ROOT/WEB-INF/static/index.html&lt;br /&gt;
 search: &#039;&#039;&#039;Similarity&#039;&#039;&#039;&lt;br /&gt;
 Comment out this line &#039;&#039;&#039;&amp;lt; li value=&amp;quot;Similarity&amp;quot; onclick=&amp;quot;setSearchType(this)&amp;quot; class=&amp;quot;first&amp;quot;&amp;gt; Similarity &amp;lt;/li &amp;gt;&#039;&#039;&#039; // spaces added at the beginning and end to prevent the wiki from converting it&lt;br /&gt;
 Then add &amp;quot;first&amp;quot; to Substructure&#039;s class&lt;br /&gt;
=== Hyperlink to zinc20 ===&lt;br /&gt;
 vim .extract/webapps/ROOT/WEB-INF/static/js/index.js&lt;br /&gt;
 search: &#039;&#039;&#039;table_name&#039;&#039;&#039;&lt;br /&gt;
 *find this line &amp;quot;&amp;lt; b&amp;gt;&amp;quot; + d + &amp;quot;&amp;lt; /b&amp;gt;&amp;quot;&lt;br /&gt;
 *replace with &#039;&#039;&#039;&amp;quot;&amp;lt; b&amp;gt;&amp;lt;a target=&#039;_blank&#039; href=&#039;https://zinc20.docking.org/substances/&amp;quot;+d+&amp;quot;&#039;&amp;gt;&amp;quot; + d + &amp;quot;&amp;lt;/a&amp;gt;&amp;lt;/b &amp;gt;&amp;quot;&#039;&#039;&#039; // spaces added at the beginning and end to prevent the wiki from converting it&lt;br /&gt;
&lt;br /&gt;
=== Make Input Box Work ===&lt;br /&gt;
 At the end of the Arthor config file add this:&lt;br /&gt;
    Resolver=https://sw.docking.org/util/smi2mol?smi=%s&lt;br /&gt;
 To copy the SMILES into the input box:&lt;br /&gt;
    vim .extract/webapps/ROOT/WEB-INF/static/js/index.js&lt;br /&gt;
    search this: “var e=t.src.smiles()”&lt;br /&gt;
    add this after the semicolon:&lt;br /&gt;
        document.getElementById(&amp;quot;ar_text_input&amp;quot;).value = e;&lt;/div&gt;</summary>
		<author><name>Jgutie11</name></author>
	</entry>
	<entry>
		<id>http://wiki.docking.org/index.php?title=How_to_use_Globus&amp;diff=13472</id>
		<title>How to use Globus</title>
		<link rel="alternate" type="text/html" href="http://wiki.docking.org/index.php?title=How_to_use_Globus&amp;diff=13472"/>
		<updated>2021-04-06T00:34:16Z</updated>

		<summary type="html">&lt;p&gt;Jgutie11: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Introduction ==&lt;br /&gt;
[https://www.globus.org/ Globus] is a non-profit service for moving, syncing, and sharing large amounts of data asynchronously in the background. Transfers are done between so-called endpoints; in order to transfer files from one location to another using the Globus service, both ends must have an endpoint.&lt;br /&gt;
&lt;br /&gt;
==== Zinc Globus Endpoint is: ====&lt;br /&gt;
;ucsfbks#zinc&lt;br /&gt;
default path= /mnt/nfs/ex3/published/&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Step 1 ==&lt;br /&gt;
====Create an account to log in to Globus. There are a couple ways:====&lt;br /&gt;
*[https://app.globus.org/ Use an Organization&#039;s login]&lt;br /&gt;
**If you are associated with an organization in the drop-down list, you can use your organization&#039;s login to create an account.&lt;br /&gt;
**You can also sign up using a Google account or an ORCID iD.&lt;br /&gt;
*[https://www.globusid.org/create Create a Globus ID]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Step 2 ==&lt;br /&gt;
====To install and create a Globus Personal Endpoint:====&lt;br /&gt;
*Log in.&lt;br /&gt;
*On the left side of the page, click the &amp;quot;ENDPOINTS&amp;quot; button on the navigation bar.&lt;br /&gt;
*Then, on the top right, click &amp;quot;Create a Personal Endpoint&amp;quot;.&lt;br /&gt;
*Download based on your Operating System.&lt;br /&gt;
*Execute the file and follow the on-screen install instructions.&lt;br /&gt;
*Double-check that your collection is available by going to &amp;quot;ENDPOINTS&amp;quot; and then to &amp;quot;Administered By You&amp;quot;, at the top middle of the page.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;***Instructions on how to install are provided [https://www.globus.org/globus-connect-personal here] ***&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Step 3 ==&lt;br /&gt;
Once you have your endpoint, go back to the file manager tab on the navigation bar to begin file transfer.&lt;br /&gt;
&lt;br /&gt;
[[File:Globus1.png|1000px]]&lt;br /&gt;
&lt;br /&gt;
*Click on the collection input box on the left panel and then click the &amp;quot;Your Collections&amp;quot; tab.&lt;br /&gt;
*Choose the collection that you made. &lt;br /&gt;
*Now you should see something like this:&lt;br /&gt;
**[[File:Collection.png|800px]]&lt;br /&gt;
*Next, on the right panel, click on the collection input box and type in ucsfbks#zinc&lt;br /&gt;
*Click on the ucsfbks#zinc collection.&lt;br /&gt;
*Enter authentication credentials. Contact jjiteam@googlegroups.com for login credentials. &lt;br /&gt;
*Now both panels are ready, and you can transfer files between them.&lt;br /&gt;
*Choose a file or folder and click the blue start button at the bottom.&lt;br /&gt;
**[[File:Ready.png|1000px]]&lt;/div&gt;</summary>
		<author><name>Jgutie11</name></author>
	</entry>
	<entry>
		<id>http://wiki.docking.org/index.php?title=Slurm&amp;diff=13470</id>
		<title>Slurm</title>
		<link rel="alternate" type="text/html" href="http://wiki.docking.org/index.php?title=Slurm&amp;diff=13470"/>
		<updated>2021-04-05T22:17:58Z</updated>

		<summary type="html">&lt;p&gt;Jgutie11: /* Migrating to gimel5 */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;Slurm user-guide&#039;&#039;&#039;&lt;br /&gt;
=== Submit Jobs with Slurm ===&lt;br /&gt;
&lt;br /&gt;
==== SBATCH-MR (beta) ====&lt;br /&gt;
This is a Slurm version of qsub-mr for submitting jobs on the Slurm queueing system. Note: it has not been extensively tested yet, so please contact me if the script is not working out. We are hoping to fully migrate to Slurm from the outdated SGE system. Any error reports would be helpful - Khanh&lt;br /&gt;
&lt;br /&gt;
New slurm scripts are located in /nfs/soft/tools/utils/sbatch-slice&lt;br /&gt;
 Simply replace /nfs/soft/tools/utils/qsub-slice/qsub-mr with /nfs/soft/tools/utils/sbatch-slice/sbatch-mr in your script&lt;br /&gt;
&lt;br /&gt;
To check the status of your job:&lt;br /&gt;
 By username&lt;br /&gt;
  $ squeue -u &amp;lt;username&amp;gt;&lt;br /&gt;
 By jobid&lt;br /&gt;
  $ squeue -j &amp;lt;job_id&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Submit load2d Jobs ====&lt;br /&gt;
&lt;br /&gt;
 $ cd &amp;lt;catalog_shortname&amp;gt;&lt;br /&gt;
 $ source /nfs/exa/work/khtang/ZINC21_load2d/loadenv_zinc21.sh&lt;br /&gt;
 (development) $ sh /nfs/exa/work/khtang/submit_scripts/sbatch_slice/batch_zinc21.slurm &amp;lt;catalog_shortname&amp;gt;.ism&lt;br /&gt;
&lt;br /&gt;
==== Submit DOCK Jobs ====&lt;br /&gt;
&lt;br /&gt;
* ANACONDA Installation (Python 2.7)&lt;br /&gt;
&lt;br /&gt;
Each user is welcome to download Anaconda and install it into their own folder&amp;lt;br&amp;gt;&lt;br /&gt;
https://www.anaconda.com/distribution/&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;wget https://repo.anaconda.com/archive/Anaconda2-2019.10-Linux-x86_64.sh&#039;&#039;&amp;lt;br&amp;gt;&lt;br /&gt;
NB: it is also available for Python 3, to which we plan to migrate in the near future&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Simple installation via &#039;&#039;/bin/sh Anaconda2-2019.10-Linux-x86_64.sh&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
After the installation is completed, you need to install a few packages:&lt;br /&gt;
 conda install -c free bsddb&lt;br /&gt;
 conda install -c rdkit rdkit&lt;br /&gt;
 conda install numpy&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Running DOCK-3.7 with Slurm&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Here is a “guinea pig project” that has been run with DOCK-3.7 locally.&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;GPR40 example:&#039;&#039;&#039; /mnt/nfs/home/dudenko/TEST_DOCKING_PROJECT&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;ChEMBL ligands:&#039;&#039;&#039; /mnt/nfs/home/dudenko/CHEMBL4422_active_ligands&amp;lt;br&amp;gt;&lt;br /&gt;
This test calculation should run smoothly; if it does not, then there is a problem.&amp;lt;br&amp;gt;&lt;br /&gt;
Ultimately, you may want to compare your results with the reference run:&lt;br /&gt;
* CHEMBL4422_active_ligands.mol2 - TOP500 scoring poses &lt;br /&gt;
* extract_all.sort.uniq.txt - a print-out of scoring details&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The Slurm queue manager is installed locally on gimel; use it to run this test (and all your future jobs) in parallel.&amp;lt;br&amp;gt;&lt;br /&gt;
Do not forget to set the DOCKBASE variable: &#039;&#039;export DOCKBASE=/nfs/soft/dock/versions/dock37/DOCK-3.7.3rc1/&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Useful docking commands as a reminder:&#039;&#039;&#039;&lt;br /&gt;
 $DOCKBASE/docking/setup/setup_db2_zinc15_file_number.py ./ CHEMBL4422_active_ligands_ CHEMBL4422_active_ligands.sdi 100 count&lt;br /&gt;
 $DOCKBASE/analysis/extract_all.py -s -20&lt;br /&gt;
 $DOCKBASE/analysis/getposes.py -l 500 -o CHEMBL4422_active_ligands.mol2&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Useful slurm commands (see https://slurm.schedmd.com/quickstart.html):&#039;&#039;&#039;&lt;br /&gt;
 to see what machine resources the cluster offers, run &#039;&#039;sinfo -lNe&#039;&#039;&lt;br /&gt;
 to submit a DOCK-3.7 job, run &#039;&#039;$DOCKBASE/docking/submit/submit_slurm_array.csh&#039;&#039;&lt;br /&gt;
 to see what is happening in the queue, run &#039;&#039;squeue&#039;&#039;&lt;br /&gt;
 to see detailed info for a specific job, run &#039;&#039;scontrol show jobid=_JOBID_&#039;&#039;&lt;br /&gt;
 to delete a job from the queue, run &#039;&#039;scancel _JOBID_&#039;&#039;&lt;br /&gt;
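&lt;br /&gt;
For scripted submissions, here is a minimal Python sketch (the job script path and array range are placeholders) that submits a job array with sbatch and then queries its status with squeue:&lt;br /&gt;
&lt;br /&gt;
   import subprocess&lt;br /&gt;
   &lt;br /&gt;
   # Placeholders: your own job script and array range.&lt;br /&gt;
   out = subprocess.check_output(&lt;br /&gt;
       [&amp;quot;sbatch&amp;quot;, &amp;quot;--array=1-100&amp;quot;, &amp;quot;/path/to/job_script.sh&amp;quot;])&lt;br /&gt;
   &lt;br /&gt;
   # sbatch prints e.g. &amp;quot;Submitted batch job 4187&amp;quot;; grab the job id.&lt;br /&gt;
   job_id = out.decode().split()[-1]&lt;br /&gt;
   print(&amp;quot;Submitted array job &amp;quot; + job_id)&lt;br /&gt;
   subprocess.call([&amp;quot;squeue&amp;quot;, &amp;quot;-j&amp;quot;, job_id])&lt;br /&gt;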
&lt;br /&gt;
If your Slurm setup is working correctly, type &#039;&#039;squeue&#039;&#039; and you should see something like this:&lt;br /&gt;
&lt;br /&gt;
             JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
   4187_[637-2091]     gimel array_jo  dudenko PD       0:00      1 (Resources)&lt;br /&gt;
          4187_629     gimel array_jo  dudenko  R       0:00      1 n-1-20&lt;br /&gt;
          4187_630     gimel array_jo  dudenko  R       0:00      1 n-5-34&lt;br /&gt;
          4187_631     gimel array_jo  dudenko  R       0:00      1 n-1-21&lt;br /&gt;
          4187_632     gimel array_jo  dudenko  R       0:00      1 n-5-34&lt;br /&gt;
          4187_633     gimel array_jo  dudenko  R       0:00      1 n-1-21&lt;br /&gt;
          4187_634     gimel array_jo  dudenko  R       0:00      1 n-5-34&lt;br /&gt;
          4187_635     gimel array_jo  dudenko  R       0:00      1 n-5-35&lt;br /&gt;
          4187_636     gimel array_jo  dudenko  R       0:00      1 n-5-34&lt;br /&gt;
          4187_622     gimel array_jo  dudenko  R       0:01      1 n-5-34&lt;br /&gt;
          4187_623     gimel array_jo  dudenko  R       0:01      1 n-5-34&lt;br /&gt;
          4187_624     gimel array_jo  dudenko  R       0:01      1 n-5-35&lt;br /&gt;
          4187_625     gimel array_jo  dudenko  R       0:01      1 n-1-17&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
As root on gimel, it is possible to modify a particular job, e.g. &#039;&#039;scontrol update jobid=635 TimeLimit=7-00:00:00&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Slurm Installation Guide ===&lt;br /&gt;
&#039;&#039;&#039;Detailed step-by-step installation instructions (for sysadmins only)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==== Setup Node master ====&lt;br /&gt;
&#039;&#039;&#039;TBA&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==== Setup Compute Nodes ====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Useful links:&#039;&#039;&#039;&lt;br /&gt;
 https://slurm.schedmd.com/quickstart_admin.html&lt;br /&gt;
 https://wiki.fysik.dtu.dk/niflheim/Slurm_installation&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;node n-1-17&#039;&#039;&#039; (for installation of the latest slurm version (20.02.4), see the &amp;quot;Migrating to gimel5&amp;quot; section below)&lt;br /&gt;
&lt;br /&gt;
* make sure the machine runs CentOS 7: &#039;&#039;cat /etc/redhat-release&#039;&#039;&lt;br /&gt;
* &#039;&#039;wget https://download.schedmd.com/slurm/slurm-17.02.11.tar.bz2&#039;&#039;&lt;br /&gt;
* &#039;&#039;yum install readline-devel perl-ExtUtils-MakeMaker.noarch munge-devel pam-devel openssl-devel&#039;&#039;&lt;br /&gt;
* &#039;&#039;export VER=17.02.11; rpmbuild -ta slurm-$VER.tar.bz2 --without mysql; mv /root/rpmbuild .&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
installing built packages from rpmbuild:&lt;br /&gt;
* &#039;&#039;yum install rpmbuild/RPMS/x86_64/slurm-plugins-17.02.11-1.el7.x86_64.rpm&#039;&#039;&lt;br /&gt;
* &#039;&#039;yum install rpmbuild/RPMS/x86_64/slurm-17.02.11-1.el7.x86_64.rpm&#039;&#039;&lt;br /&gt;
* &#039;&#039;yum install rpmbuild/RPMS/x86_64/slurm-munge-17.02.11-1.el7.x86_64.rpm&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;setting up munge&#039;&#039;&#039;:&lt;br /&gt;
Copy /etc/munge/munge.key over from gimel and put it locally in /etc/munge. The key must be identical across all nodes.&amp;lt;br&amp;gt;&lt;br /&gt;
Munge is a daemon responsible for secure data exchange between nodes.&amp;lt;br&amp;gt;&lt;br /&gt;
Set permissions accordingly: &#039;&#039;chown munge:munge /etc/munge/munge.key; chmod 400 /etc/munge/munge.key&#039;&#039;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;starting munge&#039;&#039;&#039;: &#039;&#039;systemctl enable munge; systemctl start munge&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;setting up slurm&#039;&#039;&#039;:&lt;br /&gt;
* create a user slurm: &#039;&#039;adduser slurm&#039;&#039;.&lt;br /&gt;
* the UID/GID of the slurm user should be identical across all nodes.&amp;lt;br&amp;gt;&lt;br /&gt;
  Otherwise, one needs to specify a mapping scheme for translating the UID/GIDs between nodes.&amp;lt;br&amp;gt;&lt;br /&gt;
  To edit the slurm UID/GID, do &#039;&#039;vipw&#039;&#039; and replace the &amp;quot;slurm line&amp;quot; with slurm:x:XXXXX:YYYYY::/nonexistent:/bin/false&amp;lt;br&amp;gt;&lt;br /&gt;
  XXXXX and YYYYY for the slurm user can be found on gimel in /etc/passwd&amp;lt;br&amp;gt;&lt;br /&gt;
  NB: don&#039;t forget to edit /etc/group as well.&amp;lt;br&amp;gt;&lt;br /&gt;
* copy /etc/slurm/slurm.conf from gimel and put locally to /etc/slurm.&lt;br /&gt;
* figure out what CPU/Memory resources you have at n-1-17 (see /proc/cpuinfo) and append the following line:&lt;br /&gt;
  NodeName=n-1-17 NodeAddr=10.20.1.17 CPUs=24 State=UNKNOWN&lt;br /&gt;
* append n-1-17 to the partition: PartitionName=gimel Nodes=gimel,n-5-34,n-5-35,n-1-17 Default=YES MaxTime=INFINITE State=UP&lt;br /&gt;
* create the following folders:&lt;br /&gt;
  &#039;&#039;mkdir -p /var/spool/slurm-llnl /var/run/slurm-llnl /var/log/slurm-llnl&#039;&#039;&lt;br /&gt;
  &#039;&#039;chown -R slurm:slurm /var/spool/slurm-llnl /var/run/slurm-llnl /var/log/slurm-llnl&#039;&#039;&lt;br /&gt;
* restarting slurm master node at gimel (Centos 6): &#039;&#039;/etc/init.d/slurm restart&#039;&#039;&lt;br /&gt;
* enabling and starting slurm computing nodes (Centos 7): &#039;&#039;systemctl enable slurmd; systemctl start slurmd&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
And last but not least, asking the firewall to allow communication between master node and computing node n-1-17:&lt;br /&gt;
* &#039;&#039;firewall-cmd --permanent --zone=public --add-port=6817/tcp&#039;&#039;   #slurmctld&lt;br /&gt;
* &#039;&#039;firewall-cmd --permanent --zone=public --add-port=6818/tcp&#039;&#039;   #slurmd&lt;br /&gt;
* &#039;&#039;firewall-cmd --reload&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To see the current state of the queue, do &#039;&#039;sinfo -lNe&#039;&#039; and you will see:&lt;br /&gt;
 Wed May 27 09:49:54 2020&lt;br /&gt;
 NODELIST   NODES PARTITION       STATE CPUS    S:C:T MEMORY TMP_DISK WEIGHT AVAIL_FE REASON              &lt;br /&gt;
 gimel          1    gimel*     drained   24    4:6:1      1        0      1   (null) none                &lt;br /&gt;
 n-1-17         1    gimel*        idle   24   24:1:1      1        0      1   (null) none                &lt;br /&gt;
 n-5-34         1    gimel*        idle   80   80:1:1      1        0      1   (null) none                &lt;br /&gt;
 n-5-35         1    gimel*        idle   80   80:1:1      1        0      1   (null) none&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To disable a specific node, do &#039;&#039;scontrol update NodeName=n-1-17 State=DRAIN Reason=DRAINED&#039;&#039;&amp;lt;br&amp;gt;&lt;br /&gt;
To return back to service, do &#039;&#039;scontrol update NodeName=n-1-17 State=RESUME&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
p.s. Some users/scripts may require csh/tcsh.&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;sudo yum install csh tcsh&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Node down after reboot&#039;&#039;&#039;&lt;br /&gt;
On gimel (master node)&lt;br /&gt;
 sudo scontrol update NodeName=&amp;lt;node_name&amp;gt; State=RESUME&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Useful links:&#039;&#039;&#039;&lt;br /&gt;
 https://support.ceci-hpc.be/doc/_contents/QuickStart/SubmittingJobs/SlurmTutorial.html&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Migrating to gimel5 ====&lt;br /&gt;
Reminder: if it is a GPU node, add the nvidia packages in Foreman&#039;s puppet classes.&lt;br /&gt;
* Make sure the machine runs CentOS 7: &#039;&#039;cat /etc/redhat-release&#039;&#039;&lt;br /&gt;
* &#039;&#039;wget https://download.schedmd.com/slurm/slurm-20.02.4.tar.bz2&#039;&#039;&lt;br /&gt;
* &#039;&#039;yum install rpm-build gcc openssl openssl-devel libssh2-devel pam-devel numactl numactl-devel hwloc hwloc-devel lua lua-devel readline-devel rrdtool-devel ncurses-devel gtk2-devel libssh2-devel libibmad libibumad perl-Switch perl-ExtUtils-MakeMaker mysql-devel&#039;&#039;&lt;br /&gt;
* &#039;&#039;export VER=20.02.4; rpmbuild -ta slurm-$VER.tar.bz2 --with mysql; mv /root/rpmbuild .&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Installation on a Compute Node&#039;&#039;&#039;&lt;br /&gt;
* &#039;&#039;yum install rpmbuild/RPMS/x86_64/slurm-20.02.4-1.el7.x86_64.rpm&#039;&#039;&lt;br /&gt;
* &#039;&#039;yum install rpmbuild/RPMS/x86_64/slurm-slurmd-20.02.4-1.el7.x86_64.rpm&#039;&#039;&lt;br /&gt;
* &#039;&#039;systemctl enable slurmd&#039;&#039;&lt;br /&gt;
* &#039;&#039;systemctl start slurmd&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Installation for a Backup Controller&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Currently (04/05/2021): gimel4&lt;br /&gt;
&lt;br /&gt;
* In gimel5&#039;s /etc/slurm/slurm.conf, find &amp;quot;BackupController=&amp;quot;&lt;br /&gt;
** Set the value to &#039;&#039;&#039; gimel4 &#039;&#039;&#039;&lt;br /&gt;
** Copy the conf file to gimel4&lt;br /&gt;
* &#039;&#039;yum install rpmbuild/RPMS/x86_64/slurm-20.02.4-1.el7.x86_64.rpm&#039;&#039;&lt;br /&gt;
* &#039;&#039;yum install rpmbuild/RPMS/x86_64/slurm-slurmd-20.02.4-1.el7.x86_64.rpm&#039;&#039;&lt;br /&gt;
* &#039;&#039;yum install rpmbuild/RPMS/x86_64/slurm-slurmctld-20.02.4-1.el7.x86_64.rpm&#039;&#039;&lt;br /&gt;
* &#039;&#039;systemctl enable slurmctld.service&#039;&#039;&lt;br /&gt;
* &#039;&#039;systemctl start slurmctld.service&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==== GPUs specification ====&lt;br /&gt;
&lt;br /&gt;
     - 32-core:&lt;br /&gt;
                + n-9-34 (GTX 1080 Ti)&lt;br /&gt;
                + n-9-36 (GTX 1080 Ti)&lt;br /&gt;
                + n-1-126 (GTX 980)&lt;br /&gt;
                + n-1-141 (GTX 980)&lt;br /&gt;
    - 40-core:&lt;br /&gt;
                + n-1-28 (RTX 2080 Super)&lt;br /&gt;
                + n-1-38 (RTX 2080 Super)&lt;br /&gt;
                + n-1-101 (RTX 2080 Super)&lt;br /&gt;
                + n-1-105 (RTX 2080 Super)&lt;br /&gt;
                + n-1-124 (RTX 2080 Super)&lt;br /&gt;
&lt;br /&gt;
Back to [[DOCK_3.7]]&lt;br /&gt;
&lt;br /&gt;
[[Category : Slurm]]&lt;/div&gt;</summary>
		<author><name>Jgutie11</name></author>
	</entry>
	<entry>
		<id>http://wiki.docking.org/index.php?title=Slurm&amp;diff=13469</id>
		<title>Slurm</title>
		<link rel="alternate" type="text/html" href="http://wiki.docking.org/index.php?title=Slurm&amp;diff=13469"/>
		<updated>2021-04-05T21:20:34Z</updated>

		<summary type="html">&lt;p&gt;Jgutie11: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;Slurm user-guide&#039;&#039;&#039;&lt;br /&gt;
=== Submit Jobs with Slurm ===&lt;br /&gt;
&lt;br /&gt;
==== SBATCH-MR (beta) ====&lt;br /&gt;
It is a slurm-version of qsub-mr for submitting job on Slurm queueing system. Note: this is have not been extensively tested yet. Please contact me if the script is not working out. We are hoping to fully migrate to Slurm from the out-dated SGE system. Any error report would be helpful - Khanh&lt;br /&gt;
&lt;br /&gt;
New slurm scripts are located in /nfs/soft/tools/utils/sbatch-slice&lt;br /&gt;
 Just simply replace /nfs/soft/tools/utils/qsub-slice/qsub-mr with /nfs/soft/tools/utils/sbatch-slice/sbatch-mr in your script&lt;br /&gt;
&lt;br /&gt;
To check the status of your job:&lt;br /&gt;
 By username&lt;br /&gt;
  $ squeue -u &amp;lt;username&amp;gt;&lt;br /&gt;
 By jobid&lt;br /&gt;
  $ squeue -j &amp;lt;job_id&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Submit load2d Jobs ====&lt;br /&gt;
&lt;br /&gt;
 $ cd &amp;lt;catalog_shortname&amp;gt;&lt;br /&gt;
 $ source /nfs/exa/work/khtang/ZINC21_load2d/loadenv_zinc21.sh&lt;br /&gt;
 (development) $ sh /nfs/exa/work/khtang/submit_scripts/sbatch_slice/batch_zinc21.slurm &amp;lt;catalog_shortname&amp;gt;.ism&lt;br /&gt;
&lt;br /&gt;
==== Submit DOCK Jobs ====&lt;br /&gt;
&lt;br /&gt;
* ANACONDA Installation (Python 2.7)&lt;br /&gt;
&lt;br /&gt;
Each user is welcome to download anaconda and install into his/her own folder&amp;lt;br&amp;gt;&lt;br /&gt;
https://www.anaconda.com/distribution/&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;wget https://repo.anaconda.com/archive/Anaconda2-2019.10-Linux-x86_64.sh&#039;&#039;&amp;lt;br&amp;gt;&lt;br /&gt;
NB: It is also available for Python3, which is our nearest future&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
simple installation via &#039;&#039;/bin/sh Anaconda2-2019.10-Linux-x86_64.sh&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
After the installation is completed, you need to install a few packages:&lt;br /&gt;
 conda install -c free bsddb&lt;br /&gt;
 conda install -c rdkit rdkit&lt;br /&gt;
 conda install numpy&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Running DOCK-3.7 with Slurm&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Here is a “guinea pig project”, which has been done with DOCK-3.7 locally.&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;GPR40 example:&#039;&#039;&#039; /mnt/nfs/home/dudenko/TEST_DOCKING_PROJECT&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;ChEMBL ligands:&#039;&#039;&#039; /mnt/nfs/home/dudenko/CHEMBL4422_active_ligands&amp;lt;br&amp;gt;&lt;br /&gt;
This test calculation should run smoothly, if not, then there is a problem.&amp;lt;br&amp;gt;&lt;br /&gt;
Ultimately, you may need to compare your results with the reference run:&lt;br /&gt;
* CHEMBL4422_active_ligands.mol2 - TOP500 scoring poses &lt;br /&gt;
* extract_all.sort.uniq.txt - a print-out of scoring details&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Slurm queue manager is installed locally at gimel, use it to run this test (and all your future jobs) in parallel.&amp;lt;br&amp;gt;&lt;br /&gt;
Do not forget to set DOCKBASE variable: &#039;&#039;export DOCKBASE=/nfs/soft/dock/versions/dock37/DOCK-3.7.3rc1/&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Useful DOCKING commands to remind:&#039;&#039;&#039;&lt;br /&gt;
 $DOCKBASE/docking/setup/setup_db2_zinc15_file_number.py ./ CHEMBL4422_active_ligands_ CHEMBL4422_active_ligands.sdi 100 count&lt;br /&gt;
 $DOCKBASE/analysis/extract_all.py -s -20&lt;br /&gt;
 $DOCKBASE/analysis/getposes.py -l 500 -o CHEMBL4422_active_ligands.mol2&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Useful slurm commands (see https://slurm.schedmd.com/quickstart.html):&#039;&#039;&#039;&lt;br /&gt;
 to see what machine resources are offered by the cluster, do &#039;&#039;sinfo -lNe&#039;&#039;&lt;br /&gt;
 to submit a DOCK-3.7 job, run &#039;&#039;$DOCKBASE/docking/submit/submit_slurm_array.csh&#039;&#039;&lt;br /&gt;
 to see what is happening in the queue, run &#039;&#039;squeue&#039;&#039;&lt;br /&gt;
 to see a detailed info for a specific job: &#039;&#039;scontrol show jobid=_JOBID_&#039;&#039;&lt;br /&gt;
 to delete a job from queue, run &#039;&#039;scancel _JOBID_&#039;&#039;&lt;br /&gt;
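&lt;br /&gt;
For orientation, a minimal Slurm array batch script looks like the sketch below (this is generic, not the actual submit_slurm_array.csh; the partition name and array range are assumptions). Submit it with &#039;&#039;sbatch myscript.sh&#039;&#039;.&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 #SBATCH --job-name=array_job&lt;br /&gt;
 #SBATCH --partition=gimel&lt;br /&gt;
 #SBATCH --array=1-100&lt;br /&gt;
 # each array task processes one chunk of the input, indexed by SLURM_ARRAY_TASK_ID&lt;br /&gt;
 echo &amp;quot;task $SLURM_ARRAY_TASK_ID running on $(hostname)&amp;quot;&lt;br /&gt;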
&lt;br /&gt;
If your Slurm jobs are running correctly, type &#039;&#039;squeue&#039;&#039; and you should see something like this:&lt;br /&gt;
&lt;br /&gt;
             JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
   4187_[637-2091]     gimel array_jo  dudenko PD       0:00      1 (Resources)&lt;br /&gt;
          4187_629     gimel array_jo  dudenko  R       0:00      1 n-1-20&lt;br /&gt;
          4187_630     gimel array_jo  dudenko  R       0:00      1 n-5-34&lt;br /&gt;
          4187_631     gimel array_jo  dudenko  R       0:00      1 n-1-21&lt;br /&gt;
          4187_632     gimel array_jo  dudenko  R       0:00      1 n-5-34&lt;br /&gt;
          4187_633     gimel array_jo  dudenko  R       0:00      1 n-1-21&lt;br /&gt;
          4187_634     gimel array_jo  dudenko  R       0:00      1 n-5-34&lt;br /&gt;
          4187_635     gimel array_jo  dudenko  R       0:00      1 n-5-35&lt;br /&gt;
          4187_636     gimel array_jo  dudenko  R       0:00      1 n-5-34&lt;br /&gt;
          4187_622     gimel array_jo  dudenko  R       0:01      1 n-5-34&lt;br /&gt;
          4187_623     gimel array_jo  dudenko  R       0:01      1 n-5-34&lt;br /&gt;
          4187_624     gimel array_jo  dudenko  R       0:01      1 n-5-35&lt;br /&gt;
          4187_625     gimel array_jo  dudenko  R       0:01      1 n-1-17&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
As root on gimel, it is possible to modify a particular job, e.g., &#039;&#039;scontrol update jobid=635 TimeLimit=7-00:00:00&#039;&#039;&lt;br /&gt;
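&lt;br /&gt;
To verify that the change took effect (TimeLimit appears in the job record):&lt;br /&gt;
 scontrol show jobid=635 | grep TimeLimit&lt;br /&gt;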
&lt;br /&gt;
=== Slurm Installation Guide ===&lt;br /&gt;
&#039;&#039;&#039;Detailed step-by-step installation instruction (for sysadmins only)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==== Setup Node master ====&lt;br /&gt;
&#039;&#039;&#039;TBA&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==== Setup Compute Nodes ====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Useful links:&#039;&#039;&#039;&lt;br /&gt;
 https://slurm.schedmd.com/quickstart_admin.html&lt;br /&gt;
 https://wiki.fysik.dtu.dk/niflheim/Slurm_installation&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;node n-1-17&#039;&#039;&#039; (for installation of the latest Slurm version (20.02.4), see the &amp;quot;Migrating to gimel5&amp;quot; section below)&lt;br /&gt;
&lt;br /&gt;
* make sure the node is running CentOS 7: &#039;&#039;cat /etc/redhat-release&#039;&#039;&lt;br /&gt;
* &#039;&#039;wget https://download.schedmd.com/slurm/slurm-17.02.11.tar.bz2&#039;&#039;&lt;br /&gt;
* &#039;&#039;yum install readline-devel perl-ExtUtils-MakeMaker.noarch munge-devel pam-devel openssl-devel&#039;&#039;&lt;br /&gt;
* &#039;&#039;export VER=17.02.11; rpmbuild -ta slurm-$VER.tar.bz2 --without mysql; mv /root/rpmbuild .&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Installing the built packages from rpmbuild:&lt;br /&gt;
* &#039;&#039;yum install rpmbuild/RPMS/x86_64/slurm-plugins-17.02.11-1.el7.x86_64.rpm&#039;&#039;&lt;br /&gt;
* &#039;&#039;yum install rpmbuild/RPMS/x86_64/slurm-17.02.11-1.el7.x86_64.rpm&#039;&#039;&lt;br /&gt;
* &#039;&#039;yum install rpmbuild/RPMS/x86_64/slurm-munge-17.02.11-1.el7.x86_64.rpm&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;setting up munge&#039;&#039;&#039;:&lt;br /&gt;
Copy /etc/munge/munge.key over from gimel and place it locally in /etc/munge. The key must be identical across all nodes.&amp;lt;br&amp;gt;&lt;br /&gt;
Munge is a daemon responsible for secure data exchange between nodes.&amp;lt;br&amp;gt;&lt;br /&gt;
Set permissions accordingly: &#039;&#039;chown munge:munge /etc/munge/munge.key; chmod 400 /etc/munge/munge.key&#039;&#039;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;starting munge&#039;&#039;&#039;: &#039;&#039;systemctl enable munge; systemctl start munge&#039;&#039;&lt;br /&gt;
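&lt;br /&gt;
To verify that munge works locally and across nodes (the standard munge self-test; assumes ssh access to the node):&lt;br /&gt;
 munge -n | unmunge                # local round-trip, should report success&lt;br /&gt;
 munge -n | ssh n-1-17 unmunge     # cross-node check with the shared key&lt;br /&gt;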
&lt;br /&gt;
&#039;&#039;&#039;setting up slurm&#039;&#039;&#039;:&lt;br /&gt;
* create a user slurm: &#039;&#039;adduser slurm&#039;&#039;.&lt;br /&gt;
* all UIDs/GIDs of the slurm user should be identical across all nodes.&amp;lt;br&amp;gt;&lt;br /&gt;
  Otherwise, one needs to specify a mapping scheme for translating the UIDs/GIDs between nodes.&amp;lt;br&amp;gt;&lt;br /&gt;
  To edit the slurm UID/GID, do &#039;&#039;vipw&#039;&#039; and replace the &amp;quot;slurm line&amp;quot; with slurm:x:XXXXX:YYYYY::/nonexistent:/bin/false&amp;lt;br&amp;gt;&lt;br /&gt;
  The XXXXX and YYYYY for the slurm user can be found on gimel in /etc/passwd&amp;lt;br&amp;gt;&lt;br /&gt;
  NB: don&#039;t forget to edit /etc/group as well.&amp;lt;br&amp;gt;&lt;br /&gt;
* copy /etc/slurm/slurm.conf from gimel and place it locally in /etc/slurm.&lt;br /&gt;
* figure out what CPU/memory resources n-1-17 has (see /proc/cpuinfo) and append the following line:&lt;br /&gt;
  NodeName=n-1-17 NodeAddr=10.20.1.17 CPUs=24 State=UNKNOWN&lt;br /&gt;
* append n-1-17 to the partition: PartitionName=gimel Nodes=gimel,n-5-34,n-5-35,n-1-17 Default=YES MaxTime=INFINITE State=UP&lt;br /&gt;
* create the following folders:&lt;br /&gt;
  &#039;&#039;mkdir -p /var/spool/slurm-llnl /var/run/slurm-llnl /var/log/slurm-llnl&#039;&#039;&lt;br /&gt;
  &#039;&#039;chown -R slurm:slurm /var/spool/slurm-llnl /var/run/slurm-llnl /var/log/slurm-llnl&#039;&#039;&lt;br /&gt;
* restart the slurm master node on gimel (CentOS 6): &#039;&#039;/etc/init.d/slurm restart&#039;&#039;&lt;br /&gt;
* enable and start slurmd on the compute nodes (CentOS 7): &#039;&#039;systemctl enable slurmd; systemctl start slurmd&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
And last but not least, ask the firewall to allow communication between the master node and compute node n-1-17:&lt;br /&gt;
* &#039;&#039;firewall-cmd --permanent --zone=public --add-port=6817/tcp&#039;&#039;   #slurmctld&lt;br /&gt;
* &#039;&#039;firewall-cmd --permanent --zone=public --add-port=6818/tcp&#039;&#039;   #slurmd&lt;br /&gt;
* &#039;&#039;firewall-cmd --reload&#039;&#039;&lt;br /&gt;
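&lt;br /&gt;
To confirm the node can now reach the controller (run these on the compute node):&lt;br /&gt;
 scontrol ping          # reports whether slurmctld on the master is reachable&lt;br /&gt;
 sinfo -N -n n-1-17     # the node should eventually report as idle&lt;br /&gt;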
&lt;br /&gt;
&lt;br /&gt;
To see the current state of the queue, do &#039;&#039;sinfo -lNe&#039;&#039; and you will see:&lt;br /&gt;
 Wed May 27 09:49:54 2020&lt;br /&gt;
 NODELIST   NODES PARTITION       STATE CPUS    S:C:T MEMORY TMP_DISK WEIGHT AVAIL_FE REASON              &lt;br /&gt;
 gimel          1    gimel*     drained   24    4:6:1      1        0      1   (null) none                &lt;br /&gt;
 n-1-17         1    gimel*        idle   24   24:1:1      1        0      1   (null) none                &lt;br /&gt;
 n-5-34         1    gimel*        idle   80   80:1:1      1        0      1   (null) none                &lt;br /&gt;
 n-5-35         1    gimel*        idle   80   80:1:1      1        0      1   (null) none&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To disable a specific node, do &#039;&#039;scontrol update NodeName=n-1-17 State=DRAIN Reason=DRAINED&#039;&#039;&amp;lt;br&amp;gt;&lt;br /&gt;
To return it to service, do &#039;&#039;scontrol update NodeName=n-1-17 State=RESUME&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
p.s. Some users/scripts may require csh/tcsh.&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;sudo yum install csh tcsh&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Node down after reboot&#039;&#039;&#039;&lt;br /&gt;
On gimel (master node)&lt;br /&gt;
 sudo scontrol update NodeName=&amp;lt;node_name&amp;gt; State=RESUME&lt;br /&gt;
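&lt;br /&gt;
To see why a node is marked down or drained before resuming it:&lt;br /&gt;
 sinfo -R&lt;br /&gt;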
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Useful links:&#039;&#039;&#039;&lt;br /&gt;
 https://support.ceci-hpc.be/doc/_contents/QuickStart/SubmittingJobs/SlurmTutorial.html&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Migrating to gimel5 ====&lt;br /&gt;
Reminder: if it is a GPU node, add the nvidia packages in foreman&#039;s puppet classes.&lt;br /&gt;
* Make sure the node is running CentOS 7: &#039;&#039;cat /etc/redhat-release&#039;&#039;&lt;br /&gt;
* &#039;&#039;wget https://download.schedmd.com/slurm/slurm-20.02.4.tar.bz2&#039;&#039;&lt;br /&gt;
* &#039;&#039;yum install rpm-build gcc openssl openssl-devel libssh2-devel pam-devel numactl numactl-devel hwloc hwloc-devel lua lua-devel readline-devel rrdtool-devel ncurses-devel gtk2-devel libssh2-devel libibmad libibumad perl-Switch perl-ExtUtils-MakeMaker mysql-devel&#039;&#039;&lt;br /&gt;
* &#039;&#039;export VER=20.02.4; rpmbuild -ta slurm-$VER.tar.bz2 --with mysql; mv /root/rpmbuild .&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Installation on a Compute Node&#039;&#039;&#039;&lt;br /&gt;
* &#039;&#039;yum install rpmbuild/RPMS/x86_64/slurm-20.02.4-1.el7.x86_64.rpm&#039;&#039;&lt;br /&gt;
* &#039;&#039;yum install rpmbuild/RPMS/x86_64/slurm-slurmd-20.02.4-1.el7.x86_64.rpm&#039;&#039;&lt;br /&gt;
* &#039;&#039;systemctl enable slurmd&#039;&#039;&lt;br /&gt;
* &#039;&#039;systemctl start slurmd&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Installation for a Backup Controller&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Currently (04/05/2021): gimel4&lt;br /&gt;
* &#039;&#039;yum install rpmbuild/RPMS/x86_64/slurm-20.02.4-1.el7.x86_64.rpm&#039;&#039;&lt;br /&gt;
* &#039;&#039;yum install rpmbuild/RPMS/x86_64/slurm-slurmd-20.02.4-1.el7.x86_64.rpm&#039;&#039;&lt;br /&gt;
* &#039;&#039;yum install rpmbuild/RPMS/x86_64/slurm-slurmctld-20.02.4-1.el7.x86_64.rpm&#039;&#039;&lt;br /&gt;
* &#039;&#039;systemctl enable slurmctld.service&#039;&#039;&lt;br /&gt;
* &#039;&#039;systemctl start slurmctld.service&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==== GPUs specification ====&lt;br /&gt;
&lt;br /&gt;
 - 32-core:&lt;br /&gt;
    + n-9-34 (GTX 1080 Ti)&lt;br /&gt;
    + n-9-36 (GTX 1080 Ti)&lt;br /&gt;
    + n-1-126 (GTX 980)&lt;br /&gt;
    + n-1-141 (GTX 980)&lt;br /&gt;
 - 40-core:&lt;br /&gt;
    + n-1-28 (RTX 2080 Super)&lt;br /&gt;
    + n-1-38 (RTX 2080 Super)&lt;br /&gt;
    + n-1-101 (RTX 2080 Super)&lt;br /&gt;
    + n-1-105 (RTX 2080 Super)&lt;br /&gt;
    + n-1-124 (RTX 2080 Super)&lt;br /&gt;
&lt;br /&gt;
Back to [[DOCK_3.7]]&lt;br /&gt;
&lt;br /&gt;
[[Category : Slurm]]&lt;/div&gt;</summary>
		<author><name>Jgutie11</name></author>
	</entry>
	<entry>
		<id>http://wiki.docking.org/index.php?title=Using_LVM_To_Mount_Drives&amp;diff=13461</id>
		<title>Using LVM To Mount Drives</title>
		<link rel="alternate" type="text/html" href="http://wiki.docking.org/index.php?title=Using_LVM_To_Mount_Drives&amp;diff=13461"/>
		<updated>2021-04-01T20:35:36Z</updated>

		<summary type="html">&lt;p&gt;Jgutie11: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Format Disk ==&lt;br /&gt;
Do these commands as root:&lt;br /&gt;
* fdisk -l&lt;br /&gt;
** Find disks/RAID disks to format&lt;br /&gt;
*parted&lt;br /&gt;
*print&lt;br /&gt;
*mklabel gpt&lt;br /&gt;
*mkpart logical 0GB 9995GB (based on the disk size reported by print above)&lt;br /&gt;
*print (to confirm)&lt;br /&gt;
*quit&lt;br /&gt;
&lt;br /&gt;
== Mount storage disks using LVM ==&lt;br /&gt;
Reference guide: &#039;&#039;&#039;https://www.thegeekdiary.com/redhat-centos-a-beginners-guide-to-lvm-logical-volume-manager/&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
1. Scan devices to be used as physical volumes (PV)&lt;br /&gt;
 lvmdiskscan&lt;br /&gt;
&lt;br /&gt;
2. Initialize the block devices&lt;br /&gt;
 pvcreate &amp;lt;drive1&amp;gt; &amp;lt;drive2&amp;gt; &amp;lt;...&amp;gt; ...&lt;br /&gt;
 example: pvcreate /dev/sdb /dev/sdc /dev/sdd&lt;br /&gt;
&lt;br /&gt;
3. To double check PV&lt;br /&gt;
 pvdisplay&lt;br /&gt;
 pvscan&lt;br /&gt;
 pvs&lt;br /&gt;
&lt;br /&gt;
4. After PV, create a Volume Group (VG)&lt;br /&gt;
 vgcreate &amp;lt;name of volume&amp;gt; &amp;lt;drive1&amp;gt; &amp;lt;drive2&amp;gt; &amp;lt;...&amp;gt; ...&lt;br /&gt;
 example: vgcreate soft2 /dev/sdb /dev/sdc /dev/sdd&lt;br /&gt;
&lt;br /&gt;
5. To double check VG&lt;br /&gt;
 vgs &amp;lt;VG name&amp;gt;&lt;br /&gt;
 vgdisplay &amp;lt;VG name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
6. After VG, create a Logical Volume (LV)&lt;br /&gt;
 lvcreate &amp;lt;options&amp;gt; &amp;lt;name of LV&amp;gt; &amp;lt;VG Name&amp;gt;&lt;br /&gt;
 example: lvcreate -l 100%FREE -n soft_lv soft2 &lt;br /&gt;
 &amp;quot;-l&amp;quot; is for storage space in percent, 100%FREE means use all space in VG&lt;br /&gt;
 &amp;quot;-n&amp;quot; is for name of LV&lt;br /&gt;
&lt;br /&gt;
7. Double Check LV&lt;br /&gt;
 lvs /dev/&amp;lt;VG NAME&amp;gt;/&amp;lt;LV NAME&amp;gt;&lt;br /&gt;
 lvdisplay /dev/&amp;lt;VG NAME&amp;gt;/&amp;lt;LV NAME&amp;gt;&lt;br /&gt;
 lvscan&lt;br /&gt;
&lt;br /&gt;
8. Create a File System&lt;br /&gt;
 mkfs.ext4 &amp;lt;options&amp;gt; /dev/&amp;lt;VG NAME&amp;gt;/&amp;lt;LV NAME&amp;gt;&lt;br /&gt;
 example: mkfs.ext4 /dev/soft2/soft_lv&lt;br /&gt;
&lt;br /&gt;
9. Make the directory:&lt;br /&gt;
 mkdir &amp;lt;somewhere&amp;gt;&lt;br /&gt;
 example: mkdir /export/soft2&lt;br /&gt;
&lt;br /&gt;
10. Mount permanently:&lt;br /&gt;
 vim /etc/fstab&lt;br /&gt;
 /dev/&amp;lt;VG NAME&amp;gt;/&amp;lt;LV NAME&amp;gt;      &amp;lt;somewhere&amp;gt;       &amp;lt;type of file system&amp;gt;      &amp;lt;mount options&amp;gt;      &amp;lt;dump&amp;gt;         &amp;lt;fsck&amp;gt;&lt;br /&gt;
 example:&lt;br /&gt;
 /dev/soft2/soft_lv	/export/soft2	ext4	defaults	1	2&lt;br /&gt;
&lt;br /&gt;
11. Lastly, mount all&lt;br /&gt;
 mount -a&lt;br /&gt;
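&lt;br /&gt;
To confirm the new volume is mounted with the expected size (using the example mount point above):&lt;br /&gt;
 df -h /export/soft2&lt;br /&gt;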
&lt;br /&gt;
== RAID1 with LVM on fresh drives ==&lt;br /&gt;
1. Check which disk will act as the mirror&lt;br /&gt;
 fdisk -l&lt;br /&gt;
&lt;br /&gt;
2. Create physical volumes of the drives&lt;br /&gt;
 pvcreate &amp;lt;disk&amp;gt; &amp;lt;disk&amp;gt; &amp;lt;disk&amp;gt;&lt;br /&gt;
 example: pvcreate /dev/sdz /dev/sdaa&lt;br /&gt;
&lt;br /&gt;
3. Create a volume group for these drives&lt;br /&gt;
 vgcreate &amp;lt;name&amp;gt; &amp;lt;disk1&amp;gt; &amp;lt;disk2&amp;gt;&lt;br /&gt;
 example: vgcreate local2 /dev/sdz /dev/sdaa&lt;br /&gt;
&lt;br /&gt;
4. Create a logical volume with the &amp;quot;-m1&amp;quot; flag to indicate mirror/RAID1 &lt;br /&gt;
 lvcreate -l &amp;lt;how much of disk to use in percent&amp;gt; -m1 -n &amp;lt;name&amp;gt; &amp;lt;volume_group&amp;gt;&lt;br /&gt;
 example: lvcreate -l 100%FREE -m1 -n local2_lv local2&lt;br /&gt;
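&lt;br /&gt;
Before mounting, you can watch the initial mirror sync (the Cpy%Sync column):&lt;br /&gt;
 lvs -a -o +devices&lt;br /&gt;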
&lt;br /&gt;
5. Mount permanently:&lt;br /&gt;
 vim /etc/fstab&lt;br /&gt;
 /dev/&amp;lt;VG NAME&amp;gt;/&amp;lt;LV NAME&amp;gt;      &amp;lt;somewhere&amp;gt;       &amp;lt;type of file system&amp;gt;      &amp;lt;mount options&amp;gt;      &amp;lt;dump&amp;gt;         &amp;lt;fsck&amp;gt;&lt;br /&gt;
 example:&lt;br /&gt;
 /dev/local2/local2_lv   /local2         ext4    defaults        1       2&lt;br /&gt;
&lt;br /&gt;
6. Lastly, mount all&lt;br /&gt;
 mount -a&lt;br /&gt;
&lt;br /&gt;
== RAID1 with LVM on an existing used logical volume ==&lt;br /&gt;
&#039;&#039;&#039;Before starting, make sure that the disk that will mirror data is the same size or bigger than the drive being mirrored.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
1. Check if the volume has free space for LVM logs. Mirroring will fail if no free space is available.&lt;br /&gt;
 pvscan&lt;br /&gt;
 example output:&lt;br /&gt;
  PV /dev/sda2   VG centos          lvm2 [&amp;lt;222.57 GiB / 0    free]&lt;br /&gt;
  PV /dev/sdb1   VG centos          lvm2 [&amp;lt;223.57 GiB / 0    free]&lt;br /&gt;
  PV /dev/sdd1   VG local2          lvm2 [&amp;lt;7.28 TiB / &amp;lt;5.03 GiB free]&lt;br /&gt;
If &#039;&#039;&#039; PV /dev/sdd1   VG local2          lvm2 [&amp;lt;7.28 TiB / 0 free] &#039;&#039;&#039; then do&lt;br /&gt;
 lvreduce -l -1 --resizefs /dev/&amp;lt;volume_group&amp;gt;/&amp;lt;logical_volume&amp;gt;&lt;br /&gt;
 example : lvreduce -l -1 --resizefs /dev/local2/zinc_lv&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. Check which disk will act as the mirror&lt;br /&gt;
 fdisk -l&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. Extend the disk to volume group&lt;br /&gt;
 vgextend &amp;lt;volume_group&amp;gt; /dev/&amp;lt;disk&amp;gt;&lt;br /&gt;
 example: vgextend local2 /dev/sdc&lt;br /&gt;
To remove extension:&lt;br /&gt;
 vgreduce &amp;lt;volume_group&amp;gt; /dev/&amp;lt;disk&amp;gt;&lt;br /&gt;
 example: vgreduce local2 /dev/sdc&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
4. Convert current working logical volume to RAID1 and assign mirror&lt;br /&gt;
 lvconvert -m1 /dev/&amp;lt;volume_group&amp;gt;/&amp;lt;logical_volume&amp;gt; /dev/&amp;lt;disk&amp;gt;&lt;br /&gt;
 example: lvconvert -m1 /dev/local2/zinc_lv /dev/sdc&lt;br /&gt;
&lt;br /&gt;
5. It should report success; to check syncing progress:&lt;br /&gt;
 lvs -a -o +devices&lt;/div&gt;</summary>
		<author><name>Jgutie11</name></author>
	</entry>
	<entry>
		<id>http://wiki.docking.org/index.php?title=Using_LVM_To_Mount_Drives&amp;diff=13410</id>
		<title>Using LVM To Mount Drives</title>
		<link rel="alternate" type="text/html" href="http://wiki.docking.org/index.php?title=Using_LVM_To_Mount_Drives&amp;diff=13410"/>
		<updated>2021-03-26T23:13:05Z</updated>

		<summary type="html">&lt;p&gt;Jgutie11: /* Partition Disk */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Format Disk ==&lt;br /&gt;
Do these commands as root:&lt;br /&gt;
* fdisk -l&lt;br /&gt;
** Find disks/RAID disks to format&lt;br /&gt;
*parted&lt;br /&gt;
*print&lt;br /&gt;
*mklabel gpt&lt;br /&gt;
*mkpart logical 0GB 9995GB (based on the disk size reported by print above)&lt;br /&gt;
*print (to confirm)&lt;br /&gt;
*quit&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Mount storage disks using LVM ==&lt;br /&gt;
Reference guide: &#039;&#039;&#039;https://www.thegeekdiary.com/redhat-centos-a-beginners-guide-to-lvm-logical-volume-manager/&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Scan devices to be used as physical volumes (PV)&#039;&#039;&#039;&lt;br /&gt;
* lvmdiskscan&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Initialize the block devices&#039;&#039;&#039;&lt;br /&gt;
* pvcreate &amp;lt;drive1&amp;gt; &amp;lt;drive2&amp;gt; &amp;lt;...&amp;gt; ...&lt;br /&gt;
** Example: pvcreate /dev/sdb /dev/sdc /dev/sdd&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;To double check PV&#039;&#039;&#039;&lt;br /&gt;
* pvdisplay&lt;br /&gt;
* pvscan&lt;br /&gt;
* pvs&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;After PV, create a Volume Group (VG)&#039;&#039;&#039;&lt;br /&gt;
* vgcreate &amp;lt;name of volume&amp;gt; &amp;lt;drive1&amp;gt; &amp;lt;drive2&amp;gt; &amp;lt;...&amp;gt; ...&lt;br /&gt;
** Example: vgcreate soft2 /dev/sdb /dev/sdc /dev/sdd&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;To double check VG&#039;&#039;&#039;&lt;br /&gt;
* vgs &amp;lt;VG name&amp;gt;&lt;br /&gt;
* vgdisplay &amp;lt;VG name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; After VG, create a Logical Volume (LV) &#039;&#039;&#039;&lt;br /&gt;
* lvcreate &amp;lt;options&amp;gt; &amp;lt;name of LV&amp;gt; &amp;lt;VG Name&amp;gt;&lt;br /&gt;
** Example: lvcreate -l 100%FREE -n soft_lv soft2 &lt;br /&gt;
***-l is for storage space in percent, 100%FREE means use all space in VG&lt;br /&gt;
***-n is for name of LV&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Double Check LV &#039;&#039;&#039;&lt;br /&gt;
* lvs /dev/&amp;lt;VG NAME&amp;gt;/&amp;lt;LV NAME&amp;gt;&lt;br /&gt;
* lvdisplay /dev/&amp;lt;VG NAME&amp;gt;/&amp;lt;LV NAME&amp;gt;&lt;br /&gt;
* lvscan&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Last step, create a File System&#039;&#039;&#039;&lt;br /&gt;
* mkfs.ext4 &amp;lt;options&amp;gt; /dev/&amp;lt;VG NAME&amp;gt;/&amp;lt;LV NAME&amp;gt;&lt;br /&gt;
* Example: mkfs.ext4 /dev/soft2/soft_lv&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; It&#039;s time to mount &#039;&#039;&#039;&lt;br /&gt;
Make the directory first:&lt;br /&gt;
* mkdir &amp;lt;somewhere&amp;gt;&lt;br /&gt;
** Example: mkdir /export/soft2&lt;br /&gt;
* mount /dev/&amp;lt;VG NAME&amp;gt;/&amp;lt;LV NAME&amp;gt; &amp;lt;somewhere&amp;gt;&lt;br /&gt;
** Example: mount /dev/soft2/soft_lv /export/soft2/&lt;br /&gt;
Make it permanent by editing fstab:&lt;br /&gt;
* vim /etc/fstab&lt;br /&gt;
 /dev/&amp;lt;VG NAME&amp;gt;/&amp;lt;LV NAME&amp;gt;      &amp;lt;somewhere&amp;gt;       &amp;lt;type of file system&amp;gt;      &amp;lt;mount options&amp;gt;      &amp;lt;dump&amp;gt;         &amp;lt;fsck&amp;gt;&lt;br /&gt;
 example:&lt;br /&gt;
 /dev/soft2/soft_lv	/export/soft2	ext4	defaults	1	2&lt;br /&gt;
&lt;br /&gt;
== RAID1 with LVM on an existing used logical volume ==&lt;br /&gt;
&#039;&#039;&#039;Before starting, make sure that the disk that will mirror data is the same size or bigger than the drive being mirrored.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
1. Check if the volume has free space for LVM logs. Mirroring will fail if no free space is available.&lt;br /&gt;
 pvscan&lt;br /&gt;
 example output:&lt;br /&gt;
  PV /dev/sda2   VG centos          lvm2 [&amp;lt;222.57 GiB / 0    free]&lt;br /&gt;
  PV /dev/sdb1   VG centos          lvm2 [&amp;lt;223.57 GiB / 0    free]&lt;br /&gt;
  PV /dev/sdd1   VG local2          lvm2 [&amp;lt;7.28 TiB / &amp;lt;5.03 GiB free]&lt;br /&gt;
If &#039;&#039;&#039; PV /dev/sdd1   VG local2          lvm2 [&amp;lt;7.28 TiB / 0 free] &#039;&#039;&#039; then do&lt;br /&gt;
 lvreduce -l -1 --resizefs /dev/&amp;lt;volume_group&amp;gt;/&amp;lt;logical_volume&amp;gt;&lt;br /&gt;
 for example : lvreduce -l -1 --resizefs /dev/local2/zinc_lv&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. Check which disk will act as the mirror&lt;br /&gt;
 fdisk -l&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. Extend the disk to volume group&lt;br /&gt;
 vgextend &amp;lt;volume_group&amp;gt; /dev/&amp;lt;disk&amp;gt;&lt;br /&gt;
 for example: vgextend local2 /dev/sdc&lt;br /&gt;
To remove extension:&lt;br /&gt;
 vgreduce &amp;lt;volume_group&amp;gt; /dev/&amp;lt;disk&amp;gt;&lt;br /&gt;
 for example: vgreduce local2 /dev/sdc&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
4. Convert current working logical volume to RAID1 and assign mirror&lt;br /&gt;
 lvconvert -m1 /dev/&amp;lt;volume_group&amp;gt;/&amp;lt;logical_volume&amp;gt; /dev/&amp;lt;disk&amp;gt;&lt;br /&gt;
 for example: lvconvert -m1 /dev/local2/zinc_lv /dev/sdc&lt;br /&gt;
&lt;br /&gt;
5. It should report success; to check syncing progress:&lt;br /&gt;
 lvs -a -o +devices&lt;/div&gt;</summary>
		<author><name>Jgutie11</name></author>
	</entry>
	<entry>
		<id>http://wiki.docking.org/index.php?title=Zfs&amp;diff=13407</id>
		<title>Zfs</title>
		<link rel="alternate" type="text/html" href="http://wiki.docking.org/index.php?title=Zfs&amp;diff=13407"/>
		<updated>2021-03-25T21:26:03Z</updated>

		<summary type="html">&lt;p&gt;Jgutie11: /* Checking Disk Health and Integrity */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;ZFS - Zettabyte Filesystem&lt;br /&gt;
&lt;br /&gt;
== ZFS packages installation ==&lt;br /&gt;
https://www.symmcom.com/docs/how-tos/storages/how-to-install-zfs-on-centos-7&lt;br /&gt;
&lt;br /&gt;
The ZFS rpm package must match the CentOS version installed on the machine&lt;br /&gt;
* Check CentOS version&lt;br /&gt;
 $ cat /etc/centos-release&lt;br /&gt;
 CentOS Linux release 7.8.2003 (Core)&lt;br /&gt;
&lt;br /&gt;
* Install the ZFS-release package. In this case, you will need to install the package for CentOS 7.8&lt;br /&gt;
 $ yum install http://download.zfsonlinux.org/epel/zfs-release.el7_8.noarch.rpm&lt;br /&gt;
 &lt;br /&gt;
* Edit /etc/yum.repos.d/zfs.repo&lt;br /&gt;
The ZFS package that we want to install is zfs-kmod&lt;br /&gt;
 $ vim /etc/yum.repos.d/zfs.repo&lt;br /&gt;
 There are 2 items to change&lt;br /&gt;
 [zfs]&lt;br /&gt;
 name=ZFS on Linux for EL7 - dkms&lt;br /&gt;
 baseurl=http://download.zfsonlinux.org/epel/7.8/$basearch/&lt;br /&gt;
 enabled=1 -&amp;gt; &amp;lt;b&amp;gt;change to 0&amp;lt;/b&amp;gt;&lt;br /&gt;
 metadata_expire=7d&lt;br /&gt;
 gpgcheck=1&lt;br /&gt;
 gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-zfsonlinux&lt;br /&gt;
 [zfs-kmod]&lt;br /&gt;
 name=ZFS on Linux for EL7 - kmod&lt;br /&gt;
 baseurl=http://download.zfsonlinux.org/epel/7.8/kmod/$basearch/&lt;br /&gt;
 enabled=0 &amp;lt;b&amp;gt;change to 1&amp;lt;/b&amp;gt;&lt;br /&gt;
 metadata_expire=7d&lt;br /&gt;
 gpgcheck=1&lt;br /&gt;
 gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-zfsonlinux&lt;br /&gt;
&lt;br /&gt;
* Install ZFS&lt;br /&gt;
 $ yum install zfs&lt;br /&gt;
&lt;br /&gt;
* Useful commands to see if any disks are available&lt;br /&gt;
 $ lsblk -f &lt;br /&gt;
 $ fdisk -l &lt;br /&gt;
 $ fdisk -l  /dev/sd* |grep  &#039;^Disk&#039; |grep sectors |grep -v experimental&lt;br /&gt;
&lt;br /&gt;
== Beginning ZFS instances ==&lt;br /&gt;
&lt;br /&gt;
There are only two commands to interact with ZFS.  &lt;br /&gt;
&lt;br /&gt;
 zpool: used to create a ZFS vdev (virtual device).  vdevs are composed of physical devices.  &lt;br /&gt;
 zfs: used to create/interact with a ZFS dataset.  ZFS datasets are akin to logical volumes&lt;br /&gt;
&lt;br /&gt;
 # zpool creation syntax&lt;br /&gt;
 zpool create &amp;lt;poolname&amp;gt; &amp;lt;vdev(s)&amp;gt; &lt;br /&gt;
 # Create a zpool of six raidz2 vdevs, each with six drives.  Includes two SSDs to be used as a mirrored SLOG and one SSD as an L2ARC read cache.  (example command was run on qof) &lt;br /&gt;
 zpool create ex9 raidz2 sda sdb sdc sdd sde sdf raidz2 sdg sdh sdi sdj sdk sdl raidz2 sdm sdn sdo sdp sdq sdr raidz2 sds sdt sdu sdv sdw sdx raidz2 sdy sdz sdaa sdab sdac sdad raidz2 sdae sdaf sdag sdah sdai sdaj log mirror ata-INTEL_SSDSC2KG480G7_BTYM740603E0480BGN ata-INTEL_SSDSC2KG480G7_BTYM7406019K480BGN cache ata-INTEL_SSDSC2KG480G7_BTYM740602GN480BGN&lt;br /&gt;
  [root@qof ~]# zpool status&lt;br /&gt;
  pool: ex9&lt;br /&gt;
  state: ONLINE&lt;br /&gt;
  scan: none requested&lt;br /&gt;
  config:&lt;br /&gt;
  NAME                                            STATE     READ WRITE CKSUM&lt;br /&gt;
  ex9                                             ONLINE       0     0     0&lt;br /&gt;
  raidz2-0                                      ONLINE       0     0     0&lt;br /&gt;
    sda                                         ONLINE       0     0     0&lt;br /&gt;
    sdb                                         ONLINE       0     0     0&lt;br /&gt;
    sdc                                         ONLINE       0     0     0&lt;br /&gt;
    sdd                                         ONLINE       0     0     0&lt;br /&gt;
    sde                                         ONLINE       0     0     0&lt;br /&gt;
    sdf                                         ONLINE       0     0     0&lt;br /&gt;
  raidz2-1                                      ONLINE       0     0     0&lt;br /&gt;
    sdg                                         ONLINE       0     0     0&lt;br /&gt;
    sdh                                         ONLINE       0     0     0&lt;br /&gt;
    sdi                                         ONLINE       0     0     0&lt;br /&gt;
    sdj                                         ONLINE       0     0     0&lt;br /&gt;
    sdk                                         ONLINE       0     0     0&lt;br /&gt;
    sdl                                         ONLINE       0     0     0&lt;br /&gt;
  raidz2-2                                      ONLINE       0     0     0&lt;br /&gt;
    sdm                                         ONLINE       0     0     0&lt;br /&gt;
    sdn                                         ONLINE       0     0     0&lt;br /&gt;
    sdo                                         ONLINE       0     0     0&lt;br /&gt;
    sdp                                         ONLINE       0     0     0&lt;br /&gt;
    sdq                                         ONLINE       0     0     0&lt;br /&gt;
    sdr                                         ONLINE       0     0     0&lt;br /&gt;
  raidz2-3                                      ONLINE       0     0     0&lt;br /&gt;
    sds                                         ONLINE       0     0     0&lt;br /&gt;
    sdt                                         ONLINE       0     0     0&lt;br /&gt;
    sdu                                         ONLINE       0     0     0&lt;br /&gt;
    sdv                                         ONLINE       0     0     0&lt;br /&gt;
    sdw                                         ONLINE       0     0     0&lt;br /&gt;
    sdx                                         ONLINE       0     0     0&lt;br /&gt;
  raidz2-4                                      ONLINE       0     0     0&lt;br /&gt;
    sdy                                         ONLINE       0     0     0&lt;br /&gt;
    sdz                                         ONLINE       0     0     0&lt;br /&gt;
    sdaa                                        ONLINE       0     0     0&lt;br /&gt;
    sdab                                        ONLINE       0     0     0&lt;br /&gt;
    sdac                                        ONLINE       0     0     0&lt;br /&gt;
    sdad                                        ONLINE       0     0     0&lt;br /&gt;
  raidz2-5                                      ONLINE       0     0     0&lt;br /&gt;
    sdae                                        ONLINE       0     0     0&lt;br /&gt;
    sdaf                                        ONLINE       0     0     0&lt;br /&gt;
    sdag                                        ONLINE       0     0     0&lt;br /&gt;
    sdah                                        ONLINE       0     0     0&lt;br /&gt;
    sdai                                        ONLINE       0     0     0&lt;br /&gt;
    sdaj                                        ONLINE       0     0     0&lt;br /&gt;
  logs&lt;br /&gt;
  mirror-6                                      ONLINE       0     0     0&lt;br /&gt;
    ata-INTEL_SSDSC2KG480G7_BTYM740603E0480BGN  ONLINE       0     0     0&lt;br /&gt;
    ata-INTEL_SSDSC2KG480G7_BTYM7406019K480BGN  ONLINE       0     0     0&lt;br /&gt;
  cache&lt;br /&gt;
  ata-INTEL_SSDSC2KG480G7_BTYM740602GN480BGN    ONLINE       0     0     0&lt;br /&gt;
&lt;br /&gt;
Adding a zfs filesystem: &lt;br /&gt;
&lt;br /&gt;
Using qof as an example, I will create a child filesystem under ex9 named archive that will be mounted under /export/ex9/archive.  This archive will be used to back up user data.&lt;br /&gt;
&lt;br /&gt;
 -bash-4.2$ zfs list&lt;br /&gt;
 NAME          USED  AVAIL  REFER  MOUNTPOINT&lt;br /&gt;
 ex9          2.39T   249T  2.39T  /export/ex9&lt;br /&gt;
 -bash-4.2$ sudo zfs create -o mountpoint=/export/ex9/archive ex9/archive &lt;br /&gt;
 -bash-4.2$ zfs list&lt;br /&gt;
 NAME          USED  AVAIL  REFER  MOUNTPOINT&lt;br /&gt;
 ex9          2.39T   249T  2.39T  /export/ex9&lt;br /&gt;
 ex9/archive   192K   249T   192K  /export/ex9/archive&lt;br /&gt;
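&lt;br /&gt;
To double-check the properties on the new dataset (zfs get accepts a comma-separated property list):&lt;br /&gt;
 zfs get mountpoint,compression,quota ex9/archive&lt;br /&gt;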
&lt;br /&gt;
== Testing ==&lt;br /&gt;
&lt;br /&gt;
NOTE: It is important to reboot the machine to verify that the zpool stays mounted. &lt;br /&gt;
&lt;br /&gt;
== Add alias to machine and mount point to puppet ==&lt;br /&gt;
&lt;br /&gt;
Please see [http://wiki.docking.org/index.php/PuppetTricks#Adding_aliases_for_server_on_Alpha this section]&lt;br /&gt;
&lt;br /&gt;
== Adding L2ARC Read Cache to a zpool==&lt;br /&gt;
 # Look for available SSDs in /dev/disk/by-id/&lt;br /&gt;
 # Choose an available SSD to use for read cache.  Then decide which pool you want to put the cache on. &lt;br /&gt;
 Syntax: zpool add &amp;lt;zpool name&amp;gt; &amp;lt;cache/log&amp;gt; &amp;lt;path to disk&amp;gt;&lt;br /&gt;
 $ sudo zpool add ex6 cache /dev/disk/by-id/ata-INTEL_SSDSC2KG480G7_BTYM72830AV6480BGN&lt;br /&gt;
&lt;br /&gt;
== Tuning ZFS options ==&lt;br /&gt;
  # stores extended attributes as system attributes to improve performance&lt;br /&gt;
  $ zfs set xattr=sa &amp;lt;zfs dataset name&amp;gt; &lt;br /&gt;
  &lt;br /&gt;
  # Turn on ZFS lz4 compression.  Use this for compressible dataset such as many files with text &lt;br /&gt;
  $ zfs set compression=lz4 &amp;lt;zfs dataset name&amp;gt; &lt;br /&gt;
  &lt;br /&gt;
  # Turn off access time for improved disk performance (so that the OS doesn&#039;t write a new time every time a file is accessed)&lt;br /&gt;
  $ zfs set atime=off &amp;lt;zfs dataset name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
  NOTE: ZFS performance degrades tremendously when the zpool is over 80% used.  To avoid this, I have set a quota to 80% of the 248TB in qof/nfs-ex9.&lt;br /&gt;
  # To set a quota of 200TB on ZFS dataset:&lt;br /&gt;
  $ zfs set quota=200T &amp;lt;zfs dataset&amp;gt;&lt;br /&gt;
&lt;br /&gt;
  # To remove a quota from a ZFS dataset:&lt;br /&gt;
  $ zfs set quota=none &amp;lt;zfs dataset&amp;gt;&lt;br /&gt;
&lt;br /&gt;
By default, ZFS pools/mounts do not have ACLs active.  &lt;br /&gt;
  # to activate access control lists on a zpool&lt;br /&gt;
  $ sudo zfs set acltype=posixacl &amp;lt;pool name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Checking Disk Health and Integrity ==&lt;br /&gt;
Print a brief summary of all pools:&lt;br /&gt;
 zpool list&lt;br /&gt;
&lt;br /&gt;
Print a detailed status of each disk and status of pool:&lt;br /&gt;
 zpool status&lt;br /&gt;
&lt;br /&gt;
Clear read errors on a disk, if they are not anything serious:&lt;br /&gt;
 zpool clear &amp;lt;pool_name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Check data integrity; this traverses all the data in the pool once and verifies that all blocks can be read:&lt;br /&gt;
 zpool scrub &amp;lt;pool_name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To stop scrub:&lt;br /&gt;
 zpool scrub -s &amp;lt;pool_name&amp;gt;&lt;br /&gt;
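&lt;br /&gt;
Scrubs are commonly scheduled via cron; a sketch (the schedule and pool name are assumptions):&lt;br /&gt;
 # /etc/cron.d/zfs-scrub: scrub pool ex9 at 02:00 on the 1st of each month&lt;br /&gt;
 0 2 1 * * root /usr/sbin/zpool scrub ex9&lt;br /&gt;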
&lt;br /&gt;
== mount after reboot ==&lt;br /&gt;
 zfs set mountpoint=/export/db2 db2 &lt;br /&gt;
&lt;br /&gt;
== when you put in a new disk ==&lt;br /&gt;
 fdisk -l &lt;br /&gt;
to see what is new&lt;br /&gt;
&lt;br /&gt;
 sudo zpool create -f /srv/db3 raidz2 /dev/sdaa  /dev/sdab  /dev/sdac  /dev/sdad  /dev/sdae  /dev/sdaf  /dev/sdag  /dev/sdah  /dev/sdai  /dev/sdaj  /dev/sdak  /dev/sdal  &lt;br /&gt;
 sudo zpool add -f /srv/db3 raidz2  /dev/sdam  /dev/sdan  /dev/sdao  /dev/sdap  /dev/sdaq  /dev/sdar  /dev/sdas  /dev/sdat  /dev/sdau  /dev/sdav  /dev/sdaw  /dev/sdax&lt;br /&gt;
&lt;br /&gt;
 zfs unmount db3&lt;br /&gt;
&lt;br /&gt;
 zfs mount db3&lt;br /&gt;
&lt;br /&gt;
= latest = &lt;br /&gt;
 zpool create -f db3 raidz2  /dev/sdy /dev/sdz  /dev/sdaa  /dev/sdab  /dev/sdac  /dev/sdad  /dev/sdae  /dev/sdaf  /dev/sdag  /dev/sdah  /dev/sdai  /dev/sdaj&lt;br /&gt;
 zpool add -f db3 raidz2 /dev/sdak  /dev/sdal  /dev/sdam  /dev/sdan  /dev/sdao  /dev/sdap  /dev/sdaq  /dev/sdar  /dev/sdas  /dev/sdat  /dev/sdau  /dev/sdav&lt;br /&gt;
&lt;br /&gt;
 zpool create -f db4 raidz2 /dev/sdax /dev/sday /dev/sdaz /dev/sdba  /dev/sdbb  /dev/sdbc  /dev/sdbd  /dev/sdbe  /dev/sdbf  /dev/sdbg  /dev/sdbh  /dev/sdbi &lt;br /&gt;
 zpool add -f db4 raidz2 /dev/sdbj /dev/sdbk /dev/sdbl /dev/sdbm /dev/sdbn /dev/sdbo /dev/sdbp /dev/sdbq /dev/sdbr /dev/sdbs /dev/sdbt /dev/sdbu&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Fri Jan 19 2018 = &lt;br /&gt;
&lt;br /&gt;
 zpool create -f db5 raidz2 /dev/sdbw /dev/sdbx /dev/sdby /dev/sdbz /dev/sdca  /dev/sdcb  /dev/sdcc  /dev/sdcd  /dev/sdce  /dev/sdcf  /dev/sdcg  /dev/sdch&lt;br /&gt;
 zpool add -f db5 raidz2 /dev/sdci /dev/sdcj /dev/sdck /dev/sdcl /dev/sdcm /dev/sdcn /dev/sdco /dev/sdcp /dev/sdcq /dev/sdcr /dev/sdcs /dev/sdct&lt;br /&gt;
 zfs mount db5&lt;br /&gt;
&lt;br /&gt;
= Wed Jan 24 2018 = &lt;br /&gt;
On tsadi&lt;br /&gt;
 zpool create -f ex1 mirror /dev/sdaa /dev/sdab /dev/sdac /dev/sdad /dev/sdae&lt;br /&gt;
 zpool add -f ex1 mirror /dev/sdaf /dev/sdag /dev/sdah /dev/sdai /dev/sdaj&lt;br /&gt;
 zpool create -f ex2 mirror /dev/sdf /dev/sdg /dev/sdh /dev/sdi /dev/sdj&lt;br /&gt;
 zpool add -f ex2 /dev/sdk /dev/sdl /dev/sdm /dev/sdn /dev/sdo&lt;br /&gt;
 zpool create -f ex3 mirror /dev/sdp /dev/sdq /dev/sdr /dev/sds /dev/sdt&lt;br /&gt;
 zpool add -f ex3 mirror /dev/sdu /dev/sdv /dev/sdw /dev/sdx /dev/sdy&lt;br /&gt;
 zpool create -f ex4 mirror /dev/sdz /dev/sdak /dev/sdal&lt;br /&gt;
 zpool add -f ex4 mirror /dev/sdam /dev/sdan /dev/sdao&lt;br /&gt;
&lt;br /&gt;
On tsadi&lt;br /&gt;
 zpool create -f ex1 mirror /dev/sdaa /dev/sdab mirror /dev/sdac /dev/sdad mirror /dev/sdae /dev/sdaf mirror /dev/sdag /dev/sdah mirror  /dev/sdai /dev/sdaj&lt;br /&gt;
 zpool create -f ex2 mirror  /dev/sdf /dev/sdg mirror /dev/sdh /dev/sdi mirror /dev/sdj /dev/sdk mirror /dev/sdl /dev/sdm mirror /dev/sdn /dev/sdo&lt;br /&gt;
 zpool create -f ex3 mirror /dev/sdp /dev/sdq mirror /dev/sdr /dev/sds mirror /dev/sdt /dev/sdu mirror /dev/sdv /dev/sdw mirror /dev/sdx /dev/sdy&lt;br /&gt;
 zpool create -f ex4 mirror /dev/sdz /dev/sdak /dev/sdal  mirror /dev/sdam mirror /dev/sdan /dev/sdao&lt;br /&gt;
&lt;br /&gt;
On lamed&lt;br /&gt;
 zpool create -f ex5 mirror /dev/sdaa /dev/sdab mirror /dev/sdac /dev/sdad mirror /dev/sdae /dev/sdaf mirror /dev/sdag /dev/sdah mirror  /dev/sdai /dev/sdaj&lt;br /&gt;
 zpool create -f ex6 mirror  /dev/sda /dev/sdb mirror /dev/sdc /dev/sdd mirror /dev/sde /dev/sdf mirror /dev/sdg /dev/sdh mirror /dev/sdi /dev/sdj&lt;br /&gt;
 zpool create -f ex7 mirror  /dev/sdk /dev/sdl mirror /dev/sdm /dev/sdn mirror /dev/sdo /dev/sdp mirror /dev/sdq /dev/sdr mirror /dev/sds /dev/sdt&lt;br /&gt;
 zpool create -f ex8 mirror /dev/sdu /dev/sdv mirror /dev/sdw /dev/sdx mirror /dev/sdy /dev/sdz&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Sun Jan 19 2020 = &lt;br /&gt;
&lt;br /&gt;
on mem2,  sql system, note sda and sdc are system disks&lt;br /&gt;
&lt;br /&gt;
 zpool create -f sql1 raidz2  /dev/sdb /dev/sdc /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdk /dev/sdl /dev/sdm&lt;br /&gt;
 zpool add     -f sql1 raidz2  /dev/sdn /dev/sdo /dev/sdp /dev/sdq /dev/sdr /dev/sds /dev/sdt /dev/sdu /dev/sdv /dev/sdw /dev/sdx&lt;br /&gt;
&lt;br /&gt;
transform db4 on n-9-22 from z2 to z0&lt;br /&gt;
&lt;br /&gt;
 zpool destroy db4&lt;br /&gt;
 zpool create -f db4 raidz /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdk /dev/sdl /dev/sdm /dev/sdn /dev/sdo /dev/sdp /dev/sdq /dev/sdr /dev/sds /dev/sdt /dev/sdu /dev/sdv /dev/sdw /dev/sdx /dev/sdy&lt;br /&gt;
&lt;br /&gt;
zfs mount &lt;br /&gt;
== recovery from accidental pool destruction ==&lt;br /&gt;
 umount /mnt /mnt2&lt;br /&gt;
 mdadm -S /dev/md125 /dev/md126 /dev/md127&lt;br /&gt;
&lt;br /&gt;
 sfdisk -d /dev/sda &amp;lt; sda.sfdisk&lt;br /&gt;
 sfdisk -d /dev/sdb &amp;lt; sdb.sfdisk&lt;br /&gt;
 sfdisk /dev/sda &amp;lt; sdb.sfdisk&lt;br /&gt;
&lt;br /&gt;
 mdadm --detail /dev/md127&lt;br /&gt;
 mdadm -A -R /dev/md127 /dev/sdb2 /dev/sda2&lt;br /&gt;
 mdadm /dev/md127 -a /dev/sda2&lt;br /&gt;
 mdadm --detail /dev/md127&lt;br /&gt;
 echo check &amp;gt; /sys/block/md127/md/sync_action&lt;br /&gt;
 cat /proc/mdstat&lt;br /&gt;
&lt;br /&gt;
 mdadm --detail /dev/md126&lt;br /&gt;
 mdadm -A -R /dev/md126 /dev/sdb3 /dev/sda3&lt;br /&gt;
 mdadm /dev/md126 -a /dev/sda3&lt;br /&gt;
 mdadm --detail /dev/md126&lt;br /&gt;
 echo check &amp;gt; /sys/block/md126/md/sync_action&lt;br /&gt;
 cat /proc/mdstat&lt;br /&gt;
&lt;br /&gt;
Also switched the bios to boot from hd2 instead of hd1 (or something)&lt;br /&gt;
&lt;br /&gt;
* Recreate zpool with correct drives&lt;br /&gt;
* Point an instance photorec at each of the wiped drives set to recover files of the following types: .gz, .solv (custom definition)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
NOTE:  If you destroyed your zpool with command &#039;zpool destroy&#039;, you can use the command &#039;zpool import&#039; to view destroyed pools and recover the pool by doing &#039;zpool import &amp;lt;zpool name&amp;gt;&#039;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Thu Apr 16, 2020 ==&lt;br /&gt;
We destroyed the old db2 on abacus. We put in 20 new 7.68 TB disks and 2 new 2.5 TB disks&lt;br /&gt;
&lt;br /&gt;
 zpool create -f /scratch /dev/sdc /dev/sdd&lt;br /&gt;
&lt;br /&gt;
 zpool create -f /srv/db2 raidz2  /dev/sde  /dev/sdf /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdk /dev/sdl /dev/sdm /dev/sdn&lt;br /&gt;
 zpool add -f /srv/db2 raidz2 /dev/sdo /dev/sdp /dev/sdq /dev/sdr /dev/sds /dev/sdt /dev/sdu /dev/sdv /dev/sdw /dev/sdx&lt;br /&gt;
&lt;br /&gt;
OLD:&lt;br /&gt;
 sudo zpool create -f /srv/db3 raidz2 /dev/sdaa  /dev/sdab  /dev/sdac  /dev/sdad  /dev/sdae  /dev/sdaf  /dev/sdag  /dev/sdah  /dev/sdai  /dev/sdaj  /dev/sdak  /dev/sdal  &lt;br /&gt;
 sudo zpool add -f /srv/db3 raidz2  /dev/sdam  /dev/sdan  /dev/sdao  /dev/sdap  /dev/sdaq  /dev/sdar  /dev/sdas  /dev/sdat  /dev/sdau  /dev/sdav  /dev/sdaw  /dev/sdax&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Mon Apr 20 2020 ==&lt;br /&gt;
&lt;br /&gt;
 zpool create -f db2 raidz2  /dev/sdc  /dev/sdd  /dev/sde  /dev/sdf  /dev/sdg  /dev/sdh  /dev/sdi  /dev/sdj  /dev/sdk  /dev/sdl /dev/sdm  /dev/sdn &lt;br /&gt;
 zpool add -f db2 raidz2      /dev/sdo  /dev/sdp  /dev/sdq  /dev/sdr  /dev/sds  /dev/sdt  /dev/sdu  /dev/sdv  /dev/sdw  /dev/sdx /dev/sdy /dev/sdz&lt;br /&gt;
&lt;br /&gt;
 zpool create -f db3 raidz2 /dev/sdaa  /dev/sdab /dev/sdac  /dev/sdad  /dev/sdae  /dev/sdaf  /dev/sdag  /dev/sdah  /dev/sdai  /dev/sdaj   /dev/sdak /dev/sdal &lt;br /&gt;
 zpool add -f db3 raidz2      /dev/sdam  /dev/sdan  /dev/sdao  /dev/sdap  /dev/sdaq  /dev/sdar  /dev/sdas  /dev/sdat  /dev/sdau  /dev/sdav /dev/sdaw /dev/sdax&lt;br /&gt;
&lt;br /&gt;
 zpool create -f db5 raidz2  /dev/sday /dev/sdaz  /dev/sdba  /dev/sdbb /dev/sdbc  /dev/sdbd  /dev/sdbe  /dev/sdbf  /dev/sdbg  /dev/sdbh /dev/sdbi /dev/sdbj&lt;br /&gt;
 zpool add -f db5 raidz2      /dev/sdbk  /dev/sdbl  /dev/sdbm  /dev/sdbn  /dev/sdbo  /dev/sdbp  /dev/sdbq  /dev/sdbr  /dev/sdbs /dev/sdbt /dev/sdbu /dev/sdbv&lt;br /&gt;
&lt;br /&gt;
== Tue Apr 21 2020 ==&lt;br /&gt;
&lt;br /&gt;
Ben&#039;s commands:&lt;br /&gt;
&lt;br /&gt;
  fdisk -l 2&amp;gt;/dev/null | grep -o &amp;quot;zfs.*&amp;quot; &amp;gt; disk_ids&lt;br /&gt;
  split -n 3 disk_ids disk_id_&lt;br /&gt;
  db2_disks=`cat disk_id_aa`&lt;br /&gt;
  db3_disks=`cat disk_id_ab`&lt;br /&gt;
  db5_disks=`cat disk_id_ac`&lt;br /&gt;
  zpool create -f db2 raidz2 $db2_disks&lt;br /&gt;
  zpool create -f db3 raidz2 $db3_disks&lt;br /&gt;
  zpool create -f db5 raidz2 $db5_disks&lt;br /&gt;
  reboot&lt;br /&gt;
&lt;br /&gt;
Amended commands, Apr 22, based on advice from John that vdevs should be limited to 12 disks each:&lt;br /&gt;
&lt;br /&gt;
  fdisk -l 2&amp;gt;/dev/null | grep -o &amp;quot;zfs.*&amp;quot; &amp;gt; disk_ids&lt;br /&gt;
  split -n 6 disk_ids disk_id_&lt;br /&gt;
&lt;br /&gt;
  db2_disks_1=`cat disk_id_aa`&lt;br /&gt;
  db2_disks_2=`cat disk_id_ab`&lt;br /&gt;
&lt;br /&gt;
  db3_disks_1=`cat disk_id_ac`&lt;br /&gt;
  db3_disks_2=`cat disk_id_ad`&lt;br /&gt;
&lt;br /&gt;
  db5_disks_1=`cat disk_id_ae`&lt;br /&gt;
  db5_disks_2=`cat disk_id_af`&lt;br /&gt;
&lt;br /&gt;
  zpool create -f db2 raidz2 $db2_disks_1&lt;br /&gt;
  zpool add -f db2 raidz2 $db2_disks_2&lt;br /&gt;
  zpool create -f db3 raidz2 $db3_disks_1&lt;br /&gt;
  zpool add -f db3 raidz2 $db3_disks_2&lt;br /&gt;
  zpool create -f db5 raidz2 $db5_disks_1&lt;br /&gt;
  zpool add -f db5 raidz2 $db5_disks_2&lt;br /&gt;
&lt;br /&gt;
== Mon Jul 20 2020 ==&lt;br /&gt;
*sda sdb are system disks&lt;br /&gt;
*sdc sdd are 240GB SSD&lt;br /&gt;
*sde is 480GB SSD&lt;br /&gt;
&lt;br /&gt;
nfs-exb in n-1-30&lt;br /&gt;
&lt;br /&gt;
 zpool create -f exb raidz2 sdf sdg sdh sdi sdj sdk raidz2 sdl sdm sdn sdo sdp sdq raidz2 sdr sds sdt sdu sdv sdw raidz2 sdx sdy sdz sdaa sdab sdac raidz2 sdad sdae sdaf sdag sdah sdai raidz2 sdaj sdak sdal sdam sdan sdao log mirror sdc sdd cache sde&lt;br /&gt;
&lt;br /&gt;
== Tue Jul 21 2020 ==&lt;br /&gt;
&lt;br /&gt;
nfs-exc in n-1-109:&lt;br /&gt;
&lt;br /&gt;
 zpool create -f exc raidz2 sdf sdg sdh sdi sdj sdk\ &lt;br /&gt;
                     raidz2 sdl sdm sdn sdo sdp sdq\ &lt;br /&gt;
                     raidz2 sdr sds sdt sdu sdv sdw\ &lt;br /&gt;
                     raidz2 sdx sdy sdz sdaa sdab sdac\  &lt;br /&gt;
                     raidz2 sdad sdae sdaf sdag sdah sdai\ &lt;br /&gt;
                     raidz2 sdaj sdak sdal sdam sdan sdao\ &lt;br /&gt;
                     log mirror sdc sdd\ &lt;br /&gt;
                     cache sde&lt;br /&gt;
&lt;br /&gt;
nfs-exd in n-1-113:&lt;br /&gt;
&lt;br /&gt;
 zpool create -f exd raidz2 sdd sde sdf sdg sdh sdi\ &lt;br /&gt;
                     raidz2 sdj sdk sdl sdm sdn sdo\ &lt;br /&gt;
                     raidz2 sdp sdq sdr sds sdt sdu\ &lt;br /&gt;
                     raidz2 sdv sdw sdx sdy sdz sdaa\  &lt;br /&gt;
                     raidz2 sdab sdac sdad sdae sdaf sdag\ &lt;br /&gt;
                     raidz2 sdah sdai sdaj sdak sdal sdam\ &lt;br /&gt;
                     log mirror sdc sdan\ &lt;br /&gt;
                     cache sdao&lt;br /&gt;
&lt;br /&gt;
= Troubleshooting =&lt;br /&gt;
&lt;br /&gt;
== zpool destroy : Failed to unmount &amp;lt;device&amp;gt; - device busy ==&lt;br /&gt;
&lt;br /&gt;
The help text will advise you to check lsof or fuser, but really what you need to do is stop the nfs service&lt;br /&gt;
&lt;br /&gt;
  systemctl stop nfs&lt;br /&gt;
  umount /export/ex*&lt;br /&gt;
  zpool destroy ...&lt;br /&gt;
  zpool create ...&lt;br /&gt;
  zpool ...&lt;br /&gt;
  ...&lt;br /&gt;
  systemctl start nfs&lt;br /&gt;
&lt;br /&gt;
== zpool missing after reboot ==&lt;br /&gt;
This is due to zfs-import-cache failing to start at boot time.&lt;br /&gt;
 # check&lt;br /&gt;
 $ systemctl status zfs-import-cache.service&lt;br /&gt;
 # enable at boot time&lt;br /&gt;
 $ systemctl enable zfs-import-cache.service&lt;br /&gt;
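&lt;br /&gt;
If the pool is still missing after enabling the service, it can be imported and mounted by hand:&lt;br /&gt;
 $ zpool import &amp;lt;pool_name&amp;gt;&lt;br /&gt;
 $ zfs mount -a&lt;br /&gt;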
&lt;br /&gt;
== Example: Fixing degraded pool, replacing faulted disk ==&lt;br /&gt;
On Feb 22, 2019, one of nfs-ex9&#039;s disks became faulty.  &lt;br /&gt;
&lt;br /&gt;
 -bash-4.2$ &#039;&#039;&#039;zpool status&#039;&#039;&#039;&lt;br /&gt;
 pool: ex9&lt;br /&gt;
 state: DEGRADED&lt;br /&gt;
 status: One or more devices are faulted in response to persistent errors.&lt;br /&gt;
 	Sufficient replicas exist for the pool to continue functioning in a&lt;br /&gt;
 	degraded state.&lt;br /&gt;
 action: Replace the faulted device, or use &#039;zpool clear&#039; to mark the device&lt;br /&gt;
 	repaired.&lt;br /&gt;
   scan: scrub canceled on Fri Feb 22 11:31:25 2019&lt;br /&gt;
 config:&lt;br /&gt;
          raidz2-5                                      DEGRADED     0     0     0&lt;br /&gt;
    sdae                                        ONLINE       0     0     0&lt;br /&gt;
    sdaf                                        ONLINE       0     0     0&lt;br /&gt;
    sdag                                        ONLINE       0     0     0&lt;br /&gt;
    sdah                                        FAULTED     18     0     0  too many errors&lt;br /&gt;
    sdai                                        ONLINE       0     0     0&lt;br /&gt;
    sdaj                                        ONLINE       0     0     0&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
I did the following: &lt;br /&gt;
&lt;br /&gt;
 -bash-4.2$ &#039;&#039;&#039;sudo zpool offline ex9 sdb&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Then I went to the server room and saw that disk 1 still had a red light due to the fault.  I pulled the disk out and inserted a fresh one of the same brand, a Seagate Exos X12.  The server detected the new disk and assigned it the name /dev/sdb, just like the one I had pulled out.  Finally, I ran the following command. &lt;br /&gt;
&lt;br /&gt;
 -bash-4.2$ &#039;&#039;&#039;sudo zpool replace ex9 /dev/sdah&#039;&#039;&#039;&lt;br /&gt;
 -bash-4.2$ &#039;&#039;&#039;zpool status&#039;&#039;&#039;&lt;br /&gt;
  pool: ex9&lt;br /&gt;
 state: DEGRADED&lt;br /&gt;
 status: One or more devices is currently being resilvered.  The pool will&lt;br /&gt;
 continue to function, possibly in a degraded state.&lt;br /&gt;
 action: Wait for the resilver to complete.&lt;br /&gt;
  scan: resilver in progress since Tue Mar 19 14:06:33 2019&lt;br /&gt;
 1.37G scanned out of 51.8T at 127M/s, 118h33m to go&lt;br /&gt;
 37.9M resilvered, 0.00% done&lt;br /&gt;
 .&lt;br /&gt;
 .&lt;br /&gt;
 .&lt;br /&gt;
 	  raidz2-5                                      DEGRADED     0     0     0&lt;br /&gt;
    sdae                                        ONLINE       0     0     0&lt;br /&gt;
    sdaf                                        ONLINE       0     0     0&lt;br /&gt;
    sdag                                        ONLINE       0     0     0&lt;br /&gt;
    replacing-3                                 DEGRADED     0     0     0&lt;br /&gt;
      old                                       FAULTED     18     0     0  too many errors&lt;br /&gt;
      sdah                                      ONLINE       0     0     0  (resilvering)&lt;br /&gt;
    sdai                                        ONLINE       0     0     0&lt;br /&gt;
    sdaj                                        ONLINE       0     0     0&lt;br /&gt;
&lt;br /&gt;
Resilvering is the process of a disk being rebuilt from its parity group.  Once it is finished, you should be good to go again. &lt;br /&gt;
&lt;br /&gt;
For zayin/nfs-exa, some of the disks are named by id instead of the vdev-id.&lt;br /&gt;
 raidz2-4                  DEGRADED     0     0     0&lt;br /&gt;
 scsi-35000c500a7da67cb  ONLINE       0     0     0&lt;br /&gt;
 scsi-35000c500a7daa34f  ONLINE       0     0     0&lt;br /&gt;
 scsi-35000c500a7db39db  FAULTED      0     0     0  too many errors&lt;br /&gt;
 scsi-35000c500a7da6b97  ONLINE       0     0     0&lt;br /&gt;
 scsi-35000c500a7da265b  ONLINE       0     0     0&lt;br /&gt;
 scsi-35000c500a7da740f  ONLINE       0     0     0&lt;br /&gt;
&lt;br /&gt;
In this case, we have to determine the vdev name of the new disk that was just inserted, using dmesg. Look for log lines mentioning a new disk:&lt;br /&gt;
 $ dmesg | tail&lt;br /&gt;
 [14663327.192519] sd 0:0:38:0: [&#039;&#039;&#039;sdad&#039;&#039;&#039;] Spinning up disk...&lt;br /&gt;
 [14663327.192756] sd 0:0:38:0: Attached scsi generic sg27 type 0&lt;br /&gt;
 [14663328.193173] ........................ready&lt;br /&gt;
 [14663352.681625] sd 0:0:38:0: [&#039;&#039;&#039;sdad&#039;&#039;&#039;] 27344764928 512-byte logical blocks: (14.0 TB/12.7 TiB)&lt;br /&gt;
 [14663352.681627] sd 0:0:38:0: [sdad] 4096-byte physical blocks&lt;br /&gt;
 [14663352.687268] sd 0:0:38:0: [sdad] Write Protect is off&lt;br /&gt;
 [14663352.687273] sd 0:0:38:0: [sdad] Mode Sense: db 00 10 08&lt;br /&gt;
 [14663352.690847] sd 0:0:38:0: [sdad] Write cache: enabled, read cache: enabled, supports DPO and FUA&lt;br /&gt;
 [14663352.732297] sd 0:0:38:0: [sdad] Attached SCSI disk&lt;br /&gt;
&lt;br /&gt;
Once the name is determined, we can start the resilvering process &lt;br /&gt;
 $ zpool replace exa  scsi-35000c500a7db39db sdad&lt;br /&gt;
 # scsi-35000c500a7db39db is the id of the failed disk obtained from zpool status&lt;br /&gt;
 # sdad is the vdev-id of the new replacement disk determined above&lt;br /&gt;
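&lt;br /&gt;
Resilver progress can then be monitored from the scan line of zpool status:&lt;br /&gt;
 $ zpool status exa | grep scan&lt;br /&gt;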
&lt;br /&gt;
== Disk LED light ==&lt;br /&gt;
=== Identify failed disk by LED light ===&lt;br /&gt;
By disk_id&lt;br /&gt;
 # turn light off&lt;br /&gt;
 $ ledctl locate_off=/dev/disk/by-id/&amp;lt;disk_id&amp;gt; &lt;br /&gt;
 # turn light on&lt;br /&gt;
 $ ledctl locate=/dev/disk/by-id/&amp;lt;disk_id&amp;gt;&lt;br /&gt;
 Example&lt;br /&gt;
 $ ledctl locate_off=/dev/disk/by-id/scsi-35000c500a7d8137f &lt;br /&gt;
 $ ledctl locate=/dev/disk/by-id/scsi-35000c500a7d8137f &lt;br /&gt;
By vdev&lt;br /&gt;
 # turn light off&lt;br /&gt;
 $ ledctl locate_off=/dev/&amp;lt;vdev&amp;gt;&lt;br /&gt;
 # turn light on&lt;br /&gt;
 $ ledctl locate=/dev/&amp;lt;vdev&amp;gt;&lt;br /&gt;
 Example &lt;br /&gt;
 $ ledctl locate_off=/dev/sdaf&lt;br /&gt;
 $ ledctl locate=/dev/sdaf&lt;br /&gt;
&lt;br /&gt;
=== Reset light from LED light glitch ===&lt;br /&gt;
For qof/nfs-ex9, we had an issue with the disk LED for /dev/sdah still showing up red despite the resilvering occurring.  To return the disk LED to a normal status, issue the following command: &lt;br /&gt;
 $ &#039;&#039;&#039;sudo ledctl normal=/dev/&amp;lt;disk vdev id&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
 Example: $ &#039;&#039;&#039;sudo ledctl normal=/dev/sdah&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
or for zayin/nfs-exa, where disks are identified by id&lt;br /&gt;
 $ &#039;&#039;&#039;sudo ledctl normal=/dev/disk/by-id/&amp;lt;disk id&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
 Example: $ &#039;&#039;&#039;sudo ledctl normal=/dev/disk/by-id/scsi-35000c500a7db39db&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:Curator]][[Category:Sysadmin]]&lt;/div&gt;</summary>
		<author><name>Jgutie11</name></author>
	</entry>
	<entry>
		<id>http://wiki.docking.org/index.php?title=Zfs&amp;diff=13406</id>
		<title>Zfs</title>
		<link rel="alternate" type="text/html" href="http://wiki.docking.org/index.php?title=Zfs&amp;diff=13406"/>
		<updated>2021-03-25T21:25:51Z</updated>

		<summary type="html">&lt;p&gt;Jgutie11: /* Checking Disk Health and Integrity */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;ZFS - Zettabyte Filesystem&lt;br /&gt;
&lt;br /&gt;
== ZFS packages installation ==&lt;br /&gt;
https://www.symmcom.com/docs/how-tos/storages/how-to-install-zfs-on-centos-7&lt;br /&gt;
&lt;br /&gt;
The ZFS rpm package must match the CentOS version installed on the machine&lt;br /&gt;
* Check CentOS version&lt;br /&gt;
 $ cat /etc/centos-release&lt;br /&gt;
 CentOS Linux release 7.8.2003 (Core)&lt;br /&gt;
&lt;br /&gt;
* Install the ZFS-release package. In this case, you will need to install the package for CentOS 7.8&lt;br /&gt;
 $ yum install http://download.zfsonlinux.org/epel/zfs-release.el7_8.noarch.rpm&lt;br /&gt;
 &lt;br /&gt;
* Edit /etc/yum.repos.d/zfs.repo&lt;br /&gt;
The ZFS package that we want to install is zfs-kmod&lt;br /&gt;
 $ vim /etc/yum.repos.d/zfs.repo&lt;br /&gt;
 There are 2 items to change&lt;br /&gt;
 [zfs]&lt;br /&gt;
 name=ZFS on Linux for EL7 - dkms&lt;br /&gt;
 baseurl=http://download.zfsonlinux.org/epel/7.8/$basearch/&lt;br /&gt;
 enabled=1 -&amp;gt; &amp;lt;b&amp;gt;change to 0&amp;lt;/b&amp;gt;&lt;br /&gt;
 metadata_expire=7d&lt;br /&gt;
 gpgcheck=1&lt;br /&gt;
 gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-zfsonlinux&lt;br /&gt;
 [zfs-kmod]&lt;br /&gt;
 name=ZFS on Linux for EL7 - kmod&lt;br /&gt;
 baseurl=http://download.zfsonlinux.org/epel/7.8/kmod/$basearch/&lt;br /&gt;
 enabled=0 &amp;lt;b&amp;gt;change to 1&amp;lt;/b&amp;gt;&lt;br /&gt;
 metadata_expire=7d&lt;br /&gt;
 gpgcheck=1&lt;br /&gt;
 gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-zfsonlinux&lt;br /&gt;
&lt;br /&gt;
* Install ZFS&lt;br /&gt;
 $ yum install zfs&lt;br /&gt;
&lt;br /&gt;
* Useful commands to see if any disks are available&lt;br /&gt;
 $ lsblk -f &lt;br /&gt;
 $ fdisk -l &lt;br /&gt;
 $ fdisk -l  /dev/sd* |grep  &#039;^Disk&#039; |grep sectors |grep -v experimental&lt;br /&gt;
&lt;br /&gt;
== Beginning ZFS instances ==&lt;br /&gt;
&lt;br /&gt;
There are only two commands to interact with ZFS.  &lt;br /&gt;
&lt;br /&gt;
 zpool: used to create a ZFS vdev (virtual device).  vdevs are composed of physical devices.  &lt;br /&gt;
 zfs: used to create/interact with a ZFS dataset.  ZFS datasets are akin to logical volumes&lt;br /&gt;
&lt;br /&gt;
 # zpool creation syntax&lt;br /&gt;
 zpool create &amp;lt;poolname&amp;gt; &amp;lt;vdev(s)&amp;gt; &lt;br /&gt;
 # Create a zpool of six raidz2 vdevs, each with six drives.  Includes two SSDs to be used as a mirrored SLOG and one SSD as an L2ARC read cache.  (example command was run on qof) &lt;br /&gt;
 zpool create ex9 raidz2 sda sdb sdc sdd sde sdf raidz2 sdg sdh sdi sdj sdk sdl raidz2 sdm sdn sdo sdp sdq sdr raidz2 sds sdt sdu sdv sdw sdx raidz2 sdy sdz sdaa sdab sdac sdad raidz2 sdae sdaf sdag sdah sdai sdaj log mirror ata-INTEL_SSDSC2KG480G7_BTYM740603E0480BGN ata-INTEL_SSDSC2KG480G7_BTYM7406019K480BGN cache ata-INTEL_SSDSC2KG480G7_BTYM740602GN480BGN&lt;br /&gt;
  [root@qof ~]# zpool status&lt;br /&gt;
  pool: ex9&lt;br /&gt;
  state: ONLINE&lt;br /&gt;
  scan: none requested&lt;br /&gt;
  config:&lt;br /&gt;
  NAME                                            STATE     READ WRITE CKSUM&lt;br /&gt;
  ex9                                             ONLINE       0     0     0&lt;br /&gt;
  raidz2-0                                      ONLINE       0     0     0&lt;br /&gt;
    sda                                         ONLINE       0     0     0&lt;br /&gt;
    sdb                                         ONLINE       0     0     0&lt;br /&gt;
    sdc                                         ONLINE       0     0     0&lt;br /&gt;
    sdd                                         ONLINE       0     0     0&lt;br /&gt;
    sde                                         ONLINE       0     0     0&lt;br /&gt;
    sdf                                         ONLINE       0     0     0&lt;br /&gt;
  raidz2-1                                      ONLINE       0     0     0&lt;br /&gt;
    sdg                                         ONLINE       0     0     0&lt;br /&gt;
    sdh                                         ONLINE       0     0     0&lt;br /&gt;
    sdi                                         ONLINE       0     0     0&lt;br /&gt;
    sdj                                         ONLINE       0     0     0&lt;br /&gt;
    sdk                                         ONLINE       0     0     0&lt;br /&gt;
    sdl                                         ONLINE       0     0     0&lt;br /&gt;
  raidz2-2                                      ONLINE       0     0     0&lt;br /&gt;
    sdm                                         ONLINE       0     0     0&lt;br /&gt;
    sdn                                         ONLINE       0     0     0&lt;br /&gt;
    sdo                                         ONLINE       0     0     0&lt;br /&gt;
    sdp                                         ONLINE       0     0     0&lt;br /&gt;
    sdq                                         ONLINE       0     0     0&lt;br /&gt;
    sdr                                         ONLINE       0     0     0&lt;br /&gt;
  raidz2-3                                      ONLINE       0     0     0&lt;br /&gt;
    sds                                         ONLINE       0     0     0&lt;br /&gt;
    sdt                                         ONLINE       0     0     0&lt;br /&gt;
    sdu                                         ONLINE       0     0     0&lt;br /&gt;
    sdv                                         ONLINE       0     0     0&lt;br /&gt;
    sdw                                         ONLINE       0     0     0&lt;br /&gt;
    sdx                                         ONLINE       0     0     0&lt;br /&gt;
  raidz2-4                                      ONLINE       0     0     0&lt;br /&gt;
    sdy                                         ONLINE       0     0     0&lt;br /&gt;
    sdz                                         ONLINE       0     0     0&lt;br /&gt;
    sdaa                                        ONLINE       0     0     0&lt;br /&gt;
    sdab                                        ONLINE       0     0     0&lt;br /&gt;
    sdac                                        ONLINE       0     0     0&lt;br /&gt;
    sdad                                        ONLINE       0     0     0&lt;br /&gt;
  raidz2-5                                      ONLINE       0     0     0&lt;br /&gt;
    sdae                                        ONLINE       0     0     0&lt;br /&gt;
    sdaf                                        ONLINE       0     0     0&lt;br /&gt;
    sdag                                        ONLINE       0     0     0&lt;br /&gt;
    sdah                                        ONLINE       0     0     0&lt;br /&gt;
    sdai                                        ONLINE       0     0     0&lt;br /&gt;
    sdaj                                        ONLINE       0     0     0&lt;br /&gt;
  logs&lt;br /&gt;
  mirror-6                                      ONLINE       0     0     0&lt;br /&gt;
    ata-INTEL_SSDSC2KG480G7_BTYM740603E0480BGN  ONLINE       0     0     0&lt;br /&gt;
    ata-INTEL_SSDSC2KG480G7_BTYM7406019K480BGN  ONLINE       0     0     0&lt;br /&gt;
  cache&lt;br /&gt;
  ata-INTEL_SSDSC2KG480G7_BTYM740602GN480BGN    ONLINE       0     0     0&lt;br /&gt;
&lt;br /&gt;
Adding a zfs filesystem: &lt;br /&gt;
&lt;br /&gt;
Using qof as an example, I will create a child filesystem under ex9 named archive, mounted at /export/ex9/archive.  This archive will be used to back up user data.&lt;br /&gt;
&lt;br /&gt;
 -bash-4.2$ zfs list&lt;br /&gt;
 NAME          USED  AVAIL  REFER  MOUNTPOINT&lt;br /&gt;
 ex9          2.39T   249T  2.39T  /export/ex9&lt;br /&gt;
 -bash-4.2$ sudo zfs create -o mountpoint=/export/ex9/archive ex9/archive &lt;br /&gt;
 -bash-4.2$ zfs list&lt;br /&gt;
 NAME          USED  AVAIL  REFER  MOUNTPOINT&lt;br /&gt;
 ex9          2.39T   249T  2.39T  /export/ex9&lt;br /&gt;
 ex9/archive   192K   249T   192K  /export/ex9/archive&lt;br /&gt;
&lt;br /&gt;
== Testing ==&lt;br /&gt;
&lt;br /&gt;
NOTE: It is important to reboot the machine to verify that the zpool comes back (is imported and mounted) automatically. &lt;br /&gt;
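&lt;br /&gt;
A minimal post-reboot check, using the ex9 pool and mountpoint from the example above:&lt;br /&gt;
 # pool should report ONLINE&lt;br /&gt;
 $ zpool status ex9&lt;br /&gt;
 # dataset should show its mountpoint&lt;br /&gt;
 $ zfs list ex9&lt;br /&gt;
 # mountpoint should actually be mounted&lt;br /&gt;
 $ df -h /export/ex9&lt;br /&gt;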
&lt;br /&gt;
== Add alias to machine and mount point to puppet ==&lt;br /&gt;
&lt;br /&gt;
Please see [http://wiki.docking.org/index.php/PuppetTricks#Adding_aliases_for_server_on_Alpha this section]&lt;br /&gt;
&lt;br /&gt;
== Adding L2ARC Read Cache to a zpool==&lt;br /&gt;
 # Look for available SSDs in /dev/disk/by-id/&lt;br /&gt;
 # Choose an available SSD to use for read cache.  Then decide which pool you want to put the cache on. &lt;br /&gt;
 Syntax: zpool add &amp;lt;zpool name&amp;gt; &amp;lt;cache/log&amp;gt; &amp;lt;path to disk&amp;gt;&lt;br /&gt;
 $ sudo zpool add ex6 cache /dev/disk/by-id/ata-INTEL_SSDSC2KG480G7_BTYM72830AV6480BGN&lt;br /&gt;
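&lt;br /&gt;
To confirm the cache device was added, zpool iostat -v lists cache devices per pool (ex6 is the pool from the example above):&lt;br /&gt;
 $ zpool iostat -v ex6&lt;br /&gt;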
&lt;br /&gt;
== Tuning ZFS options ==&lt;br /&gt;
  # stores extended attributes as system attributes to improve performance&lt;br /&gt;
  $ zfs set xattr=sa &amp;lt;zfs dataset name&amp;gt; &lt;br /&gt;
  &lt;br /&gt;
  # Turn on ZFS lz4 compression.  Use this for compressible dataset such as many files with text &lt;br /&gt;
  $ zfs set compression=lz4 &amp;lt;zfs dataset name&amp;gt; &lt;br /&gt;
  &lt;br /&gt;
  # Turn off access time for improved disk performance (so that the OS doesn&#039;t write a new time every time a file is accessed)&lt;br /&gt;
  $ zfs set atime=off &amp;lt;zfs dataset name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
  NOTE: ZFS performance degrades tremendously when the zpool is over 80% used.  To avoid this, I have set a quota of 200T, about 80% of the 248T in qof/nfs-ex9.&lt;br /&gt;
  # To set a quota of 200TB on ZFS dataset:&lt;br /&gt;
  $ zfs set quota=200T &amp;lt;zfs dataset&amp;gt;&lt;br /&gt;
&lt;br /&gt;
  # To remove a quota from a ZFS dataset:&lt;br /&gt;
  $ zfs set quota=none &amp;lt;zfs dataset&amp;gt;&lt;br /&gt;
&lt;br /&gt;
By default, ZFS pools/mounts do not have ACLs active.  &lt;br /&gt;
  # to activate access control lists on a zpool&lt;br /&gt;
  $ sudo zfs set acltype=posixacl &amp;lt;pool name&amp;gt;&lt;br /&gt;
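&lt;br /&gt;
To double-check the properties set in this section on a dataset (ex9 used as an example name):&lt;br /&gt;
  $ zfs get xattr,compression,atime,quota,acltype ex9&lt;br /&gt;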
&lt;br /&gt;
== Checking Disk Health and Integrity ==&lt;br /&gt;
Print a brief summary of all pools:&lt;br /&gt;
 zpool list&lt;br /&gt;
&lt;br /&gt;
Print a detailed status of each disk and status of pool:&lt;br /&gt;
 zpool status&lt;br /&gt;
&lt;br /&gt;
Clear error counters on a disk, if the errors were transient and not anything serious:&lt;br /&gt;
 zpool clear &amp;lt;pool_name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Check data integrity; a scrub traverses all the data in the pool once and verifies that all blocks can be read:&lt;br /&gt;
 zpool scrub &amp;lt;pool_name&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
To stop scrub:&lt;br /&gt;
 zpool scrub -s &amp;lt;pool_name&amp;gt;&lt;br /&gt;
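&lt;br /&gt;
Scrubs are commonly run on a schedule.  A sketch of a monthly scrub via cron; the file name and pool name here are assumptions, adjust to taste:&lt;br /&gt;
 # hypothetical /etc/cron.d/zfs-scrub: scrub ex9 at 02:00 on the 1st of each month&lt;br /&gt;
 0 2 1 * * root /usr/sbin/zpool scrub ex9&lt;br /&gt;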
&lt;br /&gt;
== mount after reboot ==&lt;br /&gt;
 zfs set mountpoint=/export/db2 db2 &lt;br /&gt;
&lt;br /&gt;
== when you put in a new disk ==&lt;br /&gt;
 fdisk -l &lt;br /&gt;
to see what is new&lt;br /&gt;
&lt;br /&gt;
 sudo zpool create -f /srv/db3 raidz2 /dev/sdaa  /dev/sdab  /dev/sdac  /dev/sdad  /dev/sdae  /dev/sdaf  /dev/sdag  /dev/sdah  /dev/sdai  /dev/sdaj  /dev/sdak  /dev/sdal  &lt;br /&gt;
 sudo zpool add -f /srv/db3 raidz2  /dev/sdam  /dev/sdan  /dev/sdao  /dev/sdap  /dev/sdaq  /dev/sdar  /dev/sdas  /dev/sdat  /dev/sdau  /dev/sdav  /dev/sdaw  /dev/sdax&lt;br /&gt;
&lt;br /&gt;
 zfs unmount db3&lt;br /&gt;
&lt;br /&gt;
 zfs mount db3&lt;br /&gt;
&lt;br /&gt;
= latest = &lt;br /&gt;
 zpool create -f db3 raidz2  /dev/sdy /dev/sdz  /dev/sdaa  /dev/sdab  /dev/sdac  /dev/sdad  /dev/sdae  /dev/sdaf  /dev/sdag  /dev/sdah  /dev/sdai  /dev/sdaj&lt;br /&gt;
 zpool add -f db3 raidz2 /dev/sdak  /dev/sdal  /dev/sdam  /dev/sdan  /dev/sdao  /dev/sdap  /dev/sdaq  /dev/sdar  /dev/sdas  /dev/sdat  /dev/sdau  /dev/sdav&lt;br /&gt;
&lt;br /&gt;
 zpool create -f db4 raidz2 /dev/sdax /dev/sday /dev/sdaz /dev/sdba  /dev/sdbb  /dev/sdbc  /dev/sdbd  /dev/sdbe  /dev/sdbf  /dev/sdbg  /dev/sdbh  /dev/sdbi &lt;br /&gt;
 zpool add -f db4 raidz2 /dev/sdbj /dev/sdbk /dev/sdbl /dev/sdbm /dev/sdbn /dev/sdbo /dev/sdbp /dev/sdbq /dev/sdbr /dev/sdbs /dev/sdbt /dev/sdbu&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Fri Jan 19 2018 = &lt;br /&gt;
&lt;br /&gt;
 zpool create -f db5 raidz2 /dev/sdbw /dev/sdbx /dev/sdby /dev/sdbz /dev/sdca  /dev/sdcb  /dev/sdcc  /dev/sdcd  /dev/sdce  /dev/sdcf  /dev/sdcg  /dev/sdch&lt;br /&gt;
 zpool add -f db5 raidz2 /dev/sdci /dev/sdcj /dev/sdck /dev/sdcl /dev/sdcm /dev/sdcn /dev/sdco /dev/sdcp /dev/sdcq /dev/sdcr /dev/sdcs /dev/sdct&lt;br /&gt;
 zfs mount db5&lt;br /&gt;
&lt;br /&gt;
= Wed Jan 24 2018 = &lt;br /&gt;
On tsadi&lt;br /&gt;
 zpool create -f ex1 mirror /dev/sdaa /dev/sdab /dev/sdac /dev/sdad /dev/sdae&lt;br /&gt;
 zpool add -f ex1 mirror /dev/sdaf /dev/sdag /dev/sdah /dev/sdai /dev/sdaj&lt;br /&gt;
 zpool create -f ex2 mirror /dev/sdf /dev/sdg /dev/sdh /dev/sdi /dev/sdj&lt;br /&gt;
 zpool add -f ex2 /dev/sdk /dev/sdl /dev/sdm /dev/sdn /dev/sdo&lt;br /&gt;
 zpool create -f ex3 mirror /dev/sdp /dev/sdq /dev/sdr /dev/sds /dev/sdt&lt;br /&gt;
 zpool add -f ex3 mirror /dev/sdu /dev/sdv /dev/sdw /dev/sdx /dev/sdy&lt;br /&gt;
 zpool create -f ex4 mirror /dev/sdz /dev/sdak /dev/sdal&lt;br /&gt;
 zpool add -f ex4 mirror /dev/sdam /dev/sdan /dev/sdao&lt;br /&gt;
&lt;br /&gt;
On tsadi&lt;br /&gt;
 zpool create -f ex1 mirror /dev/sdaa /dev/sdab mirror /dev/sdac /dev/sdad mirror /dev/sdae /dev/sdaf mirror /dev/sdag /dev/sdah mirror  /dev/sdai /dev/sdaj&lt;br /&gt;
 zpool create -f ex2 mirror  /dev/sdf /dev/sdg mirror /dev/sdh /dev/sdi mirror /dev/sdj /dev/sdk mirror /dev/sdl /dev/sdm mirror /dev/sdn /dev/sdo&lt;br /&gt;
 zpool create -f ex3 mirror /dev/sdp /dev/sdq mirror /dev/sdr /dev/sds mirror /dev/sdt /dev/sdu mirror /dev/sdv /dev/sdw mirror /dev/sdx /dev/sdy&lt;br /&gt;
 zpool create -f ex4 mirror /dev/sdz /dev/sdak /dev/sdal  mirror /dev/sdam mirror /dev/sdan /dev/sdao&lt;br /&gt;
&lt;br /&gt;
On lamed&lt;br /&gt;
 zpool create -f ex5 mirror /dev/sdaa /dev/sdab mirror /dev/sdac /dev/sdad mirror /dev/sdae /dev/sdaf mirror /dev/sdag /dev/sdah mirror  /dev/sdai /dev/sdaj&lt;br /&gt;
 zpool create -f ex6 mirror  /dev/sda /dev/sdb mirror /dev/sdc /dev/sdd mirror /dev/sde /dev/sdf mirror /dev/sdg /dev/sdh mirror /dev/sdi /dev/sdj&lt;br /&gt;
 zpool create -f ex7 mirror  /dev/sdk /dev/sdl mirror /dev/sdm /dev/sdn mirror /dev/sdo /dev/sdp mirror /dev/sdq /dev/sdr mirror /dev/sds /dev/sdt&lt;br /&gt;
 zpool create -f ex8 mirror /dev/sdu /dev/sdv mirror /dev/sdw /dev/sdx mirror /dev/sdy /dev/sdz&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Sun Jan 19 2020 = &lt;br /&gt;
&lt;br /&gt;
on mem2,  sql system, note sda and sdc are system disks&lt;br /&gt;
&lt;br /&gt;
 zpool create -f sql1 raidz2  /dev/sdb /dev/sdc /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdk /dev/sdl /dev/sdm&lt;br /&gt;
 zpool add     -f sql1 raidz2  /dev/sdn /dev/sdo /dev/sdp /dev/sdq /dev/sdr /dev/sds /dev/sdt /dev/sdu /dev/sdv /dev/sdw /dev/sdx&lt;br /&gt;
&lt;br /&gt;
transform db4 on n-9-22 from raidz2 to raidz (single parity)&lt;br /&gt;
&lt;br /&gt;
 zpool destroy db4&lt;br /&gt;
 zpool create -f db4 raidz /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdk /dev/sdl /dev/sdm /dev/sdn /dev/sdo /dev/sdp /dev/sdq /dev/sdr /dev/sds /dev/sdt /dev/sdu /dev/sdv /dev/sdw /dev/sdx /dev/sdy&lt;br /&gt;
&lt;br /&gt;
zfs mount &lt;br /&gt;
== recovery from accidental pool destruction ==&lt;br /&gt;
 umount /mnt /mnt2&lt;br /&gt;
 mdadm -S /dev/md125 /dev/md126 /dev/md127&lt;br /&gt;
&lt;br /&gt;
 sfdisk -d /dev/sda &amp;gt; sda.sfdisk&lt;br /&gt;
 sfdisk -d /dev/sdb &amp;gt; sdb.sfdisk&lt;br /&gt;
 sfdisk /dev/sda &amp;lt; sdb.sfdisk&lt;br /&gt;
&lt;br /&gt;
 mdadm --detail /dev/md127&lt;br /&gt;
 mdadm -A -R /dev/md127 /dev/sdb2 /dev/sda2&lt;br /&gt;
 mdadm /dev/md127 -a /dev/sda2&lt;br /&gt;
 mdadm --detail /dev/md127&lt;br /&gt;
 echo check &amp;gt; /sys/block/md127/md/sync_action&lt;br /&gt;
 cat /proc/mdstat&lt;br /&gt;
&lt;br /&gt;
 mdadm --detail /dev/md126&lt;br /&gt;
 mdadm -A -R /dev/md126 /dev/sdb3 /dev/sda3&lt;br /&gt;
 mdadm /dev/md126 -a /dev/sda3&lt;br /&gt;
 mdadm --detail /dev/md126&lt;br /&gt;
 echo check &amp;gt; /sys/block/md126/md/sync_action&lt;br /&gt;
 cat /proc/mdstat&lt;br /&gt;
&lt;br /&gt;
Also switched the BIOS to boot from hd2 instead of hd1 (or something)&lt;br /&gt;
&lt;br /&gt;
* Recreate zpool with correct drives&lt;br /&gt;
* Point an instance of photorec at each of the wiped drives, set to recover files of the following types: .gz, .solv (custom definition)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
NOTE:  If you destroyed your zpool with the command &#039;zpool destroy&#039;, you can use &#039;zpool import -D&#039; to view destroyed pools, and recover one with &#039;zpool import -D &amp;lt;zpool name&amp;gt;&#039;.&lt;br /&gt;
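&lt;br /&gt;
For example (db4, the pool destroyed above, is used as the name; any destroyed pool works the same way):&lt;br /&gt;
 # list destroyed pools that are still recoverable&lt;br /&gt;
 $ zpool import -D&lt;br /&gt;
 # recover one of them by name&lt;br /&gt;
 $ zpool import -D db4&lt;br /&gt;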
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Thu Apr 16, 2020 ==&lt;br /&gt;
We destroyed the old db2 on abacus and put in 20 new 7.68 TB disks and 2 new 2.5 TB disks.&lt;br /&gt;
&lt;br /&gt;
 zpool create -f /scratch /dev/sdc /dev/sdd&lt;br /&gt;
&lt;br /&gt;
 zpool create -f /srv/db2 raidz2  /dev/sde  /dev/sdf /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdk /dev/sdl /dev/sdm /dev/sdn&lt;br /&gt;
 zpool add -f /srv/db2 raidz2 /dev/sdo /dev/sdp /dev/sdq /dev/sdr /dev/sds /dev/sdt /dev/sdu /dev/sdv /dev/sdw /dev/sdx&lt;br /&gt;
&lt;br /&gt;
OLD:&lt;br /&gt;
 sudo zpool create -f /srv/db3 raidz2 /dev/sdaa  /dev/sdab  /dev/sdac  /dev/sdad  /dev/sdae  /dev/sdaf  /dev/sdag  /dev/sdah  /dev/sdai  /dev/sdaj  /dev/sdak  /dev/sdal  &lt;br /&gt;
 sudo zpool add -f /srv/db3 raidz2  /dev/sdam  /dev/sdan  /dev/sdao  /dev/sdap  /dev/sdaq  /dev/sdar  /dev/sdas  /dev/sdat  /dev/sdau  /dev/sdav  /dev/sdaw  /dev/sdax&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Mon Apr 20 2020 ==&lt;br /&gt;
&lt;br /&gt;
 zpool create -f db2 raidz2  /dev/sdc  /dev/sdd  /dev/sde  /dev/sdf  /dev/sdg  /dev/sdh  /dev/sdi  /dev/sdj  /dev/sdk  /dev/sdl /dev/sdm  /dev/sdn &lt;br /&gt;
 zpool add -f db2 raidz2      /dev/sdo  /dev/sdp  /dev/sdq  /dev/sdr  /dev/sds  /dev/sdt  /dev/sdu  /dev/sdv  /dev/sdw  /dev/sdx /dev/sdy /dev/sdz&lt;br /&gt;
&lt;br /&gt;
 zpool create -f db3 raidz2 /dev/sdaa  /dev/sdab /dev/sdac  /dev/sdad  /dev/sdae  /dev/sdaf  /dev/sdag  /dev/sdah  /dev/sdai  /dev/sdaj   /dev/sdak /dev/sdal &lt;br /&gt;
 zpool add -f db3 raidz2      /dev/sdam  /dev/sdan  /dev/sdao  /dev/sdap  /dev/sdaq  /dev/sdar  /dev/sdas  /dev/sdat  /dev/sdau  /dev/sdav /dev/sdaw /dev/sdax&lt;br /&gt;
&lt;br /&gt;
 zpool create -f db5 raidz2  /dev/sday /dev/sdaz  /dev/sdba  /dev/sdbb /dev/sdbc  /dev/sdbd  /dev/sdbe  /dev/sdbf  /dev/sdbg  /dev/sdbh /dev/sdbi /dev/sdbj&lt;br /&gt;
 zpool add -f db5 raidz2      /dev/sdbk  /dev/sdbl  /dev/sdbm  /dev/sdbn  /dev/sdbo  /dev/sdbp  /dev/sdbq  /dev/sdbr  /dev/sdbs /dev/sdbt /dev/sdbu /dev/sdbv&lt;br /&gt;
&lt;br /&gt;
== Tue Apr 21 2020 ==&lt;br /&gt;
&lt;br /&gt;
Ben&#039;s commands:&lt;br /&gt;
&lt;br /&gt;
  fdisk -l 2&amp;gt;/dev/null | grep -o &amp;quot;zfs.*&amp;quot; &amp;gt; disk_ids&lt;br /&gt;
  split -n 3 disk_ids disk_id_&lt;br /&gt;
  db2_disks=`cat disk_id_aa`&lt;br /&gt;
  db3_disks=`cat disk_id_ab`&lt;br /&gt;
  db5_disks=`cat disk_id_ac`&lt;br /&gt;
  zpool create -f db2 raidz2 $db2_disks&lt;br /&gt;
  zpool create -f db3 raidz2 $db3_disks&lt;br /&gt;
  zpool create -f db5 raidz2 $db5_disks&lt;br /&gt;
  reboot&lt;br /&gt;
&lt;br /&gt;
Amended commands, Apr 22 - based on advice from John that vdevs should be limited to 12 disks each:&lt;br /&gt;
&lt;br /&gt;
  fdisk -l 2&amp;gt;/dev/null | grep -o &amp;quot;zfs.*&amp;quot; &amp;gt; disk_ids&lt;br /&gt;
  split -n 6 disk_ids disk_id_&lt;br /&gt;
&lt;br /&gt;
  db2_disks_1=`cat disk_id_aa`&lt;br /&gt;
  db2_disks_2=`cat disk_id_ab`&lt;br /&gt;
&lt;br /&gt;
  db3_disks_1=`cat disk_id_ac`&lt;br /&gt;
  db3_disks_2=`cat disk_id_ad`&lt;br /&gt;
&lt;br /&gt;
  db5_disks_1=`cat disk_id_ae`&lt;br /&gt;
  db5_disks_2=`cat disk_id_af`&lt;br /&gt;
&lt;br /&gt;
  zpool create -f db2 raidz2 $db2_disks_1&lt;br /&gt;
  zpool add -f db2 raidz2 $db2_disks_2&lt;br /&gt;
  zpool create -f db3 raidz2 $db3_disks_1&lt;br /&gt;
  zpool add -f db3 raidz2 $db3_disks_2&lt;br /&gt;
  zpool create -f db5 raidz2 $db5_disks_1&lt;br /&gt;
  zpool add -f db5 raidz2 $db5_disks_2&lt;br /&gt;
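&lt;br /&gt;
A quick sanity check on the split, assuming 72 zfs-labelled disks were found, is that each of the six files lists exactly 12 disks:&lt;br /&gt;
  wc -l disk_id_*&lt;br /&gt;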
&lt;br /&gt;
== Mon Jul 20 2020 ==&lt;br /&gt;
*sda sdb are system disks&lt;br /&gt;
*sdc sdd are 240GB SSD&lt;br /&gt;
*sde is 480GB SSD&lt;br /&gt;
&lt;br /&gt;
nfs-exb in n-1-30&lt;br /&gt;
&lt;br /&gt;
 zpool create -f exb raidz2 sdf sdg sdh sdi sdj sdk raidz2 sdl sdm sdn sdo sdp sdq raidz2 sdr sds sdt sdu sdv sdw raidz2 sdx sdy sdz sdaa sdab sdac raidz2 sdad sdae sdaf sdag sdah sdai raidz2 sdaj sdak sdal sdam sdan sdao log mirror sdc sdd cache sde&lt;br /&gt;
&lt;br /&gt;
== Tue Jul 21 2020 ==&lt;br /&gt;
&lt;br /&gt;
nfs-exc in n-1-109:&lt;br /&gt;
&lt;br /&gt;
 zpool create -f exc raidz2 sdf sdg sdh sdi sdj sdk\ &lt;br /&gt;
                     raidz2 sdl sdm sdn sdo sdp sdq\ &lt;br /&gt;
                     raidz2 sdr sds sdt sdu sdv sdw\ &lt;br /&gt;
                     raidz2 sdx sdy sdz sdaa sdab sdac\  &lt;br /&gt;
                     raidz2 sdad sdae sdaf sdag sdah sdai\ &lt;br /&gt;
                     raidz2 sdaj sdak sdal sdam sdan sdao\ &lt;br /&gt;
                     log mirror sdc sdd\ &lt;br /&gt;
                     cache sde&lt;br /&gt;
&lt;br /&gt;
nfs-exd in n-1-113:&lt;br /&gt;
&lt;br /&gt;
 zpool create -f exd raidz2 sdd sde sdf sdg sdh sdi\ &lt;br /&gt;
                     raidz2 sdj sdk sdl sdm sdn sdo\ &lt;br /&gt;
                     raidz2 sdp sdq sdr sds sdt sdu\ &lt;br /&gt;
                     raidz2 sdv sdw sdx sdy sdz sdaa\  &lt;br /&gt;
                     raidz2 sdab sdac sdad sdae sdaf sdag\ &lt;br /&gt;
                     raidz2 sdah sdai sdaj sdak sdal sdam\ &lt;br /&gt;
                     log mirror sdc sdan\ &lt;br /&gt;
                     cache sdao&lt;br /&gt;
&lt;br /&gt;
= Troubleshooting =&lt;br /&gt;
&lt;br /&gt;
== zpool destroy : Failed to unmount &amp;lt;device&amp;gt; - device busy ==&lt;br /&gt;
&lt;br /&gt;
The help text will advise you to check lsof or fuser, but what you really need to do is stop the nfs service:&lt;br /&gt;
&lt;br /&gt;
  systemctl stop nfs&lt;br /&gt;
  umount /export/ex*&lt;br /&gt;
  zpool destroy ...&lt;br /&gt;
  zpool create ...&lt;br /&gt;
  zpool ...&lt;br /&gt;
  ...&lt;br /&gt;
  systemctl start nfs&lt;br /&gt;
&lt;br /&gt;
== zpool missing after reboot ==&lt;br /&gt;
This happens when the zfs-import-cache service failed to start at boot time.&lt;br /&gt;
 # check&lt;br /&gt;
 $ systemctl status zfs-import-cache.service&lt;br /&gt;
 # enable at boot time&lt;br /&gt;
 $ systemctl enable zfs-import-cache.service&lt;br /&gt;
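&lt;br /&gt;
Related units that may also need enabling so datasets get mounted at boot (unit names from ZFS on Linux; whether all are needed depends on the installation):&lt;br /&gt;
 $ systemctl enable zfs-import-cache.service zfs-mount.service zfs.target&lt;br /&gt;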
&lt;br /&gt;
== Example: Fixing degraded pool, replacing faulted disk ==&lt;br /&gt;
On Feb 22, 2019, one of nfs-ex9&#039;s disks became faulty.  &lt;br /&gt;
&lt;br /&gt;
 -bash-4.2$ &#039;&#039;&#039;zpool status&#039;&#039;&#039;&lt;br /&gt;
 pool: ex9&lt;br /&gt;
 state: DEGRADED&lt;br /&gt;
 status: One or more devices are faulted in response to persistent errors.&lt;br /&gt;
 	Sufficient replicas exist for the pool to continue functioning in a&lt;br /&gt;
 	degraded state.&lt;br /&gt;
 action: Replace the faulted device, or use &#039;zpool clear&#039; to mark the device&lt;br /&gt;
 	repaired.&lt;br /&gt;
   scan: scrub canceled on Fri Feb 22 11:31:25 2019&lt;br /&gt;
 config:&lt;br /&gt;
          raidz2-5                                      DEGRADED     0     0     0&lt;br /&gt;
            sdae                                        ONLINE       0     0     0&lt;br /&gt;
            sdaf                                        ONLINE       0     0     0&lt;br /&gt;
            sdag                                        ONLINE       0     0     0&lt;br /&gt;
            sdah                                        FAULTED     18     0     0  too many errors&lt;br /&gt;
            sdai                                        ONLINE       0     0     0&lt;br /&gt;
            sdaj                                        ONLINE       0     0     0&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
I did the following: &lt;br /&gt;
&lt;br /&gt;
 -bash-4.2$ &#039;&#039;&#039;sudo zpool offline ex9 sdah&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Then I went to the server room to see that the faulted disk still had a red light.  I pulled the disk out and inserted a fresh one of the same brand, a Seagate Exos X12.  The server detected the new disk and set the disk name as /dev/sdah, just like the one I had just pulled out.  Finally, I ran the following command. &lt;br /&gt;
&lt;br /&gt;
 -bash-4.2$ &#039;&#039;&#039;sudo zpool replace ex9 /dev/sdah&#039;&#039;&#039;&lt;br /&gt;
 -bash-4.2$ &#039;&#039;&#039;zpool status&#039;&#039;&#039;&lt;br /&gt;
  pool: ex9&lt;br /&gt;
 state: DEGRADED&lt;br /&gt;
 status: One or more devices is currently being resilvered.  The pool will&lt;br /&gt;
 continue to function, possibly in a degraded state.&lt;br /&gt;
 action: Wait for the resilver to complete.&lt;br /&gt;
  scan: resilver in progress since Tue Mar 19 14:06:33 2019&lt;br /&gt;
 1.37G scanned out of 51.8T at 127M/s, 118h33m to go&lt;br /&gt;
 37.9M resilvered, 0.00% done&lt;br /&gt;
 .&lt;br /&gt;
 .&lt;br /&gt;
 .&lt;br /&gt;
 	  raidz2-5                                      DEGRADED     0     0     0&lt;br /&gt;
    sdae                                        ONLINE       0     0     0&lt;br /&gt;
    sdaf                                        ONLINE       0     0     0&lt;br /&gt;
    sdag                                        ONLINE       0     0     0&lt;br /&gt;
    replacing-3                                 DEGRADED     0     0     0&lt;br /&gt;
      old                                       FAULTED     18     0     0  too many errors&lt;br /&gt;
      sdah                                      ONLINE       0     0     0  (resilvering)&lt;br /&gt;
    sdai                                        ONLINE       0     0     0&lt;br /&gt;
    sdaj                                        ONLINE       0     0     0&lt;br /&gt;
&lt;br /&gt;
Resilvering is the process of a disk being rebuilt from its parity group.  Once it is finished, you should be good to go again. &lt;br /&gt;
&lt;br /&gt;
For zayin/nfs-exa, some of the disks are named by disk id instead of the sdX device name.&lt;br /&gt;
 raidz2-4                  DEGRADED     0     0     0&lt;br /&gt;
 scsi-35000c500a7da67cb  ONLINE       0     0     0&lt;br /&gt;
 scsi-35000c500a7daa34f  ONLINE       0     0     0&lt;br /&gt;
 scsi-35000c500a7db39db  FAULTED      0     0     0  too many errors&lt;br /&gt;
 scsi-35000c500a7da6b97  ONLINE       0     0     0&lt;br /&gt;
 scsi-35000c500a7da265b  ONLINE       0     0     0&lt;br /&gt;
 scsi-35000c500a7da740f  ONLINE       0     0     0&lt;br /&gt;
&lt;br /&gt;
In this case, we have to determine the device name of the disk that was just inserted, using dmesg. Look for log lines that mention a new disk.&lt;br /&gt;
 $ dmesg | tail&lt;br /&gt;
 [14663327.192519] sd 0:0:38:0: [&#039;&#039;&#039;sdad&#039;&#039;&#039;] Spinning up disk...&lt;br /&gt;
 [14663327.192756] sd 0:0:38:0: Attached scsi generic sg27 type 0&lt;br /&gt;
 [14663328.193173] ........................ready&lt;br /&gt;
 [14663352.681625] sd 0:0:38:0: [&#039;&#039;&#039;sdad&#039;&#039;&#039;] 27344764928 512-byte logical blocks: (14.0 TB/12.7 TiB)&lt;br /&gt;
 [14663352.681627] sd 0:0:38:0: [sdad] 4096-byte physical blocks&lt;br /&gt;
 [14663352.687268] sd 0:0:38:0: [sdad] Write Protect is off&lt;br /&gt;
 [14663352.687273] sd 0:0:38:0: [sdad] Mode Sense: db 00 10 08&lt;br /&gt;
 [14663352.690847] sd 0:0:38:0: [sdad] Write cache: enabled, read cache: enabled, supports DPO and FUA&lt;br /&gt;
 [14663352.732297] sd 0:0:38:0: [sdad] Attached SCSI disk&lt;br /&gt;
&lt;br /&gt;
Once the name is determined, we start the resilvering process: &lt;br /&gt;
 $ zpool replace exa  scsi-35000c500a7db39db sdad&lt;br /&gt;
 # scsi-35000c500a7db39db is the id of the failed disk obtained from zpool status&lt;br /&gt;
 # sdad is the device name of the new replacement disk, determined above&lt;br /&gt;
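&lt;br /&gt;
The resilver can then be monitored the same way as in the qof example; the scan: line of zpool status reports speed and estimated time remaining:&lt;br /&gt;
 $ zpool status exa&lt;br /&gt;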
&lt;br /&gt;
== Disk LED light ==&lt;br /&gt;
=== Identify failed disk by LED light ===&lt;br /&gt;
By disk_id&lt;br /&gt;
 # turn light off&lt;br /&gt;
 $ ledctl locate_off=/dev/disk/by-id/&amp;lt;disk_id&amp;gt; &lt;br /&gt;
 # turn light on&lt;br /&gt;
 $ ledctl locate=/dev/disk/by-id/&amp;lt;disk_id&amp;gt;&lt;br /&gt;
 Example&lt;br /&gt;
 $ ledctl locate_off=/dev/disk/by-id/scsi-35000c500a7d8137f &lt;br /&gt;
 $ ledctl locate=/dev/disk/by-id/scsi-35000c500a7d8137f &lt;br /&gt;
By vdev&lt;br /&gt;
 # turn light off&lt;br /&gt;
 $ ledctl locate_off=/dev/&amp;lt;vdev&amp;gt;&lt;br /&gt;
 # turn light on&lt;br /&gt;
 $ ledctl locate=/dev/&amp;lt;vdev&amp;gt;&lt;br /&gt;
 Example &lt;br /&gt;
 $ ledctl locate_off=/dev/sdaf&lt;br /&gt;
 $ ledctl locate=/dev/sdaf&lt;br /&gt;
&lt;br /&gt;
=== Reset light from LED light glitch ===&lt;br /&gt;
For qof/nfs-ex9, we had an issue with the disk LED for /dev/sdah still showing up red despite the resilvering occurring.  To return the disk LED to a normal status, issue the following command: &lt;br /&gt;
 $ &#039;&#039;&#039;sudo ledctl normal=/dev/&amp;lt;disk vdev id&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
 Example: $ &#039;&#039;&#039;sudo ledctl normal=/dev/sdah&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Or for zayin/nfs-exa, where disks are identified by id:&lt;br /&gt;
 $ &#039;&#039;&#039;sudo ledctl normal=/dev/disk/by-id/&amp;lt;disk id&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
 Example: $ &#039;&#039;&#039;sudo ledctl normal=/dev/disk/by-id/scsi-35000c500a7db39db&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:Curator]][[Category:Sysadmin]]&lt;/div&gt;</summary>
		<author><name>Jgutie11</name></author>
	</entry>
	<entry>
		<id>http://wiki.docking.org/index.php?title=Arthor_Documentation_for_Future_Developer&amp;diff=13402</id>
		<title>Arthor Documentation for Future Developer</title>
		<link rel="alternate" type="text/html" href="http://wiki.docking.org/index.php?title=Arthor_Documentation_for_Future_Developer&amp;diff=13402"/>
		<updated>2021-03-24T00:22:31Z</updated>

		<summary type="html">&lt;p&gt;Jgutie11: /* Customizing Arthor Code to our needs */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Written by Jennifer Young on December 16, 2019. Last edited January 05, 2021&lt;br /&gt;
&lt;br /&gt;
==Install and Set Up on TomCat (Method 1)==&lt;br /&gt;
Arthor runs on n-1-136, a machine running CentOS Linux release 7.7.1908 (Core).  You can check the CentOS version with the following command:&lt;br /&gt;
     cat /etc/centos-release&lt;br /&gt;
&lt;br /&gt;
Check your current version of Java with the following command:&lt;br /&gt;
    java -version&lt;br /&gt;
&lt;br /&gt;
On n-1-136 we are running openjdk version &amp;quot;1.8.0_222&amp;quot;, OpenJDK Runtime Environment (build 1.8.0_222-b10), and OpenJDK 64-Bit Server VM (build 25.222-b10, mixed mode)&lt;br /&gt;
If Java is not installed, install it using yum.&lt;br /&gt;
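For example, to install OpenJDK 8 on CentOS 7 (the package name may differ on other releases):&lt;br /&gt;
    yum install java-1.8.0-openjdk&lt;br /&gt;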
&lt;br /&gt;
==Tomcat Installation==&lt;br /&gt;
See this wiki page for more detailed information about installing Tomcat on our cluster:&lt;br /&gt;
http://wiki.docking.org/index.php/Tomcat_Installation&lt;br /&gt;
&lt;br /&gt;
==Open port for Arthor==&lt;br /&gt;
In order for Arthor to be usable in the browser, the port you wish to run it on must be opened.&lt;br /&gt;
https://www.thegeekdiary.com/how-to-open-a-ports-in-centos-rhel-7/&lt;br /&gt;
&lt;br /&gt;
===Step 1: Check Port Status===&lt;br /&gt;
Check that the port is not already in use and that Apache is not listening on it. &lt;br /&gt;
    netstat -na | grep &amp;lt;port number you are checking&amp;gt;&lt;br /&gt;
&lt;br /&gt;
    lsof -i -P | grep http&lt;br /&gt;
&lt;br /&gt;
===Step 2: Check Port Status in IP Tables===&lt;br /&gt;
    iptables-save | grep &amp;lt;port number you are checking&amp;gt;&lt;br /&gt;
&lt;br /&gt;
I skipped Step 3 from the guide, because there was a lot of information in the /etc/services file and I didn&#039;t want to edit it and break something.&lt;br /&gt;
&lt;br /&gt;
===Step 4: Open Firewall Ports===&lt;br /&gt;
I did not include the zone=public section because the stand-alone servers are usually used for private instances of Arthor and SmallWorld.&lt;br /&gt;
Run as root.&lt;br /&gt;
    firewall-cmd --add-port=&amp;lt;port number you are adding&amp;gt;/tcp --permanent&lt;br /&gt;
&lt;br /&gt;
You need to reload the firewall after a change is made.&lt;br /&gt;
    firewall-cmd --reload&lt;br /&gt;
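To confirm the port is now in the permanent configuration, you can list the open ports:&lt;br /&gt;
    firewall-cmd --list-ports&lt;br /&gt;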
&lt;br /&gt;
===Step 5: Check that port is working===&lt;br /&gt;
To check that the port is active, run:&lt;br /&gt;
    iptables -nL&lt;br /&gt;
&lt;br /&gt;
You should see something along the lines of: &lt;br /&gt;
    ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:&amp;lt;port number you&#039;re adding&amp;gt; ctstate NEW,UNTRACKED&lt;br /&gt;
&lt;br /&gt;
==How to run standalone Arthor instance==&lt;br /&gt;
&lt;br /&gt;
===Step 1: Use or start a bash shell===&lt;br /&gt;
You can check your default shell using&lt;br /&gt;
    echo $SHELL&lt;br /&gt;
&lt;br /&gt;
If your default shell is csh, use &lt;br /&gt;
    bash&lt;br /&gt;
to start a new bash shell in the current terminal window.  Note that echo $SHELL will show you your default shell regardless of the current shell.&lt;br /&gt;
&lt;br /&gt;
===Step 2: Set your environment variables===&lt;br /&gt;
    export ARTHOR_DIR=/opt/nextmove/arthor/arthor-3.3-centos7&lt;br /&gt;
    export PATH=$ARTHOR_DIR/bin/:$PATH&lt;br /&gt;
&lt;br /&gt;
Make sure the ARTHOR_DIR variable is set to the directory of the latest version of Arthor, or whichever version you would like to test.&lt;br /&gt;
The PATH environment variable is needed if you wish to use the Arthor tools from the command line.&lt;br /&gt;
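You can verify the tools are on your PATH with, for example:&lt;br /&gt;
    which smi2atdb&lt;br /&gt;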
&lt;br /&gt;
===Step 3: Run arthor.jar===&lt;br /&gt;
    java -jar /opt/nextmove/arthor/arthor-3.3-centos7/java/arthor.jar --httpPort &amp;lt;your httpPort&amp;gt;&lt;br /&gt;
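For example, to bring up a test instance on port 8080 (an arbitrary example port) and confirm it responds:&lt;br /&gt;
    java -jar $ARTHOR_DIR/java/arthor.jar --httpPort 8080&lt;br /&gt;
    curl http://localhost:8080/&lt;br /&gt;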
&lt;br /&gt;
==Setting environment variables for an Arthor Server==&lt;br /&gt;
Set the environment variables in the setenv.sh file.  Note: Be sure to edit the file in the directory corresponding to the latest version of Tomcat.  As of December 2019, we are running 9.0.27 on n-1-136.&lt;br /&gt;
&lt;br /&gt;
   vim  /opt/tomcat/apache-tomcat-9.0.27/bin/setenv.sh&lt;br /&gt;
&lt;br /&gt;
Add the line below to the setenv.sh file above, or substitute the path to wherever you currently store the arthor.cfg file:&lt;br /&gt;
   export ARTHOR_CONFIG=/usr/local/tomcat/arthor.cfg&lt;br /&gt;
&lt;br /&gt;
Here is an example of the arthor.cfg file:&lt;br /&gt;
   BinDir=/opt/nextmove/arthor/arthor-3.3-centos7/bin&lt;br /&gt;
   DataDir=/local2/arthor_local_8081/&lt;br /&gt;
   MaxConcurrentSearches=6&lt;br /&gt;
   MaxThreadsPerSearch=8&lt;br /&gt;
   AutomaticIndex=false&lt;br /&gt;
   AsyncHitCountMax=1000000&lt;br /&gt;
   Resolver=https://sw.docking.org/util/smi2mol?smi=%s&lt;br /&gt;
&lt;br /&gt;
=== Configuration Details ===&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;BinDir&#039;&#039;&#039;: The location of the Arthor command-line binaries.  These are used to generate the Arthor index files and to perform searches directly on n-1-136, for example using atdbgrep for substructure search. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;DataDir&#039;&#039;&#039;: The directory where the Arthor data files live; index files are created in, and loaded from, this location.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;MaxConcurrentSearches&#039;&#039;&#039;: Controls the maximum number of searches that can run concurrently by setting the database pool size. When switching between a large number of databases it can be useful to have a larger pool size; the only trade-off is keeping file pointers open.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;MaxThreadsPerSearch&#039;&#039;&#039;: The number of threads to use for both ATDB and ATFP searches.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;AutomaticIndex&#039;&#039;&#039;: Set to false if you don&#039;t want new SMILES files added to the data directory to be indexed automatically.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;AsyncHitCountMax&#039;&#039;&#039;: The upper-bound for the number of hits to retrieve in background searches.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Resolver&#039;&#039;&#039;: Uses the SmallWorld API so the input box can accept a SMILES string and automatically draw it on the sketcher.&lt;br /&gt;
&lt;br /&gt;
Check the Arthor manual for more configuration options.&lt;br /&gt;
&lt;br /&gt;
==Background==&lt;br /&gt;
Before working with Arthor, it is recommended that you familiarize yourself with the Arthor documentation. Some useful pages to look at include 3-5, 22-25 and 33-39. Of course, reading everything would be the best!&lt;br /&gt;
&lt;br /&gt;
==Checking Disk Space Usage==&lt;br /&gt;
Before building Arthor indexes, it&#039;s always a good idea to check how much disk space is in use. Be cautious about how much space you have left, and keep checking while building indexes to make sure you have enough. To check, run the following command:&lt;br /&gt;
&lt;br /&gt;
   df -h /&amp;lt;directory on the disk&amp;gt;&lt;br /&gt;
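For example, on a machine whose indexes live under /local2:&lt;br /&gt;
   df -h /local2&lt;br /&gt;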
&lt;br /&gt;
==Building Large Databases==&lt;br /&gt;
At the moment, we are building databases of about 500M molecules by merging SMILES files. There are multiple ways to create large databases; one is merging files that share the same H?? prefix and stopping once the database exceeds 500M molecules (or whatever upper bound you want to use). Here is some Python code that performs this merging. Essentially, the program takes all of the .smi files within an input directory, sorts them lexicographically, and merges these .smi files together in order until the size exceeds 500M molecules.&lt;br /&gt;
   &lt;br /&gt;
Feel free to modify it if you think a better method exists.&lt;br /&gt;
   &lt;br /&gt;
   import subprocess&lt;br /&gt;
   &lt;br /&gt;
   from os import listdir&lt;br /&gt;
   from os.path import isfile, join&lt;br /&gt;
   &lt;br /&gt;
   mypath = &amp;quot;&amp;lt;Path to directory holding .smi files&amp;gt;&amp;quot;&lt;br /&gt;
   onlyfiles = [f for f in listdir(mypath) if isfile(join(mypath, f))]&lt;br /&gt;
   onlyfiles.sort()&lt;br /&gt;
   &lt;br /&gt;
   cur_mols = 0&lt;br /&gt;
   lower_bound = 500000000&lt;br /&gt;
   upper_bound = 600000000&lt;br /&gt;
   files_to_merge = []&lt;br /&gt;
   &lt;br /&gt;
   def merge_files(f_t_m):&lt;br /&gt;
      # name the merged file after the first and last inputs, e.g. H04_H07.smi&lt;br /&gt;
      first = f_t_m[0].split(&amp;quot;.&amp;quot;)[0]&lt;br /&gt;
      last = f_t_m[-1].split(&amp;quot;.&amp;quot;)[0]&lt;br /&gt;
      file_name_merge = first + &amp;quot;_&amp;quot; + last + &amp;quot;.smi&amp;quot;&lt;br /&gt;
      print(&amp;quot;File being created: &amp;quot; + file_name_merge)&lt;br /&gt;
   &lt;br /&gt;
      for file in f_t_m:&lt;br /&gt;
         process = subprocess.Popen(&amp;quot;cat &amp;quot; + join(mypath, file) + &amp;quot; &amp;gt;&amp;gt; &amp;quot; + file_name_merge, shell=True)&lt;br /&gt;
         process.wait()&lt;br /&gt;
   &lt;br /&gt;
   for file in onlyfiles:&lt;br /&gt;
      if file.split(&amp;quot;.&amp;quot;)[-1] == &amp;quot;smi&amp;quot;:&lt;br /&gt;
         print(&amp;quot;Working with &amp;quot; + file)&lt;br /&gt;
         # count molecules: one SMILES per line&lt;br /&gt;
         with open(join(mypath, file)) as fh:&lt;br /&gt;
            mol = sum(1 for line in fh)&lt;br /&gt;
         print(file, mol, cur_mols)&lt;br /&gt;
   &lt;br /&gt;
         if cur_mols + mol &amp;gt; lower_bound:&lt;br /&gt;
            if cur_mols + mol &amp;lt; upper_bound:&lt;br /&gt;
               # adding this file lands inside the target window: merge and reset&lt;br /&gt;
               files_to_merge.append(file)&lt;br /&gt;
               merge_files(files_to_merge)&lt;br /&gt;
            else:&lt;br /&gt;
               # adding it would overshoot: merge what we have, then merge this file alone&lt;br /&gt;
               if files_to_merge:&lt;br /&gt;
                  merge_files(files_to_merge)&lt;br /&gt;
               merge_files([file])&lt;br /&gt;
            cur_mols = 0&lt;br /&gt;
            files_to_merge.clear()&lt;br /&gt;
         else:&lt;br /&gt;
            cur_mols += mol&lt;br /&gt;
            files_to_merge.append(file)&lt;br /&gt;
   &lt;br /&gt;
   # merge any leftover files&lt;br /&gt;
   if files_to_merge:&lt;br /&gt;
      merge_files(files_to_merge)&lt;br /&gt;
&lt;br /&gt;
==Building Arthor Indexes==&lt;br /&gt;
Once you&#039;ve merged the .smi files together, it&#039;s time to build the databases themselves. To do this we use the command:&lt;br /&gt;
&lt;br /&gt;
   smi2atdb -j 0 -p &amp;lt;input .smi file&amp;gt; &amp;lt;output .atdb file&amp;gt; &lt;br /&gt;
&lt;br /&gt;
The &amp;quot;-j 0&amp;quot; flag enables parallel generation, using all available processors to generate the .atdb file. The &amp;quot;-p&amp;quot; flag stores the offset positions in the ATDB file. Since we&#039;re building indexes for the Web Application, you must use the &amp;quot;-p&amp;quot; flag when building indexes. Please note that the base name of the .smi file should match the base name of the .atdb file; that way, the Web Application knows to use these files together and correctly displays the required images. Refer to pages 33-34 in the Arthor documentation for more information.&lt;br /&gt;
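For example, to index a hypothetical H04.smi slice in place, and then add the fingerprint file used for similarity search:&lt;br /&gt;
&lt;br /&gt;
   smi2atdb -j 0 -p H04.smi H04.atdb&lt;br /&gt;
   atdb2fp -j 0 H04.atdb&lt;br /&gt;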
&lt;br /&gt;
If there are many large .smi files and you do not want to build each .atdb file manually, you can use this Python script, which takes all of the .smi files in a directory and converts them to .atdb files. Make sure to set mypath to the directory containing the .smi files. You can set the variable &amp;quot;create_fp&amp;quot; to False if you don&#039;t want to create .atdb.fp files (refer to page 9 in the Arthor documentation).&lt;br /&gt;
&lt;br /&gt;
   import subprocess&lt;br /&gt;
   &lt;br /&gt;
   from os import listdir&lt;br /&gt;
   from os.path import isfile, join&lt;br /&gt;
   &lt;br /&gt;
   mypath = &amp;quot;&amp;lt;Path containing the .smi files&amp;gt;&amp;quot;&lt;br /&gt;
   onlyfiles = [f for f in listdir(mypath) if isfile(join(mypath, f))]&lt;br /&gt;
   &lt;br /&gt;
   # set to False if you don&#039;t want .atdb.fp fingerprint files&lt;br /&gt;
   create_fp = True&lt;br /&gt;
   &lt;br /&gt;
   for file in onlyfiles:&lt;br /&gt;
      base = file.split(&amp;quot;.&amp;quot;)[0]&lt;br /&gt;
   &lt;br /&gt;
      if file.split(&amp;quot;.&amp;quot;)[-1] == &amp;quot;smi&amp;quot;:&lt;br /&gt;
         # build the substructure index; -p (offsets) is required for the Web Application&lt;br /&gt;
         process = subprocess.Popen(&amp;quot;/nfs/ex9/work/xyz/psql/arthor-3.3-centos7/bin/smi2atdb -j 0 -p {0} {1}.atdb&amp;quot;.format(join(mypath, file), base), shell=True)&lt;br /&gt;
         process.wait()&lt;br /&gt;
   &lt;br /&gt;
         print(&amp;quot;SUCCESS! {0}.atdb file was created!&amp;quot;.format(base))&lt;br /&gt;
   &lt;br /&gt;
         if create_fp:&lt;br /&gt;
            # build the fingerprint file used for similarity search&lt;br /&gt;
            process = subprocess.Popen(&amp;quot;/nfs/ex9/work/xyz/psql/arthor-3.3-centos7/bin/atdb2fp -j 0 {0}.atdb&amp;quot;.format(base), shell=True)&lt;br /&gt;
            process.wait()&lt;br /&gt;
   &lt;br /&gt;
            print(&amp;quot;SUCCESS! {0}.atdb.fp file was created!&amp;quot;.format(base))&lt;br /&gt;
&lt;br /&gt;
==Uploading Indexes to the Web Application==&lt;br /&gt;
One can make indexes available to the Web Application by pointing the &amp;quot;DataDir&amp;quot; variable in the arthor.cfg file at the directory holding the .atdb files. This is already set up on n-1-136 and n-5-34.&lt;br /&gt;
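After indexing, that directory should contain matching base names for each slice, e.g. (hypothetical listing):&lt;br /&gt;
   ls /local2/arthor_local_8081/&lt;br /&gt;
   H04.smi  H04.atdb  H04.atdb.fp&lt;br /&gt;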
   &lt;br /&gt;
==Virtual Memory==&lt;br /&gt;
In addition to modifying the arthor.cfg file, virtual memory settings can also be used to make queries faster. More information can be found on pages 10-16 of the Arthor documentation.&lt;br /&gt;
&lt;br /&gt;
==Setting up Round Table==&lt;br /&gt;
This is a new feature in Arthor 3.0 and is currently beta (January 2020). See section 2.4 in the manual.&lt;br /&gt;
As explained in the manual, &amp;quot;Round Table allows you to serve and split chemical searches across multiple host machines.  The implementation provides a lightweight proxy that forwards requests to other Arthor host servers that do the actual search.  Communication is done using the existing Web APIs.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Since Arthor requires CentOS 7, as of January 2020 we have 6 servers that are capable of running Arthor with Round Table.&lt;br /&gt;
&lt;br /&gt;
===Setting up Host Server===&lt;br /&gt;
If we want to add machines to the Round Table, for example &#039;nun&#039; and &#039;samekh&#039;, we need to edit their arthor.cfg files so that, when our local machine forwards a query, these secondary servers know to perform the search they are given.&lt;br /&gt;
&lt;br /&gt;
   $ cat arthor.cfg&lt;br /&gt;
   MaxThreadsPerSearch=4 &lt;br /&gt;
   AutomaticIndex=false &lt;br /&gt;
   DataDir=&amp;lt;Directory where the SMILES and index files are located&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We then run the jar server, on any available port, on each of these host machines containing data. &lt;br /&gt;
&lt;br /&gt;
   java -jar /nfs/ex9/work/xyz/psql/arthor-3.3-centos7/java/arthor.jar --httpPort &amp;lt;port&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For our local machine, the arthor.cfg file will look different.&lt;br /&gt;
&lt;br /&gt;
   $ cat arthor.cfg&lt;br /&gt;
   [RoundTable] &lt;br /&gt;
   RemoteClient=http://skynet:&amp;lt;port number where jar server is running&amp;gt;/ &lt;br /&gt;
   RemoteClient=http://hal:&amp;lt;port number where jar server is running&amp;gt;/&lt;br /&gt;
&lt;br /&gt;
Please refer to Section 2 in the RoundTable Documentation file (pages 6-8) for more useful information on configuration.&lt;br /&gt;
&lt;br /&gt;
Then run the following command on n-1-136:&lt;br /&gt;
 &lt;br /&gt;
   java -jar /nfs/ex9/work/xyz/psql/arthor-3.3-centos7/java/arthor.jar --httpPort &amp;lt;port&amp;gt;&lt;br /&gt;
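You can confirm the head node is up and forwarding to the hosts by requesting its front page, e.g.:&lt;br /&gt;
&lt;br /&gt;
   curl http://localhost:&amp;lt;port&amp;gt;/&lt;br /&gt;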
&lt;br /&gt;
===Public Arthor===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! CentOS 7 Machine&lt;br /&gt;
! Port&lt;br /&gt;
! Total Files Size&lt;br /&gt;
! Arthor Install Location&lt;br /&gt;
! Round Table Data Directory&lt;br /&gt;
! Active&lt;br /&gt;
|-&lt;br /&gt;
| samekh&lt;br /&gt;
| 10.20.0.41:8000&lt;br /&gt;
| 2.4TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/public_arthor/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| nun&lt;br /&gt;
| 10.20.0.40:8000&lt;br /&gt;
| 2.4TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/public_arthor/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| n-9-22&lt;br /&gt;
| 10.20.9.22:8000&lt;br /&gt;
| 2.4TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /export/db4/public_arthor/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Arthor Round Table Head===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! CentOS 7 Machine&lt;br /&gt;
! Port&lt;br /&gt;
! Database&lt;br /&gt;
! Arthor Install Location&lt;br /&gt;
! Round Table Data Directory&lt;br /&gt;
! Active&lt;br /&gt;
|-&lt;br /&gt;
| samekh&lt;br /&gt;
| 10.20.0.41:8080&lt;br /&gt;
| Enamine_REAL_Q2-2020-All-13B&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| nun&lt;br /&gt;
| 10.20.0.40:8080&lt;br /&gt;
| Enamine_REAL_Q2-2020-All-41B&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Arthor Round Table Nodes===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! CentOS 7 Machine&lt;br /&gt;
! Port&lt;br /&gt;
! Database&lt;br /&gt;
! Total Files Size&lt;br /&gt;
! Arthor Install Location&lt;br /&gt;
! Round Table Data Directory&lt;br /&gt;
! Active&lt;br /&gt;
|-&lt;br /&gt;
| samekh&lt;br /&gt;
| 10.20.0.41:8008&lt;br /&gt;
| Enamine_REAL_Q2-2020-All-13B (26 slices)&lt;br /&gt;
| 4.5TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| nun&lt;br /&gt;
| 10.20.0.40:8008&lt;br /&gt;
| Enamine_REAL_Space_June_2020_S41B (aa-ae, 5 slices), Enamine_REAL_Space_June_2020_M41B (af-am, 8 slices), zinc22_2d (H04~H25, 22 slices)&lt;br /&gt;
| 5.6TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| n-1-17&lt;br /&gt;
| 10.20.1.17:8008&lt;br /&gt;
| Enamine_REAL_Space_June_2020_M41B (aa-ae, 5 slices), zinc22_2d (H25~H29, 4 slices)&lt;br /&gt;
| 3.7TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| n-5-32&lt;br /&gt;
| 10.20.5.32:8008&lt;br /&gt;
| Enamine_REAL_Space_June_2020_M41B (an~az, 13 slices), zinc22_2d (H30, 1 slice)&lt;br /&gt;
| 5.6TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| n-5-33&lt;br /&gt;
| 10.20.5.33:8008&lt;br /&gt;
| Enamine_REAL_Space_June_2020_M41B (ba-bl, 12 slices)&lt;br /&gt;
| 5.3TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Arthor Local 8081 (Datasets all local to samekh/nun)===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! CentOS 7 Machine&lt;br /&gt;
! Port&lt;br /&gt;
! Database&lt;br /&gt;
! Total Files Size&lt;br /&gt;
! Arthor Install Location&lt;br /&gt;
! Round Table Data Directory&lt;br /&gt;
! Active&lt;br /&gt;
|-&lt;br /&gt;
| samekh&lt;br /&gt;
| 10.20.0.41:8081&lt;br /&gt;
| Enamine_REAL_Q2-2020-All-13B (26 slices)&lt;br /&gt;
| 4.5TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_local_8081/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| nun&lt;br /&gt;
| 10.20.0.40:8081&lt;br /&gt;
| Enamine_REAL_Space_June_2020_S41B (aa-ae, 5 slices), Enamine_REAL_Space_June_2020_M41B (aa-an, 14 slices)&lt;br /&gt;
| 4.3TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_local_8081/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Customizing Arthor Frontend to our needs ==&lt;br /&gt;
The frontend Arthor code is located at &#039;&#039;&#039;/nfs/exc/arthor_configs/*&#039;&#039;&#039;, where the &#039;&#039;&#039;*&#039;&#039;&#039; depends on the currently running version.&lt;br /&gt;
=== Add Arthor Download Options ===&lt;br /&gt;
==== For Arthor 3.4: ====&lt;br /&gt;
1. vim .extract/webapps/ROOT/WEB-INF/static/index.html&lt;br /&gt;
&lt;br /&gt;
2. search: &#039;&#039;&#039;arthor_tsv_link&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
3. in the div with the class=&amp;quot;dropdown-content&amp;quot;, add these link options and change the numbers accordingly:&lt;br /&gt;
&lt;br /&gt;
               &amp;lt;a id=&amp;quot;arthor_tsv_link&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt; TSV-500&amp;lt;/a&amp;gt;&lt;br /&gt;
               &amp;lt;a id=&amp;quot;arthor_tsv_link_5000&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt; TSV-5,000&amp;lt;/a&amp;gt;&lt;br /&gt;
               &amp;lt;a id=&amp;quot;arthor_tsv_link_50000&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt; TSV-50,000&amp;lt;/a&amp;gt;&lt;br /&gt;
               &amp;lt;a id=&amp;quot;arthor_tsv_link_100000&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt; TSV-100,000&amp;lt;/a&amp;gt;&lt;br /&gt;
               &amp;lt;a id=&amp;quot;arthor_tsv_link_max&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt; TSV-max&amp;lt;/a&amp;gt;&lt;br /&gt;
               &amp;lt;a id=&amp;quot;arthor_csv_link&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt; CSV-500&amp;lt;/a&amp;gt;&lt;br /&gt;
               &amp;lt;a id=&amp;quot;arthor_csv_link_5000&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt; CSV-5,000&amp;lt;/a&amp;gt;&lt;br /&gt;
               &amp;lt;a id=&amp;quot;arthor_csv_link_50000&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt; CSV-50,000&amp;lt;/a&amp;gt;&lt;br /&gt;
               &amp;lt;a id=&amp;quot;arthor_csv_link_100000&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt; CSV-100,000&amp;lt;/a&amp;gt;&lt;br /&gt;
               &amp;lt;a id=&amp;quot;arthor_csv_link_max&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt; CSV-max&amp;lt;/a&amp;gt;&lt;br /&gt;
               &amp;lt;a id=&amp;quot;arthor_sdf_link&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt; SDF-500&amp;lt;/a&amp;gt;&lt;br /&gt;
               &amp;lt;a id=&amp;quot;arthor_sdf_link_5000&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt; SDF-5,000&amp;lt;/a&amp;gt;&lt;br /&gt;
               &amp;lt;a id=&amp;quot;arthor_sdf_link_50000&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt; SDF-50,000&amp;lt;/a&amp;gt;&lt;br /&gt;
               &amp;lt;a id=&amp;quot;arthor_sdf_link_100000&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt; SDF-100,000&amp;lt;/a&amp;gt;&lt;br /&gt;
               &amp;lt;a id=&amp;quot;arthor_sdf_link_max&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt; SDF-max&amp;lt;/a&amp;gt;&lt;br /&gt;
&lt;br /&gt;
4. then vim .extract/webapps/ROOT/WEB-INF/static/js/index.js&lt;br /&gt;
&lt;br /&gt;
5. search: &#039;&#039;&#039;function $(t){&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
6. in the function $(t), add these lines:&lt;br /&gt;
&lt;br /&gt;
 if(document.getElementById(&amp;quot;arthor_tsv_link&amp;quot;)) {&lt;br /&gt;
        var e=i.a.param({query:s.b.query,type:s.b.type,draw:0,start:0,length:t,flags:s.b.flags}),n=s.b.url+&amp;quot;/dt/&amp;quot;+E(s.b.table)+&amp;quot;/search&amp;quot;;i()(&amp;quot;#arthor_sdf_link&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.sdf?&amp;quot;+e),i() (&amp;quot;#arthor_tsv_link&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.tsv?&amp;quot;+e),i()(&amp;quot;#arthor_csv_link&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.csv?&amp;quot;+e)&lt;br /&gt;
 }&lt;br /&gt;
 if (document.getElementById(&amp;quot;arthor_tsv_link_5000&amp;quot;)) {&lt;br /&gt;
        var e=i.a.param({query:s.b.query,type:s.b.type,draw:0,start:0,length:5000,flags:s.b.flags}),n=s.b.url+&amp;quot;/dt/&amp;quot;+E(s.b.table)+&amp;quot;/search&amp;quot;;i()(&amp;quot;#arthor_sdf_link_5000&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.sdf?&amp;quot;+e),i()(&amp;quot;#arthor_tsv_link_5000&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.tsv?&amp;quot;+e),i()(&amp;quot;#arthor_csv_link_5000&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.csv?&amp;quot;+e)&lt;br /&gt;
 }&lt;br /&gt;
 if (document.getElementById(&amp;quot;arthor_tsv_link_50000&amp;quot;)) {&lt;br /&gt;
        var e=i.a.param({query:s.b.query,type:s.b.type,draw:0,start:0,length:50000,flags:s.b.flags}),n=s.b.url+&amp;quot;/dt/&amp;quot;+E(s.b.table)+&amp;quot;/search&amp;quot;;i()(&amp;quot;#arthor_sdf_link_50000&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.sdf?&amp;quot;+e),i()(&amp;quot;#arthor_tsv_link_50000&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.tsv?&amp;quot;+e),i()(&amp;quot;#arthor_csv_link_50000&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.csv?&amp;quot;+e)&lt;br /&gt;
 }&lt;br /&gt;
 if (document.getElementById(&amp;quot;arthor_tsv_link_100000&amp;quot;)) {&lt;br /&gt;
        var e=i.a.param({query:s.b.query,type:s.b.type,draw:0,start:0,length:100000,flags:s.b.flags}),n=s.b.url+&amp;quot;/dt/&amp;quot;+E(s.b.table)+&amp;quot;/search&amp;quot;;i()(&amp;quot;#arthor_sdf_link_100000&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.sdf?&amp;quot;+e),i()(&amp;quot;#arthor_tsv_link_100000&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.tsv?&amp;quot;+e),i()(&amp;quot;#arthor_csv_link_100000&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.csv?&amp;quot;+e)&lt;br /&gt;
 }&lt;br /&gt;
 if (document.getElementById(&amp;quot;arthor_tsv_link_max&amp;quot;)) {&lt;br /&gt;
        var e=i.a.param({query:s.b.query,type:s.b.type,draw:0,start:0,length:1000000000,flags:s.b.flags}),n=s.b.url+&amp;quot;/dt/&amp;quot;+E(s.b.table)+&amp;quot;/search&amp;quot;;i()(&amp;quot;#arthor_sdf_link_max&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.sdf?&amp;quot;+e),i()(&amp;quot;#arthor_tsv_link_max&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.tsv?&amp;quot;+e),i()(&amp;quot;#arthor_csv_link_max&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.csv?&amp;quot;+e)&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
=== Take out Similarity Button ===&lt;br /&gt;
 vim .extract/webapps/ROOT/WEB-INF/static/index.html&lt;br /&gt;
 search: &#039;&#039;&#039;Similarity&#039;&#039;&#039;&lt;br /&gt;
 Comment out this line: &#039;&#039;&#039;&amp;lt; li value=&amp;quot;Similarity&amp;quot; onclick=&amp;quot;setSearchType(this)&amp;quot; class=&amp;quot;first&amp;quot;&amp;gt; Similarity &amp;lt;/li &amp;gt;&#039;&#039;&#039; (spaces added at the beginning and end to prevent the wiki from converting it)&lt;br /&gt;
 Then add &amp;quot;first&amp;quot; to Substructure&#039;s class&lt;br /&gt;
=== Hyperlink to zinc20 ===&lt;br /&gt;
 vim .extract/webapps/ROOT/WEB-INF/static/js/index.js&lt;br /&gt;
 search: &#039;&#039;&#039;table_name&#039;&#039;&#039;&lt;br /&gt;
 *find this line &amp;quot;&amp;lt; b&amp;gt;&amp;quot; + d + &amp;quot;&amp;lt; /b&amp;gt;&amp;quot;&lt;br /&gt;
 *replace with &#039;&#039;&#039;&amp;quot;&amp;lt; b&amp;gt;&amp;lt;a target=&#039;_blank&#039; href=&#039;https://zinc20.docking.org/substances/&amp;quot;+d+&amp;quot;&#039;&amp;gt;&amp;quot; + d + &amp;quot;&amp;lt;/a&amp;gt;&amp;lt;/b &amp;gt;&amp;quot;&#039;&#039;&#039; (spaces added at the beginning and end to prevent the wiki from converting it)&lt;br /&gt;
&lt;br /&gt;
=== Make Input Box Work ===&lt;br /&gt;
 At the end of the Arthor config file add this:&lt;br /&gt;
    Resolver=https://sw.docking.org/util/smi2mol?smi=%s&lt;br /&gt;
 To copy the SMILES into the input box:&lt;br /&gt;
    vim .extract/webapps/ROOT/WEB-INF/static/js/index.js&lt;br /&gt;
    search this: &amp;quot;var e=t.src.smiles()&amp;quot;&lt;br /&gt;
    add this after the semicolon:&lt;br /&gt;
        document.getElementById(&amp;quot;ar_text_input&amp;quot;).value = e;&lt;/div&gt;</summary>
		<author><name>Jgutie11</name></author>
	</entry>
	<entry>
		<id>http://wiki.docking.org/index.php?title=Arthor_Documentation_for_Future_Developer&amp;diff=13399</id>
		<title>Arthor Documentation for Future Developer</title>
		<link rel="alternate" type="text/html" href="http://wiki.docking.org/index.php?title=Arthor_Documentation_for_Future_Developer&amp;diff=13399"/>
		<updated>2021-03-23T00:08:25Z</updated>

		<summary type="html">&lt;p&gt;Jgutie11: /* Hyperlink to zinc20 */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Written by Jennifer Young on December 16, 2019. Last edited January 05, 2021&lt;br /&gt;
&lt;br /&gt;
==Install and Set Up on TomCat (Method 1)==&lt;br /&gt;
Arthor ran on n-1-136, which runs CentOS Linux release 7.7.1908 (Core).  You can check the version of CentOS with the following command&lt;br /&gt;
     cat /etc/centos-release&lt;br /&gt;
&lt;br /&gt;
Check your current version of Java with the following command:&lt;br /&gt;
    java -version&lt;br /&gt;
&lt;br /&gt;
On n-1-136 we are running openjdk version &amp;quot;1.8.0_222&amp;quot;, OpenJDK Runtime Environment (build 1.8.0_222-b10), and OpenJDK 64-Bit Server VM (build 25.222-b10, mixed mode)&lt;br /&gt;
If Java is not installed, install it using yum&lt;br /&gt;
&lt;br /&gt;
==See this wiki page for more detailed information about installing Tomcat on our cluster==&lt;br /&gt;
http://wiki.docking.org/index.php/Tomcat_Installation&lt;br /&gt;
&lt;br /&gt;
==Open port for Arthor==&lt;br /&gt;
In order for Arthor to be usable in the browser, the port you wish to run it on must be opened.&lt;br /&gt;
https://www.thegeekdiary.com/how-to-open-a-ports-in-centos-rhel-7/&lt;br /&gt;
&lt;br /&gt;
===Step 1: Check Port Status===&lt;br /&gt;
Check that the port is not already in use and that Apache is not listening on it.&lt;br /&gt;
    netstat -na | grep &amp;lt;port number you are checking&amp;gt;&lt;br /&gt;
&lt;br /&gt;
    lsof -i -P |grep http&lt;br /&gt;
&lt;br /&gt;
===Step 2: Check Port Status in IP Tables===&lt;br /&gt;
    iptables-save | grep &amp;lt;port number you are checking&amp;gt;&lt;br /&gt;
&lt;br /&gt;
I skipped Step 3 from the guide, because there was a lot of information in the /etc/services file and I didn&#039;t want to edit it and break something.&lt;br /&gt;
&lt;br /&gt;
===Step 4: Open Firewall Ports===&lt;br /&gt;
I did not include the zone=public section because the stand-alone servers are usually used for private instances of Arthor and SmallWorld.&lt;br /&gt;
Run as root.&lt;br /&gt;
    firewall-cmd --add-port=&amp;lt;port number you are adding&amp;gt;/tcp --permanent&lt;br /&gt;
&lt;br /&gt;
You need to reload the firewall after a change is made.&lt;br /&gt;
    firewall-cmd --reload&lt;br /&gt;
&lt;br /&gt;
===Step 5: Check that port is working===&lt;br /&gt;
To check that the port is active, run:&lt;br /&gt;
    iptables -nL&lt;br /&gt;
&lt;br /&gt;
You should see something along the lines of: &lt;br /&gt;
    ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:&amp;lt;port number you&#039;re adding&amp;gt; ctstate NEW,UNTRACKED&lt;br /&gt;
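&lt;br /&gt;
As a worked example, here is the whole check-open-verify sequence run as root; the port number 8080 is only an illustration, substitute whichever port you chose:&lt;br /&gt;
    # confirm nothing is listening on the port yet&lt;br /&gt;
    netstat -na | grep 8080&lt;br /&gt;
    # open the port and make the change persistent&lt;br /&gt;
    firewall-cmd --add-port=8080/tcp --permanent&lt;br /&gt;
    firewall-cmd --reload&lt;br /&gt;
    # verify the new ACCEPT rule is present&lt;br /&gt;
    iptables -nL | grep 8080&lt;br /&gt;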
&lt;br /&gt;
==How to run standalone Arthor instance==&lt;br /&gt;
&lt;br /&gt;
===Step 1: Use or start a bash shell===&lt;br /&gt;
You can check your default shell using&lt;br /&gt;
    echo $SHELL&lt;br /&gt;
&lt;br /&gt;
If your default shell is csh, use &lt;br /&gt;
    bash&lt;br /&gt;
to start a new bash shell in the current terminal window.  Note that echo $SHELL will show you your default shell regardless of the current shell.&lt;br /&gt;
&lt;br /&gt;
===Step 2: Set your environment variables===&lt;br /&gt;
    export ARTHOR_DIR=/opt/nextmove/arthor/arthor-3.3-centos7&lt;br /&gt;
    export PATH=$ARTHOR_DIR/bin/:$PATH&lt;br /&gt;
&lt;br /&gt;
Make sure the ARTHOR_DIR variable is set to the directory for the latest version of Arthor or whichever version you would like to test.&lt;br /&gt;
The PATH environment variable is needed if you wish to use the Arthor tools from the command line&lt;br /&gt;
&lt;br /&gt;
===Step 3: Run the arthor-server.jar===&lt;br /&gt;
    java -jar /opt/nextmove/arthor/arthor-3.3-centos7/java/arthor.jar --httpPort &amp;lt;your httpPort&amp;gt;&lt;br /&gt;
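&lt;br /&gt;
Putting Steps 2 and 3 together, a complete standalone launch looks like this; the port 8080 is just an example, use any open port from the firewall section above:&lt;br /&gt;
    export ARTHOR_DIR=/opt/nextmove/arthor/arthor-3.3-centos7&lt;br /&gt;
    export PATH=$ARTHOR_DIR/bin/:$PATH&lt;br /&gt;
    # serve the web application on the chosen port&lt;br /&gt;
    java -jar $ARTHOR_DIR/java/arthor.jar --httpPort 8080&lt;br /&gt;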
&lt;br /&gt;
==Setting environment variables for an Arthor Server==&lt;br /&gt;
Set the environment variables in the setenv.sh file.  Note: be sure to edit the file in the directory corresponding to the latest version of Tomcat.  As of December 2019, we are running 9.0.27 on n-1-136.&lt;br /&gt;
&lt;br /&gt;
   vim  /opt/tomcat/apache-tomcat-9.0.27/bin/setenv.sh&lt;br /&gt;
&lt;br /&gt;
Add the line below to the setenv.sh file above, or substitute the path to wherever you currently store the arthor.cfg file&lt;br /&gt;
   export ARTHOR_CONFIG=/usr/local/tomcat/arthor.cfg&lt;br /&gt;
&lt;br /&gt;
Here is an example of the arthor.cfg file:&lt;br /&gt;
   BinDir=/opt/nextmove/arthor/arthor-3.3-centos7/bin&lt;br /&gt;
   DataDir=/local2/arthor_local_8081/&lt;br /&gt;
   MaxConcurrentSearches=6&lt;br /&gt;
   MaxThreadsPerSearch=8&lt;br /&gt;
   AutomaticIndex=false&lt;br /&gt;
   AsyncHitCountMax=1000000&lt;br /&gt;
   Resolver=https://sw.docking.org/util/smi2mol?smi=%s&lt;br /&gt;
&lt;br /&gt;
=== Configuration Details ===&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;BinDir&#039;&#039;&#039;: the location of the Arthor command-line binaries.  These are used to generate the Arthor index files and to perform searches directly on n-1-136, for example using atdbgrep for substructure search.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;DataDir&#039;&#039;&#039;: the directory where the Arthor data files live; index files are created in, and loaded from, this location.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;MaxConcurrentSearches&#039;&#039;&#039;: controls the maximum number of searches that can run concurrently by setting the database pool size. When switching between a large number of databases it can be useful to have a larger pool size; the only trade-off is keeping file pointers open.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;MaxThreadsPerSearch&#039;&#039;&#039;: the number of threads to use for both ATDB and ATFP searches.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;AutomaticIndex&#039;&#039;&#039;: set this to false if you don&#039;t want new SMILES files added to the data directory to be indexed automatically.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;AsyncHitCountMax&#039;&#039;&#039;: the upper bound for the number of hits to retrieve in background searches.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Resolver&#039;&#039;&#039;: uses the SmallWorld API so the input box can accept a SMILES string and draw it automatically in the editor.&lt;br /&gt;
&lt;br /&gt;
Check the Arthor manual for more configuration options.&lt;br /&gt;
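&lt;br /&gt;
Once a server is up, you can sanity-check it from the command line. The Web Application exposes the DataTables-style search endpoint that the download links use (visible in the index.js snippets under &amp;quot;Customizing Arthor Code to our needs&amp;quot; below); in this sketch the port and table name are placeholders you must fill in:&lt;br /&gt;
    # fetch the first 10 substructure hits for benzene as TSV&lt;br /&gt;
    curl &amp;quot;http://localhost:8080/dt/&amp;lt;table&amp;gt;/search.tsv?query=c1ccccc1&amp;amp;type=Substructure&amp;amp;draw=0&amp;amp;start=0&amp;amp;length=10&amp;quot;&lt;br /&gt;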
&lt;br /&gt;
==Background==&lt;br /&gt;
Before working with Arthor, it is recommended that you familiarize yourself with the Arthor documentation. Some useful pages to look at include 3-5, 22-25 and 33-39. Of course, reading everything would be the best!&lt;br /&gt;
&lt;br /&gt;
==Checking Disk Space Usage==&lt;br /&gt;
Before building Arthor indexes, it&#039;s always a good idea to check how much disk space is in use. Be cautious about how much space you have left, and keep checking while building indexes to make sure that you have enough room. To check, run the following command:&lt;br /&gt;
&lt;br /&gt;
   df -h /&amp;lt;directory with disk&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Building Large Databases==&lt;br /&gt;
At the moment, we are building databases of roughly 500M molecules by merging SMILES files. There are multiple ways to create a large database; one is to merge files that share the same H?? prefix, stopping once the database reaches &amp;gt; 500M molecules (or whatever upper bound you want to use). Here is some Python code that performs this merging. Essentially, the program takes all of the .smi files within an input directory, sorts them lexicographically, and merges them together in order until the size reaches &amp;gt; 500M molecules.&lt;br /&gt;
   &lt;br /&gt;
Feel free to modify it if you think a better method exists.&lt;br /&gt;
   &lt;br /&gt;
   import subprocess&lt;br /&gt;
   &lt;br /&gt;
   from os import listdir&lt;br /&gt;
   from os.path import isfile, join&lt;br /&gt;
   &lt;br /&gt;
   # directory holding the .smi files to merge&lt;br /&gt;
   mypath = &amp;quot;&amp;lt;Path to directory holding .smi files&amp;gt;&amp;quot;&lt;br /&gt;
   onlyfiles = [f for f in listdir(mypath) if isfile(join(mypath, f))]&lt;br /&gt;
   onlyfiles.sort()&lt;br /&gt;
   &lt;br /&gt;
   cur_mols = 0&lt;br /&gt;
   lower_bound = 500000000  # start a new database once a batch passes 500M molecules&lt;br /&gt;
   upper_bound = 600000000  # hard cap on the size of a single merged database&lt;br /&gt;
   files_to_merge = []&lt;br /&gt;
   &lt;br /&gt;
   def merge_files(f_t_m):&lt;br /&gt;
      # name the merged file after the first and last inputs, e.g. H04_H07.smi&lt;br /&gt;
      arr = f_t_m[0].split(&amp;quot;.&amp;quot;)&lt;br /&gt;
      arr2 = f_t_m[len(f_t_m) - 1].split(&amp;quot;.&amp;quot;)&lt;br /&gt;
      file_name_merge = arr[0] + &amp;quot;_&amp;quot; + arr2[0] + &amp;quot;.smi&amp;quot;&lt;br /&gt;
      print(&amp;quot;File being created: &amp;quot; + file_name_merge)&lt;br /&gt;
   &lt;br /&gt;
      for file in f_t_m:&lt;br /&gt;
         process = subprocess.Popen(&amp;quot;cat &amp;quot; + join(mypath, file) + &amp;quot; &amp;gt;&amp;gt; &amp;quot; + file_name_merge, shell=True)&lt;br /&gt;
         process.wait()&lt;br /&gt;
   &lt;br /&gt;
   for file in onlyfiles:&lt;br /&gt;
      arr = file.split(&amp;quot;.&amp;quot;)&lt;br /&gt;
   &lt;br /&gt;
      if arr[len(arr) - 1] == &amp;quot;smi&amp;quot;:&lt;br /&gt;
         print(&amp;quot;Working with &amp;quot; + file)&lt;br /&gt;
         # one molecule per line in a .smi file&lt;br /&gt;
         mol = sum(1 for line in open(join(mypath, file)))&lt;br /&gt;
         print(file, mol, cur_mols)&lt;br /&gt;
   &lt;br /&gt;
         if cur_mols + mol &amp;gt; lower_bound:&lt;br /&gt;
            if cur_mols + mol &amp;lt; upper_bound:&lt;br /&gt;
               # the current file still fits: merge it with the accumulated batch&lt;br /&gt;
               files_to_merge.append(file)&lt;br /&gt;
               merge_files(files_to_merge)&lt;br /&gt;
            else:&lt;br /&gt;
               # the current file would overshoot: flush the batch first,&lt;br /&gt;
               # then write the current file out as its own database&lt;br /&gt;
               if files_to_merge:&lt;br /&gt;
                  merge_files(files_to_merge)&lt;br /&gt;
               merge_files([file])&lt;br /&gt;
            cur_mols = 0&lt;br /&gt;
            files_to_merge.clear()&lt;br /&gt;
         else:&lt;br /&gt;
            cur_mols += mol&lt;br /&gt;
            files_to_merge.append(file)&lt;br /&gt;
   &lt;br /&gt;
   if len(files_to_merge) != 0:&lt;br /&gt;
      merge_files(files_to_merge)&lt;br /&gt;
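&lt;br /&gt;
Since each line of a .smi file is one molecule, a quick way to confirm a merged database landed in the 500M-600M range is to count its lines; the file name below is just an example of what merge_files produces:&lt;br /&gt;
    # count molecules in the merged file&lt;br /&gt;
    wc -l H04_H07.smi&lt;br /&gt;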
&lt;br /&gt;
==Building Arthor Indexes==&lt;br /&gt;
Once you&#039;ve merged the .smi files together, it&#039;s time to start building the databases themselves. To do this we use the command &lt;br /&gt;
&lt;br /&gt;
   smi2atdb -j 0 -p &amp;lt;The .smi file&amp;gt; &amp;lt;The .atdb&amp;gt; &lt;br /&gt;
&lt;br /&gt;
The flag &amp;quot;-j 0&amp;quot; enables parallel generation and utilizes all available processors to generate the .atdb file. The &amp;quot;-p&amp;quot; flag stores the offset position in the ATDB file. Since we&#039;re building indexes for the Web Application, you must use the &amp;quot;-p&amp;quot; flag when building indexes. Please note that the name of the .smi file should also be the name of the .atdb file. That way, the Web Application knows to use these files together and correctly display the required images. Refer to pages 33-34 in the Arthor documentation for more information.&lt;br /&gt;
&lt;br /&gt;
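For instance, to index a single hypothetical slice zinc22_H04.smi by hand (assuming $ARTHOR_DIR/bin is on your PATH as set up earlier):&lt;br /&gt;
    # build the substructure index, using all processors and storing offsets&lt;br /&gt;
    smi2atdb -j 0 -p zinc22_H04.smi zinc22_H04.atdb&lt;br /&gt;
    # optionally build the .atdb.fp fingerprint file used for ATFP searches&lt;br /&gt;
    atdb2fp -j 0 zinc22_H04.atdb&lt;br /&gt;
&lt;br /&gt;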
If there are too many large .smi files and you do not want to build each .atdb file manually, you can use this Python script, which takes all of the .smi files in the current directory and converts them to .atdb files. Make sure to set mypath to the directory containing the .smi files. You can change the variable &amp;quot;create_fp&amp;quot; to False if you don&#039;t want to create .atdb.fp files (refer to page 9 in the Arthor documentation).&lt;br /&gt;
&lt;br /&gt;
   import subprocess&lt;br /&gt;
   &lt;br /&gt;
   from os import listdir&lt;br /&gt;
   from os.path import isfile, join&lt;br /&gt;
   &lt;br /&gt;
   # directory containing the .smi files to index&lt;br /&gt;
   mypath = &amp;quot;&amp;lt;Path containing the .smi files&amp;gt;&amp;quot;&lt;br /&gt;
   onlyfiles = [f for f in listdir(mypath) if isfile(join(mypath, f))]&lt;br /&gt;
   &lt;br /&gt;
   create_fp = True  # set to False to skip building the .atdb.fp fingerprint files&lt;br /&gt;
   &lt;br /&gt;
   for file in onlyfiles:&lt;br /&gt;
      arr = file.split(&amp;quot;.&amp;quot;)&lt;br /&gt;
   &lt;br /&gt;
      if arr[len(arr) - 1] == &amp;quot;smi&amp;quot;:&lt;br /&gt;
         # -j 0 uses all processors; -p stores the offsets the Web Application needs&lt;br /&gt;
         process = subprocess.Popen(&amp;quot;/nfs/ex9/work/xyz/psql/arthor-3.3-centos7/bin/smi2atdb -j 0 -p {0} {1}.atdb&amp;quot;.format(join(mypath, file), arr[0]), shell=True)&lt;br /&gt;
         process.wait()&lt;br /&gt;
   &lt;br /&gt;
         print(&amp;quot;SUCCESS! {0}.atdb file was created!&amp;quot;.format(arr[0]))&lt;br /&gt;
   &lt;br /&gt;
         if create_fp:&lt;br /&gt;
            process = subprocess.Popen(&amp;quot;/nfs/ex9/work/xyz/psql/arthor-3.3-centos7/bin/atdb2fp -j 0 {0}.atdb&amp;quot;.format(arr[0]), shell=True)&lt;br /&gt;
            process.wait()&lt;br /&gt;
   &lt;br /&gt;
            print(&amp;quot;SUCCESS! {0}.atdb.fp file was created!&amp;quot;.format(arr[0]))&lt;br /&gt;
&lt;br /&gt;
==Uploading Indexes to the Web Application==&lt;br /&gt;
One can upload indexes to the Web Application by changing the &amp;quot;DataDir&amp;quot; variable in the arthor.cfg file to the directory holding the .atdb files. This is already set up on n-1-136 and n-5-34.&lt;br /&gt;
   &lt;br /&gt;
==Virtual Memory==&lt;br /&gt;
In addition to modifying the arthor.cfg file, virtual memory can also be tuned to make queries faster. More information can be found on pages 10-16 of the Arthor documentation.&lt;br /&gt;
&lt;br /&gt;
==Setting up Round Table==&lt;br /&gt;
This is a new feature in Arthor 3.0 and is currently in beta (January 2020). See section 2.4 in the manual.&lt;br /&gt;
As explained in the manual, &amp;quot;Round Table allows you to serve and split chemical searches across multiple host machines. The implementation provides a lightweight proxy that forwards requests to other Arthor host servers that do the actual search. Communication is done using the existing Web APIs.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Since Arthor requires CentOS 7, as of January 2020 we have 6 servers that are capable of running Arthor with Round Table.&lt;br /&gt;
&lt;br /&gt;
===Setting up Host Server===&lt;br /&gt;
If we want to add machines to the Round Table, for example &#039;nun&#039; and &#039;samekh&#039;, we need to edit their arthor.cfg files so that when the head machine forwards a query, these secondary servers know to perform the search they are given.&lt;br /&gt;
&lt;br /&gt;
   $ cat arthor.cfg&lt;br /&gt;
   MaxThreadsPerSearch=4 &lt;br /&gt;
   AutomaticIndex=false &lt;br /&gt;
   DataDir=&amp;lt;Directory where the SMILES files are located&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then run the jar server on each of these host machines containing data, on any available port.&lt;br /&gt;
&lt;br /&gt;
   java -jar /nfs/ex9/work/xyz/psql/arthor-3.3-centos7/java/arthor.jar --httpPort &amp;lt;port&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For our local machine, the arthor.cfg file will look different.&lt;br /&gt;
&lt;br /&gt;
   $ cat arthor.cfg&lt;br /&gt;
   [RoundTable] &lt;br /&gt;
   RemoteClient=http://skynet:&amp;lt;port number where jar server is running&amp;gt;/ &lt;br /&gt;
   RemoteClient=http://hal:&amp;lt;port number where jar server is running&amp;gt;/&lt;br /&gt;
&lt;br /&gt;
Please refer to Section 2 in the RoundTable Documentation file (pages 6-8) for more useful information on configuration.&lt;br /&gt;
&lt;br /&gt;
Then run the following command on n-1-136:&lt;br /&gt;
 &lt;br /&gt;
   java -jar /nfs/ex9/work/xyz/psql/arthor-3.3-centos7/java/arthor.jar --httpPort &amp;lt;port&amp;gt;&lt;br /&gt;
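&lt;br /&gt;
To confirm the head node is actually forwarding searches, you can hit it with the same search endpoint the web interface uses; the port and table name here are placeholders for your own setup:&lt;br /&gt;
    curl &amp;quot;http://n-1-136:8080/dt/&amp;lt;table&amp;gt;/search.tsv?query=c1ccccc1&amp;amp;type=Substructure&amp;amp;draw=0&amp;amp;start=0&amp;amp;length=10&amp;quot;&lt;br /&gt;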
&lt;br /&gt;
===Public Arthor===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! CentOS 7 Machine&lt;br /&gt;
! Port&lt;br /&gt;
! Total Files Size&lt;br /&gt;
! Arthor Install Location&lt;br /&gt;
! Round Table Data Directory&lt;br /&gt;
! Active&lt;br /&gt;
|-&lt;br /&gt;
| samekh&lt;br /&gt;
| 10.20.0.41:8000&lt;br /&gt;
| 2.4TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/public_arthor/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| nun&lt;br /&gt;
| 10.20.0.40:8000&lt;br /&gt;
| 2.4TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/public_arthor/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| n-9-22&lt;br /&gt;
| 10.20.9.22:8000&lt;br /&gt;
| 2.4TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /export/db4/public_arthor/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Arthor Round Table Head===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! CentOS 7 Machine&lt;br /&gt;
! Port&lt;br /&gt;
! Database&lt;br /&gt;
! Arthor Install Location&lt;br /&gt;
! Round Table Data Directory&lt;br /&gt;
! Active&lt;br /&gt;
|-&lt;br /&gt;
| samekh&lt;br /&gt;
| 10.20.0.41:8080&lt;br /&gt;
| Enamine_REAL_Q2-2020-All-13B&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| nun&lt;br /&gt;
| 10.20.0.40:8080&lt;br /&gt;
| Enamine_REAL_Q2-2020-All-41B&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Arthor Round Table Nodes===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! CentOS 7 Machine&lt;br /&gt;
! Port&lt;br /&gt;
! Database&lt;br /&gt;
! Total Files Size&lt;br /&gt;
! Arthor Install Location&lt;br /&gt;
! Round Table Data Directory&lt;br /&gt;
!Active&lt;br /&gt;
|-&lt;br /&gt;
| samekh&lt;br /&gt;
| 10.20.0.41:8008&lt;br /&gt;
| Enamine_REAL_Q2-2020-All-13B (26 slices)&lt;br /&gt;
| 4.5TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| nun&lt;br /&gt;
| 10.20.0.40:8008&lt;br /&gt;
| Enamine_REAL_Space_June_2020_S41B (aa-ae, 5 slices), Enamine_REAL_Space_June_2020_M41B (af-am, 8 slices), zinc22_2d (H04~H25, 22 slices)&lt;br /&gt;
| 5.6TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| n-1-17&lt;br /&gt;
| 10.20.1.17:8008&lt;br /&gt;
| Enamine_REAL_Space_June_2020_M41B (aa-ae, 5 slices), zinc22_2d (H25~H29, 4 slices)&lt;br /&gt;
| 3.7TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| n-5-32&lt;br /&gt;
| 10.20.5.32:8008&lt;br /&gt;
| Enamine_REAL_Space_June_2020_M41B (an~az, 13 slices), zinc22_2d (H30, 1 slice)&lt;br /&gt;
| 5.6TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| n-5-33&lt;br /&gt;
| 10.20.5.33:8008&lt;br /&gt;
| Enamine_REAL_Space_June_2020_M41B (ba-bl, 12 slices)&lt;br /&gt;
| 5.3TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Arthor Local 8081 (Datasets all local to samekh/nun)===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! CentOS 7 Machine&lt;br /&gt;
! Port&lt;br /&gt;
! Database&lt;br /&gt;
! Total Files Size&lt;br /&gt;
! Arthor Install Location&lt;br /&gt;
! Round Table Data Directory&lt;br /&gt;
!Active&lt;br /&gt;
|-&lt;br /&gt;
| samekh&lt;br /&gt;
| 10.20.0.41:8081&lt;br /&gt;
| Enamine_REAL_Q2-2020-All-13B (26 slices)&lt;br /&gt;
| 4.5TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_local_8081/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| nun&lt;br /&gt;
| 10.20.0.40:8081&lt;br /&gt;
| Enamine_REAL_Space_June_2020_S41B (aa-ae, 5 slices), Enamine_REAL_Space_June_2020_M41B (aa-an, 14 slices)&lt;br /&gt;
| 4.3TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_local_8081/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Customizing Arthor Code to our needs ==&lt;br /&gt;
If the Arthor server is launched via &amp;quot;java -jar /opt/nextmove/arthor/arthor-3.3.2-centos7/java/arthor.jar --httpPort=&amp;lt;port&amp;gt;&amp;quot;, find the directory where this command was executed. Once found, run &#039;&#039;&#039;ls -a&#039;&#039;&#039;; there should be a hidden directory called .extract.&lt;br /&gt;
=== Change Arthor Download Size (Hardcoded) ===&lt;br /&gt;
 vim .extract/webapps/ROOT/WEB-INF/static/js/index.js&lt;br /&gt;
 ?#arthor_sdf_link //search this&lt;br /&gt;
 *Look for 0!==arguments[0]?arguments[0]:&amp;lt;number&amp;gt;&lt;br /&gt;
 *Change the number to the desired amount&lt;br /&gt;
=== Change Arthor Download Size (Options) ===&lt;br /&gt;
 vim .extract/webapps/ROOT/WEB-INF/static/index.html&lt;br /&gt;
 search this: “res-download”&lt;br /&gt;
 in the div with class=”dropdown-content”&lt;br /&gt;
 add these link options and change the numbers accordingly:&lt;br /&gt;
                &amp;lt;a id=&amp;quot;arthor_tsv_link&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt;&amp;lt;i class=&amp;quot;fa fa-download&amp;quot;&amp;gt;&amp;lt;/i&amp;gt; TSV-500&amp;lt;/a&amp;gt;&lt;br /&gt;
                &amp;lt;a id=&amp;quot;arthor_tsv_link_1000&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt;&amp;lt;i class=&amp;quot;fa fa-download&amp;quot;&amp;gt;&amp;lt;/i&amp;gt; TSV-1,000&amp;lt;/a&amp;gt;&lt;br /&gt;
                &amp;lt;a id=&amp;quot;arthor_tsv_link_5000&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt;&amp;lt;i class=&amp;quot;fa fa-download&amp;quot;&amp;gt;&amp;lt;/i&amp;gt; TSV-5,000&amp;lt;/a&amp;gt;&lt;br /&gt;
                &amp;lt;a id=&amp;quot;arthor_csv_link&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt;&amp;lt;i class=&amp;quot;fa fa-download&amp;quot;&amp;gt;&amp;lt;/i&amp;gt; CSV-500&amp;lt;/a&amp;gt;&lt;br /&gt;
                &amp;lt;a id=&amp;quot;arthor_csv_link_1000&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt;&amp;lt;i class=&amp;quot;fa fa-download&amp;quot;&amp;gt;&amp;lt;/i&amp;gt; CSV-1,000&amp;lt;/a&amp;gt;&lt;br /&gt;
                &amp;lt;a id=&amp;quot;arthor_csv_link_5000&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt;&amp;lt;i class=&amp;quot;fa fa-download&amp;quot;&amp;gt;&amp;lt;/i&amp;gt; CSV-5,000&amp;lt;/a&amp;gt;&lt;br /&gt;
                &amp;lt;a id=&amp;quot;arthor_sdf_link&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt;&amp;lt;i class=&amp;quot;fa fa-download&amp;quot;&amp;gt;&amp;lt;/i&amp;gt; SDF-500&amp;lt;/a&amp;gt;&lt;br /&gt;
                &amp;lt;a id=&amp;quot;arthor_sdf_link_1000&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt;&amp;lt;i class=&amp;quot;fa fa-download&amp;quot;&amp;gt;&amp;lt;/i&amp;gt; SDF-1,000&amp;lt;/a&amp;gt;&lt;br /&gt;
                &amp;lt;a id=&amp;quot;arthor_sdf_link_5000&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt;&amp;lt;i class=&amp;quot;fa fa-download&amp;quot;&amp;gt;&amp;lt;/i&amp;gt; SDF-5,000&amp;lt;/a&amp;gt;&lt;br /&gt;
 then vim .extract/webapps/ROOT/WEB-INF/static/js/index.js&lt;br /&gt;
 For arthor 3.3.2 search this: “function $(){”&lt;br /&gt;
 For arthor 3.3 search this: “function Fs()” or “#arthor_tsv_link”&lt;br /&gt;
 Separate this function from the rest of the minified code by locating its beginning and end&lt;br /&gt;
 For arthor 3.3.2, edit the function for example:&lt;br /&gt;
 function $(){&lt;br /&gt;
        if(document.getElementById(&amp;quot;arthor_tsv_link&amp;quot;)) {&lt;br /&gt;
                var t=arguments.length&amp;gt;0&amp;amp;&amp;amp;void 0!==arguments[0]? &lt;br /&gt;
                      arguments[0]:500,e=i.a.param({query:arthor.query,type:arthor.type,draw:0,start:0,length:t,flags:arthor.flags}),n=arthor.url+&amp;quot;/dt/&amp;quot;+L(arthor.table)+&amp;quot;/search&amp;quot;;i() &lt;br /&gt;
                      (&amp;quot;#arthor_sdf_link&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.sdf?&amp;quot;+e),i()(&amp;quot;#arthor_tsv_link&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.tsv?&amp;quot;+e),i()(&amp;quot;#arthor_csv_link&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.csv?&amp;quot;+e)&lt;br /&gt;
        }&lt;br /&gt;
        if (document.getElementById(&amp;quot;arthor_tsv_link_1000&amp;quot;)) {&lt;br /&gt;
                var t=arguments.length&amp;gt;0&amp;amp;&amp;amp;void 0!==arguments[0]? &lt;br /&gt;
                      arguments[0]:1000,e=i.a.param({query:arthor.query,type:arthor.type,draw:0,start:0,length:t,flags:arthor.flags}),n=arthor.url+&amp;quot;/dt/&amp;quot;+L(arthor.table)+&amp;quot;/search&amp;quot;;i() &lt;br /&gt;
                      (&amp;quot;#arthor_sdf_link_1000&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.sdf?&amp;quot;+e),i()(&amp;quot;#arthor_tsv_link_1000&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.tsv?&amp;quot;+e),i()(&amp;quot;#arthor_csv_link_1000&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.csv?&amp;quot;+e)&lt;br /&gt;
        }&lt;br /&gt;
        if (document.getElementById(&amp;quot;arthor_tsv_link_5000&amp;quot;)) {&lt;br /&gt;
                var t=arguments.length&amp;gt;0&amp;amp;&amp;amp;void 0!==arguments[0]? &lt;br /&gt;
                      arguments[0]:5000,e=i.a.param({query:arthor.query,type:arthor.type,draw:0,start:0,length:t,flags:arthor.flags}),n=arthor.url+&amp;quot;/dt/&amp;quot;+L(arthor.table)+&amp;quot;/search&amp;quot;;i() &lt;br /&gt;
                     (&amp;quot;#arthor_sdf_link_5000&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.sdf?&amp;quot;+e),i()(&amp;quot;#arthor_tsv_link_5000&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.tsv?&amp;quot;+e),i()(&amp;quot;#arthor_csv_link_5000&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.csv?&amp;quot;+e)&lt;br /&gt;
        }&lt;br /&gt;
 }&lt;br /&gt;
 For arthor 3.3, edit the function for example:&lt;br /&gt;
 function Fs(){&lt;br /&gt;
        if(document.getElementById(&amp;quot;arthor_tsv_link&amp;quot;)) {&lt;br /&gt;
                var t = arguments.length&amp;gt;0&amp;amp;&amp;amp;void 0!==arguments[0]? &lt;br /&gt;
                        arguments[0]:500,e=i.a.param({query:arthor.query,type:arthor.type,draw:0,start:0,length:t,flags:arthor.flags}),n=arthor.url+&amp;quot;/dt/&amp;quot;+xs(arthor.table)+&amp;quot;/search&amp;quot;;i() &lt;br /&gt;
                        (&amp;quot;#arthor_sdf_link&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.sdf?&amp;quot;+e),i()(&amp;quot;#arthor_tsv_link&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.tsv?&amp;quot;+e),i()(&amp;quot;#arthor_csv_link&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.csv?&amp;quot;+e)&lt;br /&gt;
        }&lt;br /&gt;
        if (document.getElementById(&amp;quot;arthor_tsv_link_1000&amp;quot;)) {&lt;br /&gt;
                var t=arguments.length&amp;gt;0&amp;amp;&amp;amp;void 0!==arguments[0]? &lt;br /&gt;
                      arguments[0]:1000,e=i.a.param({query:arthor.query,type:arthor.type,draw:0,start:0,length:t,flags:arthor.flags}),n=arthor.url+&amp;quot;/dt/&amp;quot;+xs(arthor.table)+&amp;quot;/search&amp;quot;;i() &lt;br /&gt;
                      (&amp;quot;#arthor_sdf_link_1000&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.sdf?&amp;quot;+e),i()(&amp;quot;#arthor_tsv_link_1000&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.tsv?&amp;quot;+e),i()(&amp;quot;#arthor_csv_link_1000&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.csv?&amp;quot;+e)&lt;br /&gt;
        }&lt;br /&gt;
        if (document.getElementById(&amp;quot;arthor_tsv_link_5000&amp;quot;)) {&lt;br /&gt;
                var t=arguments.length&amp;gt;0&amp;amp;&amp;amp;void 0!==arguments[0]? &lt;br /&gt;
                      arguments[0]:5000,e=i.a.param({query:arthor.query,type:arthor.type,draw:0,start:0,length:t,flags:arthor.flags}),n=arthor.url+&amp;quot;/dt/&amp;quot;+xs(arthor.table)+&amp;quot;/search&amp;quot;;i() &lt;br /&gt;
                      (&amp;quot;#arthor_sdf_link_5000&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.sdf?&amp;quot;+e),i()(&amp;quot;#arthor_tsv_link_5000&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.tsv?&amp;quot;+e),i()(&amp;quot;#arthor_csv_link_5000&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.csv?&amp;quot;+e)&lt;br /&gt;
        }&lt;br /&gt;
 }&lt;br /&gt;
=== Take out Similarity Button ===&lt;br /&gt;
 vim .extract/webapps/ROOT/WEB-INF/static/index.html&lt;br /&gt;
 ?Similarity //search this&lt;br /&gt;
 *Comment out this line &#039;&#039;&#039;&amp;lt; li value=&amp;quot;Similarity&amp;quot; onclick=&amp;quot;setSearchType(this)&amp;quot; class=&amp;quot;first&amp;quot;&amp;gt; Similarity &amp;lt;/li &amp;gt;&#039;&#039;&#039; //spaces added at the beginning and end to prevent the wiki from converting it&lt;br /&gt;
=== Hyperlink to zinc20 ===&lt;br /&gt;
 vim .extract/webapps/ROOT/WEB-INF/static/js/index.js&lt;br /&gt;
 ?table_name //search this&lt;br /&gt;
 *find this line &amp;quot;&amp;lt; b&amp;gt;&amp;quot; + d + &amp;quot;&amp;lt; /b&amp;gt;&amp;quot;&lt;br /&gt;
 *replace with &#039;&#039;&#039;&amp;quot;&amp;lt; b&amp;gt;&amp;lt;a target=&#039;_blank&#039; href=&#039;https://zinc20.docking.org/substances/&amp;quot;+d+&amp;quot;&#039;&amp;gt;&amp;quot; + d + &amp;quot;&amp;lt;/a&amp;gt;&amp;lt;/b &amp;gt;&amp;quot;&#039;&#039;&#039; //spaces added at the beginning and end to prevent the wiki from converting it&lt;br /&gt;
&lt;br /&gt;
=== Make Input Box Work ===&lt;br /&gt;
 At the end of the Arthor config file add this:&lt;br /&gt;
    Resolver=https://sw.docking.org/util/smi2mol?smi=%s&lt;br /&gt;
 To copy the SMILES into the input box:&lt;br /&gt;
    vim .extract/webapps/ROOT/WEB-INF/static/js/index.js&lt;br /&gt;
    search this: “var e=t.src.smiles()”&lt;br /&gt;
    add this after the semicolon:&lt;br /&gt;
        document.getElementById(&amp;quot;ar_text_input&amp;quot;).value = e;&lt;/div&gt;</summary>
		<author><name>Jgutie11</name></author>
	</entry>
	<entry>
		<id>http://wiki.docking.org/index.php?title=Using_LVM_To_Mount_Drives&amp;diff=13320</id>
		<title>Using LVM To Mount Drives</title>
		<link rel="alternate" type="text/html" href="http://wiki.docking.org/index.php?title=Using_LVM_To_Mount_Drives&amp;diff=13320"/>
		<updated>2021-03-05T21:38:43Z</updated>

		<summary type="html">&lt;p&gt;Jgutie11: /* Mount Local2 using LVM */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Format Disk ==&lt;br /&gt;
Do these commands as root:&lt;br /&gt;
* fdisk -l&lt;br /&gt;
** Find disks/RAID disks to format&lt;br /&gt;
* parted&lt;br /&gt;
* print&lt;br /&gt;
* mklabel gpt&lt;br /&gt;
* mkpart logical 0GB 9995GB (based on what we read in the result of print above)&lt;br /&gt;
* print (to confirm)&lt;br /&gt;
* quit&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Mount storage disks using LVM ==&lt;br /&gt;
Reference guide: &#039;&#039;&#039;https://www.thegeekdiary.com/redhat-centos-a-beginners-guide-to-lvm-logical-volume-manager/&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Scan devices to be used as physical volumes (PV)&#039;&#039;&#039;&lt;br /&gt;
* lvmdiskscan&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Initialize the block devices&#039;&#039;&#039;&lt;br /&gt;
* pvcreate &amp;lt;drive1&amp;gt; &amp;lt;drive2&amp;gt; &amp;lt;...&amp;gt; ...&lt;br /&gt;
** Example: pvcreate /dev/sdb /dev/sdc /dev/sdd&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;To double check PV&#039;&#039;&#039;&lt;br /&gt;
* pvdisplay&lt;br /&gt;
* pvscan&lt;br /&gt;
* pvs&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;After PV, create a Volume Group (VG)&#039;&#039;&#039;&lt;br /&gt;
* vgcreate &amp;lt;name of volume&amp;gt; &amp;lt;drive1&amp;gt; &amp;lt;drive2&amp;gt; &amp;lt;...&amp;gt; ...&lt;br /&gt;
** Example: vgcreate soft2 /dev/sdb /dev/sdc /dev/sdd&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;To double check VG&#039;&#039;&#039;&lt;br /&gt;
* vgs &amp;lt;VG name&amp;gt;&lt;br /&gt;
* vgdisplay &amp;lt;VG name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; After VG, create a Logical Volume (LV) &#039;&#039;&#039;&lt;br /&gt;
* lvcreate &amp;lt;options&amp;gt; &amp;lt;name of LV&amp;gt; &amp;lt;VG Name&amp;gt;&lt;br /&gt;
** Example: lvcreate -l 100%FREE -n soft_lv soft2 &lt;br /&gt;
***-l is for storage space in percent, 100%FREE means use all space in VG&lt;br /&gt;
***-n is for name of LV&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; Double Check LV &#039;&#039;&#039;&lt;br /&gt;
* lvs /dev/&amp;lt;VG NAME&amp;gt;/&amp;lt;LV NAME&amp;gt;&lt;br /&gt;
* lvdisplay /dev/&amp;lt;VG NAME&amp;gt;/&amp;lt;LV NAME&amp;gt;&lt;br /&gt;
* lvscan&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Last step, create a File System&#039;&#039;&#039;&lt;br /&gt;
* mkfs.ext4 &amp;lt;options&amp;gt; /dev/&amp;lt;VG NAME&amp;gt;/&amp;lt;LV NAME&amp;gt;&lt;br /&gt;
* Example: mkfs.ext4 /dev/soft2/soft_lv&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039; It&#039;s time to mount &#039;&#039;&#039;&lt;br /&gt;
Make the directory first:&lt;br /&gt;
* mkdir &amp;lt;somewhere&amp;gt;&lt;br /&gt;
** Example: mkdir /export/soft2&lt;br /&gt;
* mount /dev/&amp;lt;VG NAME&amp;gt;/&amp;lt;LV NAME&amp;gt; &amp;lt;somewhere&amp;gt;&lt;br /&gt;
** Example: mount /dev/soft2/soft_lv /export/soft2/&lt;br /&gt;
Make it permanent by editing fstab:&lt;br /&gt;
* vim /etc/fstab&lt;br /&gt;
 /dev/&amp;lt;VG NAME&amp;gt;/&amp;lt;LV NAME&amp;gt;      &amp;lt;somewhere&amp;gt;       &amp;lt;type of file system&amp;gt;      &amp;lt;mount options&amp;gt;      &amp;lt;dump&amp;gt;         &amp;lt;fsck&amp;gt;&lt;br /&gt;
 example:&lt;br /&gt;
 /dev/soft2/soft_lv	/export/soft2	ext4	defaults	1	2&lt;br /&gt;
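&lt;br /&gt;
A quick sanity check that the volume is mounted and the new fstab entry works, using the example names above:&lt;br /&gt;
 # re-read fstab to verify the new entry without rebooting&lt;br /&gt;
 mount -a&lt;br /&gt;
 df -h /export/soft2&lt;br /&gt;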
&lt;br /&gt;
== Partition Disk ==&lt;br /&gt;
* https://www.techotopia.com/index.php/Adding_a_New_Disk_Drive_to_a_CentOS_System&lt;/div&gt;</summary>
		<author><name>Jgutie11</name></author>
	</entry>
	<entry>
		<id>http://wiki.docking.org/index.php?title=Using_LVM_To_Mount_Drives&amp;diff=13319</id>
		<title>Using LVM To Mount Drives</title>
		<link rel="alternate" type="text/html" href="http://wiki.docking.org/index.php?title=Using_LVM_To_Mount_Drives&amp;diff=13319"/>
		<updated>2021-03-05T00:24:34Z</updated>

		<summary type="html">&lt;p&gt;Jgutie11: /* Mount Local2 using LVM */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Format Disk ==&lt;br /&gt;
Do these commands as root:&lt;br /&gt;
* fdisk -l&lt;br /&gt;
** Find disks/RAID disks to format&lt;br /&gt;
* parted&lt;br /&gt;
* print&lt;br /&gt;
* mklabel gpt&lt;br /&gt;
* mkpart logical 0GB 9995GB (based on what we read in the result of print above)&lt;br /&gt;
* print (to confirm)&lt;br /&gt;
* quit&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Mount Local2 using LVM ==&lt;br /&gt;
Reference guide: &#039;&#039;&#039;https://www.thegeekdiary.com/redhat-centos-a-beginners-guide-to-lvm-logical-volume-manager/&#039;&#039;&#039;&lt;br /&gt;
* pvcreate /dev/sdb1&lt;br /&gt;
* vgcreate zinc /dev/sdb1&lt;br /&gt;
* lvcreate -l 100%FREE -n zinc_lv zinc&lt;br /&gt;
* mkfs.ext4 -L zinc /dev/mapper/zinc-zinc_lv&lt;br /&gt;
* mkdir /local2&lt;br /&gt;
* vim /etc/fstab&lt;br /&gt;
** /dev/zinc/zinc_lv /local2 ext4 defaults 1 2&lt;br /&gt;
* mount /local2&lt;br /&gt;
&lt;br /&gt;
== Partition Disk ==&lt;br /&gt;
* https://www.techotopia.com/index.php/Adding_a_New_Disk_Drive_to_a_CentOS_System&lt;/div&gt;</summary>
		<author><name>Jgutie11</name></author>
	</entry>
	<entry>
		<id>http://wiki.docking.org/index.php?title=Arthor_Documentation_for_Future_Developer&amp;diff=13303</id>
		<title>Arthor Documentation for Future Developer</title>
		<link rel="alternate" type="text/html" href="http://wiki.docking.org/index.php?title=Arthor_Documentation_for_Future_Developer&amp;diff=13303"/>
		<updated>2021-03-01T20:46:12Z</updated>

		<summary type="html">&lt;p&gt;Jgutie11: /* Customizing Arthor Code to our needs */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Written by Jennifer Young on December 16, 2019. Last edited January 05, 2021&lt;br /&gt;
&lt;br /&gt;
==Install and Set Up on TomCat (Method 1)==&lt;br /&gt;
Arthor ran on n-1-136, which runs CentOS Linux release 7.7.1908 (Core).  You can check the version of CentOS with the following command&lt;br /&gt;
     cat /etc/centos-release&lt;br /&gt;
&lt;br /&gt;
Check your current version of Java with the following command:&lt;br /&gt;
    java -version&lt;br /&gt;
&lt;br /&gt;
On n-1-136 we are running openjdk version &amp;quot;1.8.0_222&amp;quot;, OpenJDK Runtime Environment (build 1.8.0_222-b10), and OpenJDK 64-Bit Server VM (build 25.222-b10, mixed mode)&lt;br /&gt;
If Java is not installed, install it using yum&lt;br /&gt;
&lt;br /&gt;
==See this wiki page for more detailed information about installing Tomcat on our cluster==&lt;br /&gt;
http://wiki.docking.org/index.php/Tomcat_Installation&lt;br /&gt;
&lt;br /&gt;
==Open port for Arthor==&lt;br /&gt;
In order for Arthor to be usable in the browser, the port you wish to run it on must be opened.&lt;br /&gt;
https://www.thegeekdiary.com/how-to-open-a-ports-in-centos-rhel-7/&lt;br /&gt;
&lt;br /&gt;
===Step 1: Check Port Status===&lt;br /&gt;
Check that the port is not already in use and that Apache is not listening on it.&lt;br /&gt;
    netstat -na | grep &amp;lt;port number you are checking&amp;gt;&lt;br /&gt;
&lt;br /&gt;
    lsof -i -P |grep http&lt;br /&gt;
&lt;br /&gt;
===Step 2: Check Port Status in IP Tables===&lt;br /&gt;
    iptables-save | grep &amp;lt;port number you are checking&amp;gt;&lt;br /&gt;
&lt;br /&gt;
I skipped Step 3 from the guide, because there was a lot of information in the /etc/services file and I didn&#039;t want to edit it and break something.&lt;br /&gt;
&lt;br /&gt;
===Step 4: Open Firewall Ports===&lt;br /&gt;
I did not include the zone=public section because the stand-alone servers are usually used for private instances of Arthor and SmallWorld.&lt;br /&gt;
Run as root.&lt;br /&gt;
    firewall-cmd --add-port=&amp;lt;port number you are adding&amp;gt;/tcp --permanent&lt;br /&gt;
&lt;br /&gt;
You need to reload the firewall after a change is made.&lt;br /&gt;
    firewall-cmd --reload&lt;br /&gt;
&lt;br /&gt;
===Step 5: Check that port is working===&lt;br /&gt;
To check that the port is active, run:&lt;br /&gt;
    iptables -nL&lt;br /&gt;
&lt;br /&gt;
You should see something along the lines of: &lt;br /&gt;
    ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:&amp;lt;port number you&#039;re adding&amp;gt; ctstate NEW,UNTRACKED&lt;br /&gt;
&lt;br /&gt;
==How to run standalone Arthor instance==&lt;br /&gt;
&lt;br /&gt;
===Step 1: Use or start a bash shell===&lt;br /&gt;
You can check your default shell using&lt;br /&gt;
    echo $SHELL&lt;br /&gt;
&lt;br /&gt;
If your default shell is csh, use &lt;br /&gt;
    bash&lt;br /&gt;
to start a new bash shell in the current terminal window.  Note that echo $SHELL will show you your default shell regardless of the current shell.&lt;br /&gt;
&lt;br /&gt;
===Step 2: Set your environment variables===&lt;br /&gt;
    export ARTHOR_DIR=/opt/nextmove/arthor/arthor-3.3-centos7&lt;br /&gt;
    export PATH=$ARTHOR_DIR/bin/:$PATH&lt;br /&gt;
&lt;br /&gt;
Make sure the ARTHOR_DIR variable is set to the directory for the latest version of Arthor or whichever version you would like to test.&lt;br /&gt;
The PATH environment variable is needed if you wish to use the Arthor tools from the command line&lt;br /&gt;
&lt;br /&gt;
===Step 3: Run the arthor-server.jar===&lt;br /&gt;
    java -jar /opt/nextmove/arthor/arthor-3.3-centos7/java/arthor.jar --httpPort &amp;lt;your httpPort&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Setting environment variables for an Arthor Server==&lt;br /&gt;
Set the environment variables in the setenv.sh file.  Note: be sure to edit the file in the directory corresponding to the latest version of Tomcat.  As of December 2019, we are running 9.0.27 on n-1-136.&lt;br /&gt;
&lt;br /&gt;
   vim  /opt/tomcat/apache-tomcat-9.0.27/bin/setenv.sh&lt;br /&gt;
&lt;br /&gt;
Add the line below to the setenv.sh file above, or substitute the path to wherever you currently store the arthor.cfg file&lt;br /&gt;
   export ARTHOR_CONFIG=/usr/local/tomcat/arthor.cfg&lt;br /&gt;
&lt;br /&gt;
Here is an example of the arthor.cfg file:&lt;br /&gt;
   BinDir=/opt/nextmove/arthor/arthor-3.3-centos7/bin&lt;br /&gt;
   DataDir=/local2/arthor_local_8081/&lt;br /&gt;
   MaxConcurrentSearches=6&lt;br /&gt;
   MaxThreadsPerSearch=8&lt;br /&gt;
   AutomaticIndex=false&lt;br /&gt;
   AsyncHitCountMax=1000000&lt;br /&gt;
   Resolver=https://sw.docking.org/util/smi2mol?smi=%s&lt;br /&gt;
&lt;br /&gt;
=== Configuration Details ===&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;BinDir&#039;&#039;&#039;: the location of the Arthor command-line binaries.  These are used to generate the Arthor index files and to perform searches directly on n-1-136, for example using atdbgrep for substructure search.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;DataDir&#039;&#039;&#039;: the directory where the Arthor data files live; index files are created in, and loaded from, this location.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;MaxConcurrentSearches&#039;&#039;&#039;: controls the maximum number of searches that can run concurrently by setting the database pool size. When switching between a large number of databases it can be useful to have a larger pool size; the only trade-off is keeping file pointers open.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;MaxThreadsPerSearch&#039;&#039;&#039;: the number of threads to use for both ATDB and ATFP searches.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;AutomaticIndex&#039;&#039;&#039;: set this to false if you don&#039;t want new SMILES files added to the data directory to be indexed automatically.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;AsyncHitCountMax&#039;&#039;&#039;: the upper bound for the number of hits to retrieve in background searches.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Resolver&#039;&#039;&#039;: uses the SmallWorld API so the input box can accept a SMILES string and draw it automatically in the editor.&lt;br /&gt;
&lt;br /&gt;
Check the Arthor manual for more configuration options.&lt;br /&gt;
&lt;br /&gt;
==Background==&lt;br /&gt;
Before working with Arthor, it is recommended that you familiarize yourself with the Arthor documentation. Some useful pages to look at include 3-5, 22-25 and 33-39. Of course, reading everything would be the best!&lt;br /&gt;
&lt;br /&gt;
==Checking Disk Space Usage==&lt;br /&gt;
Before building Arthor indexes, it&#039;s always a good idea to check how much disk space is in use. Be cautious about how much space you have left, and keep checking while building indexes to make sure that you have enough room. To check, run the following command:&lt;br /&gt;
&lt;br /&gt;
   df -h /&amp;lt;directory with disk&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Building Large Databases==&lt;br /&gt;
At the moment, we are building databases of roughly 500M molecules by merging SMILES files. There are multiple ways to create a large database; one is to merge files that share the same H?? prefix, stopping once the database reaches &amp;gt; 500M molecules (or whatever upper bound you want to use). Here is some Python code that performs this merging. Essentially, the program takes all of the .smi files within an input directory, sorts them lexicographically, and merges them together in order until the size reaches &amp;gt; 500M molecules.&lt;br /&gt;
   &lt;br /&gt;
Feel free to modify it if you think a better method exists.&lt;br /&gt;
   &lt;br /&gt;
   import subprocess&lt;br /&gt;
   &lt;br /&gt;
   from os import listdir&lt;br /&gt;
   from os.path import isfile, join&lt;br /&gt;
   &lt;br /&gt;
   # directory holding the .smi files to merge&lt;br /&gt;
   mypath = &amp;quot;&amp;lt;Path to directory holding .smi files&amp;gt;&amp;quot;&lt;br /&gt;
   onlyfiles = [f for f in listdir(mypath) if isfile(join(mypath, f))]&lt;br /&gt;
   onlyfiles.sort()&lt;br /&gt;
   &lt;br /&gt;
   cur_mols = 0&lt;br /&gt;
   lower_bound = 500000000  # start a new database once a batch passes 500M molecules&lt;br /&gt;
   upper_bound = 600000000  # hard cap on the size of a single merged database&lt;br /&gt;
   files_to_merge = []&lt;br /&gt;
   &lt;br /&gt;
   def merge_files(f_t_m):&lt;br /&gt;
      # name the merged file after the first and last inputs, e.g. H04_H07.smi&lt;br /&gt;
      arr = f_t_m[0].split(&amp;quot;.&amp;quot;)&lt;br /&gt;
      arr2 = f_t_m[len(f_t_m) - 1].split(&amp;quot;.&amp;quot;)&lt;br /&gt;
      file_name_merge = arr[0] + &amp;quot;_&amp;quot; + arr2[0] + &amp;quot;.smi&amp;quot;&lt;br /&gt;
      print(&amp;quot;File being created: &amp;quot; + file_name_merge)&lt;br /&gt;
   &lt;br /&gt;
      for file in f_t_m:&lt;br /&gt;
         process = subprocess.Popen(&amp;quot;cat &amp;quot; + join(mypath, file) + &amp;quot; &amp;gt;&amp;gt; &amp;quot; + file_name_merge, shell=True)&lt;br /&gt;
         process.wait()&lt;br /&gt;
   &lt;br /&gt;
   for file in onlyfiles:&lt;br /&gt;
      arr = file.split(&amp;quot;.&amp;quot;)&lt;br /&gt;
   &lt;br /&gt;
      if arr[len(arr) - 1] == &amp;quot;smi&amp;quot;:&lt;br /&gt;
         print(&amp;quot;Working with &amp;quot; + file)&lt;br /&gt;
         # one molecule per line in a .smi file&lt;br /&gt;
         mol = sum(1 for line in open(join(mypath, file)))&lt;br /&gt;
         print(file, mol, cur_mols)&lt;br /&gt;
   &lt;br /&gt;
         if cur_mols + mol &amp;gt; lower_bound:&lt;br /&gt;
            if cur_mols + mol &amp;lt; upper_bound:&lt;br /&gt;
               # the current file still fits: merge it with the accumulated batch&lt;br /&gt;
               files_to_merge.append(file)&lt;br /&gt;
               merge_files(files_to_merge)&lt;br /&gt;
            else:&lt;br /&gt;
               # the current file would overshoot: flush the batch first,&lt;br /&gt;
               # then write the current file out as its own database&lt;br /&gt;
               if files_to_merge:&lt;br /&gt;
                  merge_files(files_to_merge)&lt;br /&gt;
               merge_files([file])&lt;br /&gt;
            cur_mols = 0&lt;br /&gt;
            files_to_merge.clear()&lt;br /&gt;
         else:&lt;br /&gt;
            cur_mols += mol&lt;br /&gt;
            files_to_merge.append(file)&lt;br /&gt;
   &lt;br /&gt;
   if len(files_to_merge) != 0:&lt;br /&gt;
      merge_files(files_to_merge)&lt;br /&gt;
&lt;br /&gt;
==Building Arthor Indexes==&lt;br /&gt;
Once you&#039;ve merged the .smi files together, it&#039;s time to start building the databases themselves. To do this we use the command &lt;br /&gt;
&lt;br /&gt;
   smi2atdb -j 0 -p &amp;lt;The .smi file&amp;gt; &amp;lt;The .atdb&amp;gt; &lt;br /&gt;
&lt;br /&gt;
The flag &amp;quot;-j 0&amp;quot; enables parallel generation and utilizes all available processors to generate the .atdb file. The &amp;quot;-p&amp;quot; flag stores the offset position in the ATDB file. Since we&#039;re building indexes for the Web Application, you must use the &amp;quot;-p&amp;quot; flag when building indexes. Please note that the name of the .smi file should also be the name of the .atdb file. That way, the Web Application knows to use these files together and correctly display the required images. Refer to pages 33-34 in the Arthor documentation for more information.&lt;br /&gt;
&lt;br /&gt;
If there are too many large .smi files and you do not want to build each .atdb file manually, you can use this Python script, which takes all of the .smi files in the current directory and converts them to .atdb files. Make sure to set mypath to the directory containing the .smi files. You can change the variable &amp;quot;create_fp&amp;quot; to False if you don&#039;t want to create .atdb.fp files (refer to page 9 in the Arthor documentation).&lt;br /&gt;
&lt;br /&gt;
   import subprocess&lt;br /&gt;
   &lt;br /&gt;
   from os import listdir&lt;br /&gt;
   from os.path import isfile, join&lt;br /&gt;
   &lt;br /&gt;
   # directory containing the .smi files to index&lt;br /&gt;
   mypath = &amp;quot;&amp;lt;Path containing the .smi files&amp;gt;&amp;quot;&lt;br /&gt;
   onlyfiles = [f for f in listdir(mypath) if isfile(join(mypath, f))]&lt;br /&gt;
   &lt;br /&gt;
   create_fp = True  # set to False to skip building the .atdb.fp fingerprint files&lt;br /&gt;
   &lt;br /&gt;
   for file in onlyfiles:&lt;br /&gt;
      arr = file.split(&amp;quot;.&amp;quot;)&lt;br /&gt;
   &lt;br /&gt;
      if arr[len(arr) - 1] == &amp;quot;smi&amp;quot;:&lt;br /&gt;
         # -j 0 uses all processors; -p stores the offsets the Web Application needs&lt;br /&gt;
         process = subprocess.Popen(&amp;quot;/nfs/ex9/work/xyz/psql/arthor-3.3-centos7/bin/smi2atdb -j 0 -p {0} {1}.atdb&amp;quot;.format(join(mypath, file), arr[0]), shell=True)&lt;br /&gt;
         process.wait()&lt;br /&gt;
   &lt;br /&gt;
         print(&amp;quot;SUCCESS! {0}.atdb file was created!&amp;quot;.format(arr[0]))&lt;br /&gt;
   &lt;br /&gt;
         if create_fp:&lt;br /&gt;
            process = subprocess.Popen(&amp;quot;/nfs/ex9/work/xyz/psql/arthor-3.3-centos7/bin/atdb2fp -j 0 {0}.atdb&amp;quot;.format(arr[0]), shell=True)&lt;br /&gt;
            process.wait()&lt;br /&gt;
   &lt;br /&gt;
            print(&amp;quot;SUCCESS! {0}.atdb.fp file was created!&amp;quot;.format(arr[0]))&lt;br /&gt;
&lt;br /&gt;
==Uploading Indexes to the Web Application==&lt;br /&gt;
One can upload indexes to the Web Application by changing the &amp;quot;DataDir&amp;quot; variable in the arthor.cfg file to the directory holding the .atdb files. This is already set up on n-1-136 and n-5-34.&lt;br /&gt;
   &lt;br /&gt;
==Virtual Memory==&lt;br /&gt;
In addition to modifying the arthor.cfg file, virtual memory can also be tuned to make queries faster. More information can be found on pages 10-16 of the Arthor documentation.&lt;br /&gt;
&lt;br /&gt;
==Setting up Round Table==&lt;br /&gt;
This is a new feature in Arthor 3.0 and is currently in beta (January 2020). See section 2.4 in the manual.&lt;br /&gt;
As explained in the manual, &amp;quot;Round Table allows you to serve and split chemical searches across multiple host machines. The implementation provides a lightweight proxy that forwards requests to other Arthor host servers that do the actual search. Communication is done using the existing Web APIs.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Since Arthor requires CentOS 7, as of January 2020 we have 6 servers that are capable of running Arthor with Round Table.&lt;br /&gt;
&lt;br /&gt;
===Setting up Host Server===&lt;br /&gt;
If we want to add machines to the Round Table, for example &#039;nun&#039; and &#039;samekh&#039;, we need to edit their arthor.cfg files so that when the head machine forwards a query, these secondary servers know to perform the search they are given.&lt;br /&gt;
&lt;br /&gt;
   $ cat arthor.cfg&lt;br /&gt;
   MaxThreadsPerSearch=4 &lt;br /&gt;
   AutomaticIndex=false &lt;br /&gt;
   DataDir=&amp;lt;Directory where the SMILES files are located&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then run the jar server on each of these host machines containing data, on any available port.&lt;br /&gt;
&lt;br /&gt;
   java -jar /nfs/ex9/work/xyz/psql/arthor-3.3-centos7/java/arthor.jar --httpPort &amp;lt;port&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For our local machine, the arthor.cfg file will look different.&lt;br /&gt;
&lt;br /&gt;
   $ cat arthor.cfg&lt;br /&gt;
   [RoundTable] &lt;br /&gt;
   RemoteClient=http://skynet:&amp;lt;port number where jar server is running&amp;gt;/ &lt;br /&gt;
   RemoteClient=http://hal:&amp;lt;port number where jar server is running&amp;gt;/&lt;br /&gt;
&lt;br /&gt;
Please refer to Section 2 in the RoundTable Documentation file (pages 6-8) for more useful information on configuration.&lt;br /&gt;
&lt;br /&gt;
Then run the following command on n-1-136:&lt;br /&gt;
 &lt;br /&gt;
   java -jar /nfs/ex9/work/xyz/psql/arthor-3.3-centos7/java/arthor.jar --httpPort &amp;lt;port&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Public Arthor===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! CentOS 7 Machine&lt;br /&gt;
! Port&lt;br /&gt;
! Total Files Size&lt;br /&gt;
! Arthor Install Location&lt;br /&gt;
! Round Table Data Directory&lt;br /&gt;
! Active&lt;br /&gt;
|-&lt;br /&gt;
| samekh&lt;br /&gt;
| 10.20.0.41:8000&lt;br /&gt;
| 2.4TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/public_arthor/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| nun&lt;br /&gt;
| 10.20.0.40:8000&lt;br /&gt;
| 2.4TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/public_arthor/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| n-9-22&lt;br /&gt;
| 10.20.9.22:8000&lt;br /&gt;
| 2.4TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /export/db4/public_arthor/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Arthor Round Table Head===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! CentOS 7 Machine&lt;br /&gt;
! Port&lt;br /&gt;
! Database&lt;br /&gt;
! Arthor Install Location&lt;br /&gt;
! Round Table Data Directory&lt;br /&gt;
! Active&lt;br /&gt;
|-&lt;br /&gt;
| samekh&lt;br /&gt;
| 10.20.0.41:8080&lt;br /&gt;
| Enamine_REAL_Q2-2020-All-13B&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| nun&lt;br /&gt;
| 10.20.0.40:8080&lt;br /&gt;
| Enamine_REAL_Q2-2020-All-41B&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Arthor Round Table Nodes===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! CentOS 7 Machine&lt;br /&gt;
! Port&lt;br /&gt;
! Database&lt;br /&gt;
! Total Files Size&lt;br /&gt;
! Arthor Install Location&lt;br /&gt;
! Round Table Data Directory&lt;br /&gt;
!Active&lt;br /&gt;
|-&lt;br /&gt;
| samekh&lt;br /&gt;
| 10.20.0.41:8008&lt;br /&gt;
| Enamine_REAL_Q2-2020-All-13B (26 slices)&lt;br /&gt;
| 4.5TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| nun&lt;br /&gt;
| 10.20.0.40:8008&lt;br /&gt;
| Enamine_REAL_Space_June_2020_S41B (aa-ae, 5 slices), Enamine_REAL_Space_June_2020_M41B (af-am, 8 slices), zinc22_2d (H04~H25, 22 slices)&lt;br /&gt;
| 5.6TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| n-1-17&lt;br /&gt;
| 10.20.1.17:8008&lt;br /&gt;
| Enamine_REAL_Space_June_2020_M41B (aa-ae, 5 slices), zinc22_2d (H25~H29, 4 slices)&lt;br /&gt;
| 3.7TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| n-5-32&lt;br /&gt;
| 10.20.5.32:8008&lt;br /&gt;
| Enamine_REAL_Space_June_2020_M41B (an~az, 13 slices), zinc22_2d (H30, 1 slice)&lt;br /&gt;
| 5.6TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| n-5-33&lt;br /&gt;
| 10.20.5.33:8008&lt;br /&gt;
| Enamine_REAL_Space_June_2020_M41B (ba-bl, 12 slices)&lt;br /&gt;
| 5.3TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Arthor Local 8081 (Datasets all local to samekh/nun)===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! CentOS 7 Machine&lt;br /&gt;
! Port&lt;br /&gt;
! Database&lt;br /&gt;
! Total File Size&lt;br /&gt;
! Arthor Install Location&lt;br /&gt;
! Round Table Data Directory&lt;br /&gt;
! Active&lt;br /&gt;
|-&lt;br /&gt;
| samekh&lt;br /&gt;
| 10.20.0.41:8081&lt;br /&gt;
| Enamine_REAL_Q2-2020-All-13B (26 slices)&lt;br /&gt;
| 4.5TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_local_8081/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| nun&lt;br /&gt;
| 10.20.0.40:8081&lt;br /&gt;
| Enamine_REAL_Space_June_2020_S41B (aa-ae, 5 slices), Enamine_REAL_Space_June_2020_M41B (aa-an, 14 slices)&lt;br /&gt;
| 4.3TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_local_8081/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Customizing Arthor Code to our needs ==&lt;br /&gt;
If the Arthor server is launched through &amp;quot;java -jar /opt/nextmove/arthor/arthor-3.3.2-centos7/java/arthor.jar --httpPort=&amp;lt;port&amp;gt;&amp;quot;, find the directory where this command was executed. Once found, run &#039;&#039;&#039;ls -a&#039;&#039;&#039;; there should be a hidden directory called .extract.&lt;br /&gt;
=== Change Arthor Download Size (Hardcoded) ===&lt;br /&gt;
 vim .extract/webapps/ROOT/WEB-INF/static/js/index.js&lt;br /&gt;
 ?#arthor_sdf_link //search this&lt;br /&gt;
 *Look for 0!==arguments[0]?arguments[0]:&amp;lt;number&amp;gt;&lt;br /&gt;
 *Change the number to the desired amount&lt;br /&gt;
=== Change Arthor Download Size (Options) ===&lt;br /&gt;
 vim .extract/webapps/ROOT/WEB-INF/static/index.html&lt;br /&gt;
 search this: “res-download”&lt;br /&gt;
 in the div with the class=”dropdown-content”&lt;br /&gt;
 add these link options and change the number accordingly:&lt;br /&gt;
                &amp;lt;a id=&amp;quot;arthor_tsv_link&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt;&amp;lt;i class=&amp;quot;fa fa-download&amp;quot;&amp;gt;&amp;lt;/i&amp;gt; TSV-500&amp;lt;/a&amp;gt;&lt;br /&gt;
                &amp;lt;a id=&amp;quot;arthor_tsv_link_1000&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt;&amp;lt;i class=&amp;quot;fa fa-download&amp;quot;&amp;gt;&amp;lt;/i&amp;gt; TSV-1,000&amp;lt;/a&amp;gt;&lt;br /&gt;
                &amp;lt;a id=&amp;quot;arthor_tsv_link_5000&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt;&amp;lt;i class=&amp;quot;fa fa-download&amp;quot;&amp;gt;&amp;lt;/i&amp;gt; TSV-5,000&amp;lt;/a&amp;gt;&lt;br /&gt;
                &amp;lt;a id=&amp;quot;arthor_csv_link&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt;&amp;lt;i class=&amp;quot;fa fa-download&amp;quot;&amp;gt;&amp;lt;/i&amp;gt; CSV-500&amp;lt;/a&amp;gt;&lt;br /&gt;
                &amp;lt;a id=&amp;quot;arthor_csv_link_1000&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt;&amp;lt;i class=&amp;quot;fa fa-download&amp;quot;&amp;gt;&amp;lt;/i&amp;gt; CSV-1,000&amp;lt;/a&amp;gt;&lt;br /&gt;
                &amp;lt;a id=&amp;quot;arthor_csv_link_5000&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt;&amp;lt;i class=&amp;quot;fa fa-download&amp;quot;&amp;gt;&amp;lt;/i&amp;gt; CSV-5,000&amp;lt;/a&amp;gt;&lt;br /&gt;
                &amp;lt;a id=&amp;quot;arthor_sdf_link&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt;&amp;lt;i class=&amp;quot;fa fa-download&amp;quot;&amp;gt;&amp;lt;/i&amp;gt; SDF-500&amp;lt;/a&amp;gt;&lt;br /&gt;
                &amp;lt;a id=&amp;quot;arthor_sdf_link_1000&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt;&amp;lt;i class=&amp;quot;fa fa-download&amp;quot;&amp;gt;&amp;lt;/i&amp;gt; SDF-1,000&amp;lt;/a&amp;gt;&lt;br /&gt;
                &amp;lt;a id=&amp;quot;arthor_sdf_link_5000&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt;&amp;lt;i class=&amp;quot;fa fa-download&amp;quot;&amp;gt;&amp;lt;/i&amp;gt; SDF-5,000&amp;lt;/a&amp;gt;&lt;br /&gt;
 then vim .extract/webapps/ROOT/WEB-INF/static/js/index.js&lt;br /&gt;
 For arthor 3.3.2 search this: “function $(){”&lt;br /&gt;
 For arthor 3.3 search this: “function Fs()” or “#arthor_tsv_link”&lt;br /&gt;
 Isolate this function from the rest of the code (find where the function begins and ends)&lt;br /&gt;
 For arthor 3.3.2, edit the function for example:&lt;br /&gt;
 function $(){&lt;br /&gt;
        if(document.getElementById(&amp;quot;arthor_tsv_link&amp;quot;)) {&lt;br /&gt;
                var t=arguments.length&amp;gt;0&amp;amp;&amp;amp;void 0!==arguments[0]? &lt;br /&gt;
                      arguments[0]:500,e=i.a.param({query:arthor.query,type:arthor.type,draw:0,start:0,length:t,flags:arthor.flags}),n=arthor.url+&amp;quot;/dt/&amp;quot;+L(arthor.table)+&amp;quot;/search&amp;quot;;i() &lt;br /&gt;
                      (&amp;quot;#arthor_sdf_link&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.sdf?&amp;quot;+e),i()(&amp;quot;#arthor_tsv_link&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.tsv?&amp;quot;+e),i()(&amp;quot;#arthor_csv_link&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.csv?&amp;quot;+e)&lt;br /&gt;
        }&lt;br /&gt;
        if (document.getElementById(&amp;quot;arthor_tsv_link_1000&amp;quot;)) {&lt;br /&gt;
                var t=arguments.length&amp;gt;0&amp;amp;&amp;amp;void 0!==arguments[0]? &lt;br /&gt;
                      arguments[0]:1000,e=i.a.param({query:arthor.query,type:arthor.type,draw:0,start:0,length:t,flags:arthor.flags}),n=arthor.url+&amp;quot;/dt/&amp;quot;+L(arthor.table)+&amp;quot;/search&amp;quot;;i() &lt;br /&gt;
                      (&amp;quot;#arthor_sdf_link_1000&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.sdf?&amp;quot;+e),i()(&amp;quot;#arthor_tsv_link_1000&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.tsv?&amp;quot;+e),i()(&amp;quot;#arthor_csv_link_1000&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.csv?&amp;quot;+e)&lt;br /&gt;
        }&lt;br /&gt;
        if (document.getElementById(&amp;quot;arthor_tsv_link_5000&amp;quot;)) {&lt;br /&gt;
                var t=arguments.length&amp;gt;0&amp;amp;&amp;amp;void 0!==arguments[0]? &lt;br /&gt;
                      arguments[0]:5000,e=i.a.param({query:arthor.query,type:arthor.type,draw:0,start:0,length:t,flags:arthor.flags}),n=arthor.url+&amp;quot;/dt/&amp;quot;+L(arthor.table)+&amp;quot;/search&amp;quot;;i() &lt;br /&gt;
                     (&amp;quot;#arthor_sdf_link_5000&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.sdf?&amp;quot;+e),i()(&amp;quot;#arthor_tsv_link_5000&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.tsv?&amp;quot;+e),i()(&amp;quot;#arthor_csv_link_5000&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.csv?&amp;quot;+e)&lt;br /&gt;
        }&lt;br /&gt;
 }&lt;br /&gt;
 For arthor 3.3, edit the function for example:&lt;br /&gt;
 function Fs(){&lt;br /&gt;
        if(document.getElementById(&amp;quot;arthor_tsv_link&amp;quot;)) {&lt;br /&gt;
                var t = arguments.length&amp;gt;0&amp;amp;&amp;amp;void 0!==arguments[0]? &lt;br /&gt;
                        arguments[0]:500,e=i.a.param({query:arthor.query,type:arthor.type,draw:0,start:0,length:t,flags:arthor.flags}),n=arthor.url+&amp;quot;/dt/&amp;quot;+xs(arthor.table)+&amp;quot;/search&amp;quot;;i() &lt;br /&gt;
                        (&amp;quot;#arthor_sdf_link&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.sdf?&amp;quot;+e),i()(&amp;quot;#arthor_tsv_link&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.tsv?&amp;quot;+e),i()(&amp;quot;#arthor_csv_link&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.csv?&amp;quot;+e)&lt;br /&gt;
        }&lt;br /&gt;
        if (document.getElementById(&amp;quot;arthor_tsv_link_1000&amp;quot;)) {&lt;br /&gt;
                var t=arguments.length&amp;gt;0&amp;amp;&amp;amp;void 0!==arguments[0]? &lt;br /&gt;
                      arguments[0]:1000,e=i.a.param({query:arthor.query,type:arthor.type,draw:0,start:0,length:t,flags:arthor.flags}),n=arthor.url+&amp;quot;/dt/&amp;quot;+xs(arthor.table)+&amp;quot;/search&amp;quot;;i() &lt;br /&gt;
                      (&amp;quot;#arthor_sdf_link_1000&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.sdf?&amp;quot;+e),i()(&amp;quot;#arthor_tsv_link_1000&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.tsv?&amp;quot;+e),i()(&amp;quot;#arthor_csv_link_1000&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.csv?&amp;quot;+e)&lt;br /&gt;
        }&lt;br /&gt;
        if (document.getElementById(&amp;quot;arthor_tsv_link_5000&amp;quot;)) {&lt;br /&gt;
                var t=arguments.length&amp;gt;0&amp;amp;&amp;amp;void 0!==arguments[0]? &lt;br /&gt;
                      arguments[0]:5000,e=i.a.param({query:arthor.query,type:arthor.type,draw:0,start:0,length:t,flags:arthor.flags}),n=arthor.url+&amp;quot;/dt/&amp;quot;+xs(arthor.table)+&amp;quot;/search&amp;quot;;i() &lt;br /&gt;
                      (&amp;quot;#arthor_sdf_link_5000&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.sdf?&amp;quot;+e),i()(&amp;quot;#arthor_tsv_link_5000&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.tsv?&amp;quot;+e),i()(&amp;quot;#arthor_csv_link_5000&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.csv?&amp;quot;+e)&lt;br /&gt;
        }&lt;br /&gt;
 }&lt;br /&gt;
=== Take out Similarity Button ===&lt;br /&gt;
 vim .extract/webapps/ROOT/WEB-INF/static/index.html&lt;br /&gt;
 ?Similarity //search this&lt;br /&gt;
 *Comment out this line &#039;&#039;&#039;&amp;lt; li value=&amp;quot;Similarity&amp;quot; onclick=&amp;quot;setSearchType(this)&amp;quot; class=&amp;quot;first&amp;quot;&amp;gt; Similarity &amp;lt;/li &amp;gt;&#039;&#039;&#039; //spaces added at the beginning and end to prevent the wiki from converting it&lt;br /&gt;
=== Hyperlink to zinc20 ===&lt;br /&gt;
 vim .extract/webapps/ROOT/WEB-INF/static/js/index.js&lt;br /&gt;
 ?table_name //search this&lt;br /&gt;
 *find this line &amp;quot;&amp;lt; b&amp;gt;&amp;quot; + h + &amp;quot;&amp;lt; /b&amp;gt;&amp;quot;&lt;br /&gt;
 *replace with &#039;&#039;&#039;&amp;quot;&amp;lt; b&amp;gt;&amp;lt;a target=&#039;_blank&#039; href=&#039;https://zinc20.docking.org/substances/&amp;quot;+h+&amp;quot;&#039;&amp;gt;&amp;quot; + h + &amp;quot;&amp;lt;/a&amp;gt;&amp;lt;/b &amp;gt;&amp;quot;&#039;&#039;&#039; //spaces added at the beginning and end to prevent the wiki from converting it&lt;br /&gt;
=== Make Input Box Work ===&lt;br /&gt;
 At the end of the Arthor config file add this:&lt;br /&gt;
    Resolver=https://sw.docking.org/util/smi2mol?smi=%s&lt;br /&gt;
 To copy smiles in the input box:&lt;br /&gt;
    vim .extract/webapps/ROOT/WEB-INF/static/js/index.js&lt;br /&gt;
    search this: “var e=t.src.smiles()”&lt;br /&gt;
    add this after the semi-colon&lt;br /&gt;
        document.getElementById(&amp;quot;ar_text_input&amp;quot;).value = e;&lt;/div&gt;</summary>
		<author><name>Jgutie11</name></author>
	</entry>
	<entry>
		<id>http://wiki.docking.org/index.php?title=Arthor_Documentation_for_Future_Developer&amp;diff=13302</id>
		<title>Arthor Documentation for Future Developer</title>
		<link rel="alternate" type="text/html" href="http://wiki.docking.org/index.php?title=Arthor_Documentation_for_Future_Developer&amp;diff=13302"/>
		<updated>2021-03-01T20:42:12Z</updated>

		<summary type="html">&lt;p&gt;Jgutie11: /* Customizing Arthor Code to our needs */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Written by Jennifer Young on December 16, 2019. Last edited January 05, 2021&lt;br /&gt;
&lt;br /&gt;
==Install and Set Up on TomCat (Method 1)==&lt;br /&gt;
Arthor runs on n-1-136, which runs CentOS Linux release 7.7.1908 (Core).  You can check the version of CentOS with the following command:&lt;br /&gt;
     cat /etc/centos-release&lt;br /&gt;
&lt;br /&gt;
Check your current version of Java with the following command:&lt;br /&gt;
    java -version&lt;br /&gt;
&lt;br /&gt;
On n-1-136 we are running openjdk version &amp;quot;1.8.0_222&amp;quot;, OpenJDK Runtime Environment (build 1.8.0_222-b10), and OpenJDK 64-Bit Server VM (build 25.222-b10, mixed mode)&lt;br /&gt;
If Java is not installed, install it using yum.&lt;br /&gt;
&lt;br /&gt;
==See this wiki page for more detailed information about installing Tomcat on our cluster==&lt;br /&gt;
http://wiki.docking.org/index.php/Tomcat_Installation&lt;br /&gt;
&lt;br /&gt;
==Open port for Arthor==&lt;br /&gt;
In order for Arthor to be usable in the browser, the port you wish to run it on must be opened.&lt;br /&gt;
https://www.thegeekdiary.com/how-to-open-a-ports-in-centos-rhel-7/&lt;br /&gt;
&lt;br /&gt;
===Step 1: Check Port Status===&lt;br /&gt;
Check that the port is not open and that Apache is not showing that port. &lt;br /&gt;
    netstat -na | grep &amp;lt;port number you are checking&amp;gt;&lt;br /&gt;
&lt;br /&gt;
    lsof -i -P |grep http&lt;br /&gt;
&lt;br /&gt;
===Step 2: Check Port Status in IP Tables===&lt;br /&gt;
    iptables-save | grep &amp;lt;port number you are checking&amp;gt;&lt;br /&gt;
&lt;br /&gt;
I skipped Step 3 from the guide, because there was a lot of information in the /etc/services file and I didn&#039;t want to edit it and break something.&lt;br /&gt;
&lt;br /&gt;
===Step 4: Open Firewall Ports===&lt;br /&gt;
I did not include the zone=public section because the stand-alone servers are usually used for private instances of Arthor and SmallWorld.&lt;br /&gt;
Run as root.&lt;br /&gt;
    firewall-cmd --add-port=&amp;lt;port number you are adding&amp;gt;/tcp --permanent&lt;br /&gt;
&lt;br /&gt;
You need to reload the firewall after a change is made.&lt;br /&gt;
    firewall-cmd --reload&lt;br /&gt;
&lt;br /&gt;
===Step 5: Check that port is working===&lt;br /&gt;
To check that the port is active, run:&lt;br /&gt;
    iptables -nL&lt;br /&gt;
&lt;br /&gt;
You should see something along the lines of: &lt;br /&gt;
    ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:&amp;lt;port number you&#039;re adding&amp;gt; ctstate NEW,UNTRACKED&lt;br /&gt;
&lt;br /&gt;
==How to run standalone Arthor instance==&lt;br /&gt;
&lt;br /&gt;
===Step 1: Use or start a bash shell===&lt;br /&gt;
You can check your default shell using&lt;br /&gt;
    echo $SHELL&lt;br /&gt;
&lt;br /&gt;
If your default shell is csh, use &lt;br /&gt;
    bash&lt;br /&gt;
to start a new bash shell in the current terminal window.  Note that echo $SHELL will show you your default shell regardless of the current shell.&lt;br /&gt;
&lt;br /&gt;
===Step 2: Set your environment variables===&lt;br /&gt;
    export ARTHOR_DIR=/opt/nextmove/arthor/arthor-3.3-centos7&lt;br /&gt;
    export PATH=$ARTHOR_DIR/bin/:$PATH&lt;br /&gt;
&lt;br /&gt;
Make sure the ARTHOR_DIR variable is set to the directory for the latest version of Arthor or whichever version you would like to test.&lt;br /&gt;
The PATH environment variable is needed if you wish to use the Arthor tools from the command line&lt;br /&gt;
&lt;br /&gt;
===Step 3: Run the arthor-server.jar===&lt;br /&gt;
    java -jar /opt/nextmove/arthor/arthor-3.3-centos7/java/arthor.jar --httpPort &amp;lt;your httpPort&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Setting environment variables for an Arthor Server==&lt;br /&gt;
Set the environment variables in the setenv.sh file.  Note: Be sure to edit the file in the directory corresponding to the latest version of TomCat.  As of December 2019, we are running 9.0.27 on n-1-136.&lt;br /&gt;
&lt;br /&gt;
   vim  /opt/tomcat/apache-tomcat-9.0.27/bin/setenv.sh&lt;br /&gt;
&lt;br /&gt;
Add the line below to the setenv.sh file above, or substitute the path to wherever you currently store the arthor.cfg file&lt;br /&gt;
   export ARTHOR_CONFIG=/usr/local/tomcat/arthor.cfg&lt;br /&gt;
&lt;br /&gt;
Here is an example of the arthor.cfg file:&lt;br /&gt;
   BinDir=/opt/nextmove/arthor/arthor-3.3-centos7/bin&lt;br /&gt;
   DataDir=/local2/arthor_local_8081/&lt;br /&gt;
   MaxConcurrentSearches=6&lt;br /&gt;
   MaxThreadsPerSearch=8&lt;br /&gt;
   AutomaticIndex=false&lt;br /&gt;
   AsyncHitCountMax=1000000&lt;br /&gt;
   Resolver=https://sw.docking.org/util/smi2mol?smi=%s&lt;br /&gt;
&lt;br /&gt;
=== Configuration Details ===&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;BinDir&#039;&#039;&#039;: is the location of the Arthor command line binaries.  These are used to generate the Arthor index files and to perform searches directly on n-1-136.  An example of this would be using atdbgrep for substructure search. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;DataDir&#039;&#039;&#039;: The directory where the Arthor data files live; this is where the index files will be created and loaded from.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;MaxConcurrentSearches&#039;&#039;&#039;: Controls the maximum number of searches that can be run concurrently by setting the database pool size. When switching between a large number of databases it can be useful to have a larger pool size; the only trade-off is keeping file pointers open.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;MaxThreadsPerSearch&#039;&#039;&#039;: The number of threads to use for both ATDB and ATFP searches.&lt;br /&gt;
&lt;br /&gt;
*Set &#039;&#039;&#039;AutomaticIndex&#039;&#039;&#039; to false if you don&#039;t want new SMILES files added to the data directory to be indexed automatically.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;AsyncHitCountMax&#039;&#039;&#039;: The upper-bound for the number of hits to retrieve in background searches.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Resolver&#039;&#039;&#039;: Uses the SmallWorld API to let the input box accept a SMILES string and automatically draw it on the sketcher (see the sketch after this list).&lt;br /&gt;
&lt;br /&gt;
Check Arthor manual for more configuration options.&lt;br /&gt;
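&lt;br /&gt;
To see what the Resolver setting does, the sketch below expands its %s template by hand and fetches the molblock for a SMILES string. This is a minimal illustration, not part of Arthor itself; ethanol is an arbitrary example molecule, and the URL is the one from the configuration above.&lt;br /&gt;
&lt;br /&gt;
   import urllib.parse&lt;br /&gt;
   import urllib.request&lt;br /&gt;
   &lt;br /&gt;
   resolver = &amp;quot;https://sw.docking.org/util/smi2mol?smi=%s&amp;quot;&lt;br /&gt;
   smiles = &amp;quot;CCO&amp;quot;  # arbitrary example molecule (ethanol)&lt;br /&gt;
   &lt;br /&gt;
   # Arthor substitutes the sketcher input for %s; we do the same by hand&lt;br /&gt;
   url = resolver % urllib.parse.quote(smiles)&lt;br /&gt;
   print(urllib.request.urlopen(url, timeout=10).read().decode())&lt;br /&gt;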
&lt;br /&gt;
==Background==&lt;br /&gt;
Before working with Arthor, it is recommended that you familiarize yourself with the Arthor documentation. Some useful pages to look at include 3-5, 22-25 and 33-39. Of course, reading everything would be the best!&lt;br /&gt;
&lt;br /&gt;
==Checking Disk Space Usage==&lt;br /&gt;
Before building Arthor indexes, it&#039;s always a good idea to check what percentage of the disk is in use. Be cautious with how much space you have left, and keep checking while building indexes to make sure you still have enough. To check, run the following command:&lt;br /&gt;
&lt;br /&gt;
   df -h /&amp;lt;directory with disk&amp;gt;&lt;br /&gt;
&lt;br /&gt;
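If you prefer to monitor this from Python while indexes build, here is a minimal sketch using only the standard library (the path is a placeholder; point it at the disk holding your Arthor data):&lt;br /&gt;
&lt;br /&gt;
   import shutil&lt;br /&gt;
   &lt;br /&gt;
   # Placeholder: a directory on the disk you are about to fill with indexes&lt;br /&gt;
   path = &amp;quot;/local2/arthor_database/&amp;quot;&lt;br /&gt;
   &lt;br /&gt;
   total, used, free = shutil.disk_usage(path)&lt;br /&gt;
   percent_used = 100.0 * used / total&lt;br /&gt;
   print(&amp;quot;{0:.1f}% used, {1:.1f} GB free&amp;quot;.format(percent_used, free / 1e9))&lt;br /&gt;
   &lt;br /&gt;
   # Think twice about starting an index build when space is nearly gone&lt;br /&gt;
   if percent_used &amp;gt; 90:&lt;br /&gt;
      print(&amp;quot;WARNING: less than 10% of the disk is free!&amp;quot;)&lt;br /&gt;
&lt;br /&gt;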
==Building Large Databases==&lt;br /&gt;
At the moment, we are building databases of roughly 500M molecules by merging SMILES files. There are multiple ways to create such large databases; one is to merge files that share the same H?? prefix, stopping once the database exceeds 500M molecules (or whatever upper bound you want to use). Here is some Python code that implements this merging process. Essentially, the program takes all of the .smi files within an input directory, sorts them lexicographically, and merges them together in order until the size exceeds 500M molecules.&lt;br /&gt;
   &lt;br /&gt;
Feel free to modify it if you think a better method exists.&lt;br /&gt;
   &lt;br /&gt;
   import subprocess&lt;br /&gt;
   import sys&lt;br /&gt;
   import os&lt;br /&gt;
   &lt;br /&gt;
   from os import listdir&lt;br /&gt;
   from os.path import isfile, join&lt;br /&gt;
   &lt;br /&gt;
   mypath = &amp;quot;&amp;lt;Path to directory holding .smi files&amp;gt;&amp;quot;&lt;br /&gt;
   onlyfiles = [f for f in listdir(mypath) if isfile(join(mypath, f))]&lt;br /&gt;
   onlyfiles.sort()&lt;br /&gt;
   &lt;br /&gt;
   cur_mols = 0&lt;br /&gt;
   lower_bound = 500000000&lt;br /&gt;
   upper_bound = 600000000&lt;br /&gt;
   files_to_merge = []&lt;br /&gt;
   &lt;br /&gt;
   def merge_files(f_t_m):&lt;br /&gt;
      arr = f_t_m[0].split(&amp;quot;.&amp;quot;)&lt;br /&gt;
      arr2 = f_t_m[len(f_t_m) - 1].split(&amp;quot;.&amp;quot;)&lt;br /&gt;
      file_name_merge = (arr[0] + &amp;quot;_&amp;quot; + arr2[0] + &amp;quot;.smi&amp;quot;)&lt;br /&gt;
      print (&amp;quot;File being created: &amp;quot; + file_name_merge)&lt;br /&gt;
   &lt;br /&gt;
      for file in f_t_m:&lt;br /&gt;
         process = subprocess.Popen(&amp;quot;cat &amp;quot; + join(mypath, file) + &amp;quot; &amp;gt;&amp;gt; &amp;quot; + file_name_merge, shell=True)&lt;br /&gt;
         process.wait()&lt;br /&gt;
   &lt;br /&gt;
   for file in onlyfiles:&lt;br /&gt;
      arr = file.split(&amp;quot;.&amp;quot;)&lt;br /&gt;
   &lt;br /&gt;
      if (arr[len(arr) - 1] == &amp;quot;smi&amp;quot;):&lt;br /&gt;
         print(&amp;quot;Working with &amp;quot; + file)&lt;br /&gt;
         mol = sum(1 for line in open(join(mypath, file)))&lt;br /&gt;
         print(file, mol, cur_mols)&lt;br /&gt;
   &lt;br /&gt;
         if (cur_mols + mol &amp;gt; lower_bound):&lt;br /&gt;
            if (cur_mols + mol &amp;lt; upper_bound):&lt;br /&gt;
               files_to_merge.append(file)&lt;br /&gt;
               merge_files(files_to_merge)&lt;br /&gt;
               cur_mols = 0&lt;br /&gt;
               files_to_merge.clear()&lt;br /&gt;
            else:&lt;br /&gt;
               merge_files(files_to_merge)&lt;br /&gt;
               files_to_merge.clear()&lt;br /&gt;
               files_to_merge.append(file)&lt;br /&gt;
               merge_files(files_to_merge)&lt;br /&gt;
               cur_mols = 0&lt;br /&gt;
               files_to_merge.clear()&lt;br /&gt;
         else:&lt;br /&gt;
            cur_mols += mol&lt;br /&gt;
            files_to_merge.append(file)&lt;br /&gt;
   &lt;br /&gt;
   if (len(files_to_merge) != 0):&lt;br /&gt;
      merge_files(files_to_merge)&lt;br /&gt;
&lt;br /&gt;
==Building Arthor Indexes==&lt;br /&gt;
Once you&#039;ve merged the .smi files together, it&#039;s time to start building the databases themselves. To do this we use the command &lt;br /&gt;
&lt;br /&gt;
   smi2atdb -j 0 -p &amp;lt;The .smi file&amp;gt; &amp;lt;The .atdb&amp;gt; &lt;br /&gt;
&lt;br /&gt;
The flag &amp;quot;-j 0&amp;quot; enables parallel generation and utilizes all available processors to generate the .atdb file. The &amp;quot;-p&amp;quot; flag stores the offset position in the ATDB file. Since we&#039;re building indexes for the Web Application, you must use the &amp;quot;-p&amp;quot; flag when building indexes. Please note that the name of the .smi file should also be the name of the .atdb file. That way, the Web Application knows to use these files together and correctly display the required images. Refer to pages 33-34 in the Arthor documentation for more information.&lt;br /&gt;
&lt;br /&gt;
If there are too many large .smi files and you do not want to manually build each .atdb file, you can use this python script which takes all of the .smi files in the current directory and converts them to .atdb files. Make sure to modify mypath to the directory containing the .smi files. You can change the variable &amp;quot;create_fp&amp;quot; to false if you don&#039;t want to create .atdb.fp files (refer to page 9 in the Arthor documentation).&lt;br /&gt;
&lt;br /&gt;
   import subprocess&lt;br /&gt;
   import sys&lt;br /&gt;
   import os&lt;br /&gt;
   &lt;br /&gt;
   from os import listdir&lt;br /&gt;
   from os.path import isfile, join&lt;br /&gt;
   &lt;br /&gt;
   mypath = &amp;quot;&amp;lt;Path containing the .smi files&amp;gt;&amp;quot;&lt;br /&gt;
   onlyfiles = [f for f in listdir(mypath) if isfile(join(mypath, f))]&lt;br /&gt;
   &lt;br /&gt;
   create_fp = True&lt;br /&gt;
   &lt;br /&gt;
   for file in onlyfiles:&lt;br /&gt;
      arr = file.split(&amp;quot;.&amp;quot;)&lt;br /&gt;
   &lt;br /&gt;
      if (arr[len(arr) - 1] == &amp;quot;smi&amp;quot;):&lt;br /&gt;
         process = subprocess.Popen(&amp;quot;/nfs/ex9/work/xyz/psql/arthor-3.3-centos7/bin/smi2atdb -j 0 -p {0} {1}.atdb&amp;quot;.format(join(mypath, file), arr[0]), shell=True)&lt;br /&gt;
         process.wait()&lt;br /&gt;
   &lt;br /&gt;
         print(&amp;quot;SUCCESS! {0}.atdb file was created!&amp;quot;.format(arr[0]))&lt;br /&gt;
   &lt;br /&gt;
         if (create_fp):&lt;br /&gt;
            process = subprocess.Popen(&amp;quot;/nfs/ex9/work/xyz/psql/arthor-3.3-centos7/bin/atdb2fp -j 0 {0}.atdb&amp;quot;.format(arr[0]), shell=True)&lt;br /&gt;
            process.wait()&lt;br /&gt;
      &lt;br /&gt;
            print(&amp;quot;SUCCESS! {0}.atdb.fp file was created!&amp;quot;.format(arr[0]))&lt;br /&gt;
&lt;br /&gt;
==Uploading Indexes to the Web Application==&lt;br /&gt;
One can upload indexes to the Web Application by changing the &amp;quot;DATADIR&amp;quot; variable in the arthor.cfg file to the directory holding the .atdb files. This is already set up on n-1-136 and n-5-34.&lt;br /&gt;
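&lt;br /&gt;
For example, to point an instance at the public index directory used on n-9-22 (taken from the tables below), the line would read:&lt;br /&gt;
   DATADIR=/export/db4/public_arthor/&lt;br /&gt;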
   &lt;br /&gt;
==Virtual Memory==&lt;br /&gt;
In addition to modifying the arthor.cfg file, virtual memory settings can also be used to make queries faster. More information can be found on pages 10-16 of the Arthor Documentation.&lt;br /&gt;
&lt;br /&gt;
==Setting up Round Table==&lt;br /&gt;
This is a new feature in Arthor 3.0 and is currently in beta (January 2020). See section 2.4 in the manual.&lt;br /&gt;
As explained in the manual, &amp;quot;Round Table allows you to serve and split chemical searches across multiple host machines.  The implementation provides a lightweight proxy that forwards requests to other Arthor host servers that do the actual search.  Communication is done using the existing Web APIs.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Since Arthor requires CentOS 7, as of January 2020 we have 6 servers that are capable of running Arthor with Round Table.&lt;br /&gt;
&lt;br /&gt;
===Setting up Host Server===&lt;br /&gt;
If we want to add machines to the Round Table, for example &#039;nun&#039; and &#039;samekh&#039;, we need to edit their arthor.cfg files so that when our local machine passes commands, these secondary servers know to perform the search they are given.&lt;br /&gt;
&lt;br /&gt;
   $ cat arthor.cfg&lt;br /&gt;
   MaxThreadsPerSearch=4 &lt;br /&gt;
   AutomaticIndex=false &lt;br /&gt;
   DATADIR=&amp;lt;Directory where the SMILES files are located&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We then run the jar server on each of these data-hosting machines, on any available port.&lt;br /&gt;
&lt;br /&gt;
   java -jar /nfs/ex9/work/xyz/psql/arthor-3.3-centos7/java/arthor.jar --httpPort &amp;lt;port&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For our local machine, the arthor.cfg file will look different.&lt;br /&gt;
&lt;br /&gt;
   $ cat arthor.cfg&lt;br /&gt;
   [RoundTable] &lt;br /&gt;
   RemoteClient=http://skynet:&amp;lt;port number where jar server is running&amp;gt;/ &lt;br /&gt;
   RemoteClient=http://hal:&amp;lt;port number where jar server is running&amp;gt;/&lt;br /&gt;
&lt;br /&gt;
Please refer to Section 2 in the RoundTable Documentation file (pages 6-8) for more useful information on configuration.&lt;br /&gt;
&lt;br /&gt;
Then run the following command on n-1-136:&lt;br /&gt;
 &lt;br /&gt;
   java -jar /nfs/ex9/work/xyz/psql/arthor-3.3-centos7/java/arthor.jar --httpPort &amp;lt;port&amp;gt;&lt;br /&gt;
&lt;br /&gt;
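Before relying on the proxy, it can save debugging time to confirm that each RemoteClient URL actually answers. Below is a minimal Python sketch; the host names and port are the placeholders from the config above, and it only assumes that a running Arthor jar server answers a plain HTTP GET on its root URL:&lt;br /&gt;
&lt;br /&gt;
   import urllib.request&lt;br /&gt;
   &lt;br /&gt;
   # Placeholders: substitute your actual Round Table hosts and ports&lt;br /&gt;
   remote_clients = [&amp;quot;http://skynet:8080/&amp;quot;, &amp;quot;http://hal:8080/&amp;quot;]&lt;br /&gt;
   &lt;br /&gt;
   for url in remote_clients:&lt;br /&gt;
      try:&lt;br /&gt;
         # Any HTTP response at all means the jar server is up and reachable&lt;br /&gt;
         with urllib.request.urlopen(url, timeout=5) as resp:&lt;br /&gt;
            print(url, &amp;quot;responded with status&amp;quot;, resp.status)&lt;br /&gt;
      except Exception as exc:&lt;br /&gt;
         print(url, &amp;quot;is unreachable:&amp;quot;, exc)&lt;br /&gt;
&lt;br /&gt;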
===Public Arthor===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! CentOS 7 Machine&lt;br /&gt;
! Port&lt;br /&gt;
! Total File Size&lt;br /&gt;
! Arthor Install Location&lt;br /&gt;
! Round Table Data Directory&lt;br /&gt;
! Active&lt;br /&gt;
|-&lt;br /&gt;
| samekh&lt;br /&gt;
| 10.20.0.41:8000&lt;br /&gt;
| 2.4TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/public_arthor/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| nun&lt;br /&gt;
| 10.20.0.40:8000&lt;br /&gt;
| 2.4TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/public_arthor/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| n-9-22&lt;br /&gt;
| 10.20.9.22:8000&lt;br /&gt;
| 2.4TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /export/db4/public_arthor/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Arthor Round Table Head===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! CentOS 7 Machine&lt;br /&gt;
! Port&lt;br /&gt;
! Database&lt;br /&gt;
! Arthor Install Location&lt;br /&gt;
! Round Table Data Directory&lt;br /&gt;
! Active&lt;br /&gt;
|-&lt;br /&gt;
| samekh&lt;br /&gt;
| 10.20.0.41:8080&lt;br /&gt;
| Enamine_REAL_Q2-2020-All-13B&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| nun&lt;br /&gt;
| 10.20.0.40:8080&lt;br /&gt;
| Enamine_REAL_Q2-2020-All-41B&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Arthor Round Table Nodes===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! CentOS 7 Machine&lt;br /&gt;
! Port&lt;br /&gt;
! Database&lt;br /&gt;
! Total File Size&lt;br /&gt;
! Arthor Install Location&lt;br /&gt;
! Round Table Data Directory&lt;br /&gt;
! Active&lt;br /&gt;
|-&lt;br /&gt;
| samekh&lt;br /&gt;
| 10.20.0.41:8008&lt;br /&gt;
| Enamine_REAL_Q2-2020-All-13B (26 slices)&lt;br /&gt;
| 4.5TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| nun&lt;br /&gt;
| 10.20.0.40:8008&lt;br /&gt;
| Enamine_REAL_Space_June_2020_S41B (aa-ae, 5 slices), Enamine_REAL_Space_June_2020_M41B (af-am, 8 slices), zinc22_2d (H04~H25, 22 slices)&lt;br /&gt;
| 5.6TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| n-1-17&lt;br /&gt;
| 10.20.1.17:8008&lt;br /&gt;
| Enamine_REAL_Space_June_2020_M41B (aa-ae, 5 slices), zinc22_2d (H25~H29, 4 slices)&lt;br /&gt;
| 3.7TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| n-5-32&lt;br /&gt;
| 10.20.5.32:8008&lt;br /&gt;
| Enamine_REAL_Space_June_2020_M41B (an~az, 13 slices), zinc22_2d (H30, 1 slice)&lt;br /&gt;
| 5.6TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| n-5-33&lt;br /&gt;
| 10.20.5.33:8008&lt;br /&gt;
| Enamine_REAL_Space_June_2020_M41B (ba-bl, 12 slices)&lt;br /&gt;
| 5.3TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Arthor Local 8081 (Datasets all local to samekh/nun)===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! CentOS 7 Machine&lt;br /&gt;
! Port&lt;br /&gt;
! Database&lt;br /&gt;
! Total File Size&lt;br /&gt;
! Arthor Install Location&lt;br /&gt;
! Round Table Data Directory&lt;br /&gt;
! Active&lt;br /&gt;
|-&lt;br /&gt;
| samekh&lt;br /&gt;
| 10.20.0.41:8081&lt;br /&gt;
| Enamine_REAL_Q2-2020-All-13B (26 slices)&lt;br /&gt;
| 4.5TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_local_8081/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| nun&lt;br /&gt;
| 10.20.0.40:8081&lt;br /&gt;
| Enamine_REAL_Space_June_2020_S41B (aa-ae, 5 slices), Enamine_REAL_Space_June_2020_M41B (aa-an, 14 slices)&lt;br /&gt;
| 4.3TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_local_8081/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Customizing Arthor Code to our needs ==&lt;br /&gt;
If the Arthor server is launched through &amp;quot;java -jar /opt/nextmove/arthor/arthor-3.3.2-centos7/java/arthor.jar --httpPort=&amp;lt;port&amp;gt;&amp;quot;, find the directory where this command was executed. Once found, run &#039;&#039;&#039;ls -a&#039;&#039;&#039;; there should be a hidden directory called .extract.&lt;br /&gt;
=== Change Arthor Download Size (Hardcoded) ===&lt;br /&gt;
 vim .extract/webapps/ROOT/WEB-INF/static/js/index.js&lt;br /&gt;
 ?#arthor_sdf_link //search this&lt;br /&gt;
 *Look for 0!==arguments[0]?arguments[0]:&amp;lt;number&amp;gt;&lt;br /&gt;
 *Change the number to the desired amount&lt;br /&gt;
=== Change Arthor Download Size (Options) ===&lt;br /&gt;
 vim .extract/webapps/ROOT/WEB-INF/static/index.html&lt;br /&gt;
 search this: “res-download”&lt;br /&gt;
 in the div with the class=”dropdown-content”&lt;br /&gt;
 add these link options and change the number accordingly:&lt;br /&gt;
                &amp;lt;a id=&amp;quot;arthor_tsv_link&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt;&amp;lt;i class=&amp;quot;fa fa-download&amp;quot;&amp;gt;&amp;lt;/i&amp;gt; TSV-500&amp;lt;/a&amp;gt;&lt;br /&gt;
                &amp;lt;a id=&amp;quot;arthor_tsv_link_1000&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt;&amp;lt;i class=&amp;quot;fa fa-download&amp;quot;&amp;gt;&amp;lt;/i&amp;gt; TSV-1,000&amp;lt;/a&amp;gt;&lt;br /&gt;
                &amp;lt;a id=&amp;quot;arthor_tsv_link_5000&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt;&amp;lt;i class=&amp;quot;fa fa-download&amp;quot;&amp;gt;&amp;lt;/i&amp;gt; TSV-5,000&amp;lt;/a&amp;gt;&lt;br /&gt;
                &amp;lt;a id=&amp;quot;arthor_csv_link&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt;&amp;lt;i class=&amp;quot;fa fa-download&amp;quot;&amp;gt;&amp;lt;/i&amp;gt; CSV-500&amp;lt;/a&amp;gt;&lt;br /&gt;
                &amp;lt;a id=&amp;quot;arthor_csv_link_1000&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt;&amp;lt;i class=&amp;quot;fa fa-download&amp;quot;&amp;gt;&amp;lt;/i&amp;gt; CSV-1,000&amp;lt;/a&amp;gt;&lt;br /&gt;
                &amp;lt;a id=&amp;quot;arthor_csv_link_5000&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt;&amp;lt;i class=&amp;quot;fa fa-download&amp;quot;&amp;gt;&amp;lt;/i&amp;gt; CSV-5,000&amp;lt;/a&amp;gt;&lt;br /&gt;
                &amp;lt;a id=&amp;quot;arthor_sdf_link&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt;&amp;lt;i class=&amp;quot;fa fa-download&amp;quot;&amp;gt;&amp;lt;/i&amp;gt; SDF-500&amp;lt;/a&amp;gt;&lt;br /&gt;
                &amp;lt;a id=&amp;quot;arthor_sdf_link_1000&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt;&amp;lt;i class=&amp;quot;fa fa-download&amp;quot;&amp;gt;&amp;lt;/i&amp;gt; SDF-1,000&amp;lt;/a&amp;gt;&lt;br /&gt;
                &amp;lt;a id=&amp;quot;arthor_sdf_link_5000&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt;&amp;lt;i class=&amp;quot;fa fa-download&amp;quot;&amp;gt;&amp;lt;/i&amp;gt; SDF-5,000&amp;lt;/a&amp;gt;&lt;br /&gt;
 then vim .extract/webapps/ROOT/WEB-INF/static/js/index.js&lt;br /&gt;
 For arthor 3.3.2 search this: “function $(){”&lt;br /&gt;
 For arthor 3.3 search this: “function Fs()” or “#arthor_tsv_link”&lt;br /&gt;
 Isolate this function from the rest of the code (find where the function begins and ends)&lt;br /&gt;
 For arthor 3.3.2, edit the function for example:&lt;br /&gt;
 function $(){&lt;br /&gt;
        if(document.getElementById(&amp;quot;arthor_tsv_link&amp;quot;)) {&lt;br /&gt;
                var t=arguments.length&amp;gt;0&amp;amp;&amp;amp;void 0!==arguments[0]? &lt;br /&gt;
                      arguments[0]:500,e=i.a.param({query:arthor.query,type:arthor.type,draw:0,start:0,length:t,flags:arthor.flags}),n=arthor.url+&amp;quot;/dt/&amp;quot;+L(arthor.table)+&amp;quot;/search&amp;quot;;i() &lt;br /&gt;
                      (&amp;quot;#arthor_sdf_link&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.sdf?&amp;quot;+e),i()(&amp;quot;#arthor_tsv_link&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.tsv?&amp;quot;+e),i()(&amp;quot;#arthor_csv_link&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.csv?&amp;quot;+e)&lt;br /&gt;
        }&lt;br /&gt;
        if (document.getElementById(&amp;quot;arthor_tsv_link_1000&amp;quot;)) {&lt;br /&gt;
                var t=arguments.length&amp;gt;0&amp;amp;&amp;amp;void 0!==arguments[0]? &lt;br /&gt;
                      arguments[0]:1000,e=i.a.param({query:arthor.query,type:arthor.type,draw:0,start:0,length:t,flags:arthor.flags}),n=arthor.url+&amp;quot;/dt/&amp;quot;+L(arthor.table)+&amp;quot;/search&amp;quot;;i() &lt;br /&gt;
                      (&amp;quot;#arthor_sdf_link_1000&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.sdf?&amp;quot;+e),i()(&amp;quot;#arthor_tsv_link_1000&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.tsv?&amp;quot;+e),i()(&amp;quot;#arthor_csv_link_1000&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.csv?&amp;quot;+e)&lt;br /&gt;
        }&lt;br /&gt;
        if (document.getElementById(&amp;quot;arthor_tsv_link_5000&amp;quot;)) {&lt;br /&gt;
                var t=arguments.length&amp;gt;0&amp;amp;&amp;amp;void 0!==arguments[0]? &lt;br /&gt;
                      arguments[0]:5000,e=i.a.param({query:arthor.query,type:arthor.type,draw:0,start:0,length:t,flags:arthor.flags}),n=arthor.url+&amp;quot;/dt/&amp;quot;+L(arthor.table)+&amp;quot;/search&amp;quot;;i() &lt;br /&gt;
                     (&amp;quot;#arthor_sdf_link_5000&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.sdf?&amp;quot;+e),i()(&amp;quot;#arthor_tsv_link_5000&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.tsv?&amp;quot;+e),i()(&amp;quot;#arthor_csv_link_5000&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.csv?&amp;quot;+e)&lt;br /&gt;
        }&lt;br /&gt;
 }&lt;br /&gt;
 For arthor 3.3, edit the function for example:&lt;br /&gt;
 function Fs(){&lt;br /&gt;
        if(document.getElementById(&amp;quot;arthor_tsv_link&amp;quot;)) {&lt;br /&gt;
                var t = arguments.length&amp;gt;0&amp;amp;&amp;amp;void 0!==arguments[0]? &lt;br /&gt;
                        arguments[0]:500,e=i.a.param({query:arthor.query,type:arthor.type,draw:0,start:0,length:t,flags:arthor.flags}),n=arthor.url+&amp;quot;/dt/&amp;quot;+xs(arthor.table)+&amp;quot;/search&amp;quot;;i() &lt;br /&gt;
                        (&amp;quot;#arthor_sdf_link&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.sdf?&amp;quot;+e),i()(&amp;quot;#arthor_tsv_link&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.tsv?&amp;quot;+e),i()(&amp;quot;#arthor_csv_link&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.csv?&amp;quot;+e)&lt;br /&gt;
        }&lt;br /&gt;
        if (document.getElementById(&amp;quot;arthor_tsv_link_1000&amp;quot;)) {&lt;br /&gt;
                var t=arguments.length&amp;gt;0&amp;amp;&amp;amp;void 0!==arguments[0]? &lt;br /&gt;
                      arguments[0]:1000,e=i.a.param({query:arthor.query,type:arthor.type,draw:0,start:0,length:t,flags:arthor.flags}),n=arthor.url+&amp;quot;/dt/&amp;quot;+xs(arthor.table)+&amp;quot;/search&amp;quot;;i() &lt;br /&gt;
                      (&amp;quot;#arthor_sdf_link_1000&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.sdf?&amp;quot;+e),i()(&amp;quot;#arthor_tsv_link_1000&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.tsv?&amp;quot;+e),i()(&amp;quot;#arthor_csv_link_1000&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.csv?&amp;quot;+e)&lt;br /&gt;
        }&lt;br /&gt;
        if (document.getElementById(&amp;quot;arthor_tsv_link_5000&amp;quot;)) {&lt;br /&gt;
                var t=arguments.length&amp;gt;0&amp;amp;&amp;amp;void 0!==arguments[0]? &lt;br /&gt;
                      arguments[0]:5000,e=i.a.param({query:arthor.query,type:arthor.type,draw:0,start:0,length:t,flags:arthor.flags}),n=arthor.url+&amp;quot;/dt/&amp;quot;+xs(arthor.table)+&amp;quot;/search&amp;quot;;i() &lt;br /&gt;
                      (&amp;quot;#arthor_sdf_link_5000&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.sdf?&amp;quot;+e),i()(&amp;quot;#arthor_tsv_link_5000&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.tsv?&amp;quot;+e),i()(&amp;quot;#arthor_csv_link_5000&amp;quot;).attr(&amp;quot;href&amp;quot;,n+&amp;quot;.csv?&amp;quot;+e)&lt;br /&gt;
        }&lt;br /&gt;
 }&lt;br /&gt;
=== Take out Similarity Button ===&lt;br /&gt;
 vim .extract/webapps/ROOT/WEB-INF/static/index.html&lt;br /&gt;
 ?Similarity //search this&lt;br /&gt;
 *Comment out this line &#039;&#039;&#039;&amp;lt; li value=&amp;quot;Similarity&amp;quot; onclick=&amp;quot;setSearchType(this)&amp;quot; class=&amp;quot;first&amp;quot;&amp;gt; Similarity &amp;lt;/li &amp;gt;&#039;&#039;&#039; //spaces added at the beginning and end to prevent the wiki from converting it&lt;br /&gt;
=== Hyperlink to zinc20 ===&lt;br /&gt;
 vim .extract/webapps/ROOT/WEB-INF/static/js/index.js&lt;br /&gt;
 ?table_name //search this&lt;br /&gt;
 *find this line &amp;quot;&amp;lt; b&amp;gt;&amp;quot; + h + &amp;quot;&amp;lt; /b&amp;gt;&amp;quot;&lt;br /&gt;
 *replace with &#039;&#039;&#039;&amp;quot;&amp;lt; b&amp;gt;&amp;lt;a target=&#039;_blank&#039; href=&#039;https://zinc20.docking.org/substances/&amp;quot;+h+&amp;quot;&#039;&amp;gt;&amp;quot; + h + &amp;quot;&amp;lt;/a&amp;gt;&amp;lt;/b &amp;gt;&amp;quot;&#039;&#039;&#039; //spaces added at the beginning and end to prevent the wiki from converting it&lt;br /&gt;
=== Make Input Box Work ===&lt;br /&gt;
 At the end of the Arthor config file add this:&lt;br /&gt;
 Resolver=https://sw.docking.org/util/smi2mol?smi=%s&lt;/div&gt;</summary>
		<author><name>Jgutie11</name></author>
	</entry>
	<entry>
		<id>http://wiki.docking.org/index.php?title=List_of_Docking.org_websites&amp;diff=13240</id>
		<title>List of Docking.org websites</title>
		<link rel="alternate" type="text/html" href="http://wiki.docking.org/index.php?title=List_of_Docking.org_websites&amp;diff=13240"/>
		<updated>2021-02-03T00:43:36Z</updated>

		<summary type="html">&lt;p&gt;Jgutie11: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==List==&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;col&amp;quot;|  Website &lt;br /&gt;
! scope=&amp;quot;col&amp;quot;|  Server &lt;br /&gt;
! scope=&amp;quot;col&amp;quot;|  Port &lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;row&amp;quot;| Advisor&lt;br /&gt;
| Unknown&lt;br /&gt;
| Unknown&lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;row&amp;quot;| BKSLab&lt;br /&gt;
| gimel&lt;br /&gt;
| 5002&lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;row&amp;quot;| DOCKBlaster&lt;br /&gt;
| Unknown&lt;br /&gt;
| Unknown&lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;row&amp;quot;| DUDE&lt;br /&gt;
| Unknown&lt;br /&gt;
| Unknown&lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;row&amp;quot;| Excipient&lt;br /&gt;
| gimel2&lt;br /&gt;
| 8093&lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;row&amp;quot;| IrwinLab&lt;br /&gt;
| gimel&lt;br /&gt;
| 5004&lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;row&amp;quot;| Tools18(tldr)&lt;br /&gt;
| gimel2&lt;br /&gt;
| 5000&lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;row&amp;quot;| Transportal&lt;br /&gt;
| Omega&lt;br /&gt;
| 8123&lt;br /&gt;
|}&lt;br /&gt;
{ Sysadmin }&lt;br /&gt;
[[Category:Delete]]&lt;/div&gt;</summary>
		<author><name>Jgutie11</name></author>
	</entry>
	<entry>
		<id>http://wiki.docking.org/index.php?title=Replacing_failed_disk_on_Server&amp;diff=13201</id>
		<title>Replacing failed disk on Server</title>
		<link rel="alternate" type="text/html" href="http://wiki.docking.org/index.php?title=Replacing_failed_disk_on_Server&amp;diff=13201"/>
		<updated>2021-01-26T01:18:21Z</updated>

		<summary type="html">&lt;p&gt;Jgutie11: /* Auto-check Disk Machines Python Script */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== How to check if a disk failed==&lt;br /&gt;
===Check for the light on disk===&lt;br /&gt;
&lt;br /&gt;
Solid Yellow =&amp;gt; Fail&lt;br /&gt;
&lt;br /&gt;
Blinking Yellow =&amp;gt; Predictive Failure (going to fail soon)&lt;br /&gt;
&lt;br /&gt;
Green =&amp;gt; Normal&lt;br /&gt;
=== Replace disk instruction===&lt;br /&gt;
* Determine which machine the disk belongs to&lt;br /&gt;
* Press the red button on the disk to turn it off.&lt;br /&gt;
* Gently pull the disk out a little (NOT all the way) and wait about 10 seconds for it to stop spinning before pulling it all the way out.&lt;br /&gt;
* Find a replacement disk with the same specs&lt;br /&gt;
* Carefully unscrew the disk from the disk holder (if the disk holder on the replacement is the same, you don&#039;t have to).&lt;br /&gt;
&lt;br /&gt;
=== Auto-check Disk Machines Python Script ===&lt;br /&gt;
In gimel5, there is a python script that runs every day at 12am through crontab under s_jjg.&lt;br /&gt;
&lt;br /&gt;
The file is located at: &#039;&#039;&#039;/nfs/home/jjg/python_scripts/check_for_failed_disks.py&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
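A crontab entry along these lines would produce that schedule (the interpreter path is hypothetical; check the actual crontab on gimel5):&lt;br /&gt;
   0 0 * * * /usr/bin/python3 /nfs/home/jjg/python_scripts/check_for_failed_disks.py&lt;br /&gt;
&lt;br /&gt;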
This script ssh-es into the machines below and runs a command to list the status of disks. (Does not include cluster 0 machines)&lt;br /&gt;
 machines: abacus, n-9-22, tsadi, lamed, qof, zayin, n-1-30, n-1-109, n-1-113, shin&lt;br /&gt;
 data pools: db2, db3, db5, db4, ex1, ex2, ex3, ex4, ex5, ex6, ex7, ex8, ex9, exa, exb, exc, exd, db&lt;br /&gt;
&lt;br /&gt;
If any of the listed machines reports that a disk has failed, the script will email the sysadmins.&lt;br /&gt;
&lt;br /&gt;
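The script itself is not reproduced here, but its shape is roughly the Python 3 sketch below. The machine list comes from above, &#039;zpool status&#039; matches the ZFS section later on this page, and the email addresses are placeholders; the real script may differ per machine type:&lt;br /&gt;
&lt;br /&gt;
   import smtplib&lt;br /&gt;
   import subprocess&lt;br /&gt;
   from email.message import EmailMessage&lt;br /&gt;
   &lt;br /&gt;
   # Machines to check (from the list above)&lt;br /&gt;
   machines = [&amp;quot;abacus&amp;quot;, &amp;quot;n-9-22&amp;quot;, &amp;quot;tsadi&amp;quot;, &amp;quot;lamed&amp;quot;, &amp;quot;qof&amp;quot;,&lt;br /&gt;
               &amp;quot;zayin&amp;quot;, &amp;quot;n-1-30&amp;quot;, &amp;quot;n-1-109&amp;quot;, &amp;quot;n-1-113&amp;quot;, &amp;quot;shin&amp;quot;]&lt;br /&gt;
   &lt;br /&gt;
   failures = []&lt;br /&gt;
   for host in machines:&lt;br /&gt;
      # ssh in and list pool state; a real run needs passwordless keys set up&lt;br /&gt;
      out = subprocess.run([&amp;quot;ssh&amp;quot;, host, &amp;quot;zpool status&amp;quot;],&lt;br /&gt;
                           capture_output=True, text=True).stdout&lt;br /&gt;
      if any(bad in out for bad in (&amp;quot;DEGRADED&amp;quot;, &amp;quot;FAULTED&amp;quot;, &amp;quot;UNAVAIL&amp;quot;)):&lt;br /&gt;
         failures.append((host, out))&lt;br /&gt;
   &lt;br /&gt;
   if failures:&lt;br /&gt;
      msg = EmailMessage()&lt;br /&gt;
      msg[&amp;quot;Subject&amp;quot;] = &amp;quot;Disk failure detected&amp;quot;&lt;br /&gt;
      msg[&amp;quot;From&amp;quot;] = &amp;quot;cron@gimel5&amp;quot;  # placeholder&lt;br /&gt;
      msg[&amp;quot;To&amp;quot;] = &amp;quot;sysadmins@example.org&amp;quot;  # placeholder&lt;br /&gt;
      msg.set_content(&amp;quot;\n\n&amp;quot;.join(h + &amp;quot;\n&amp;quot; + o for h, o in failures))&lt;br /&gt;
      with smtplib.SMTP(&amp;quot;localhost&amp;quot;) as s:&lt;br /&gt;
         s.send_message(msg)&lt;br /&gt;
&lt;br /&gt;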
Example output:&lt;br /&gt;
  pool: db2&lt;br /&gt;
 state: ONLINE&lt;br /&gt;
  pool: db3&lt;br /&gt;
 state: ONLINE&lt;br /&gt;
  pool: db5&lt;br /&gt;
 state: ONLINE&lt;br /&gt;
  pool: db4&lt;br /&gt;
 state: ONLINE&lt;br /&gt;
  pool: ex1&lt;br /&gt;
 state: ONLINE&lt;br /&gt;
  pool: ex2&lt;br /&gt;
 state: ONLINE&lt;br /&gt;
  pool: ex3&lt;br /&gt;
 state: ONLINE&lt;br /&gt;
  pool: ex4&lt;br /&gt;
 state: ONLINE&lt;br /&gt;
  pool: ex5&lt;br /&gt;
 state: ONLINE&lt;br /&gt;
  pool: ex6&lt;br /&gt;
 state: ONLINE&lt;br /&gt;
  pool: ex7&lt;br /&gt;
 state: ONLINE&lt;br /&gt;
  pool: ex8&lt;br /&gt;
 state: ONLINE&lt;br /&gt;
  pool: ex9&lt;br /&gt;
 state: ONLINE&lt;br /&gt;
  pool: exa&lt;br /&gt;
 state: ONLINE&lt;br /&gt;
  pool: exb&lt;br /&gt;
 state: ONLINE&lt;br /&gt;
  pool: exc&lt;br /&gt;
 state: ONLINE&lt;br /&gt;
  pool: exd&lt;br /&gt;
 state: ONLINE&lt;br /&gt;
 ----------------------------------------------------------------------------&lt;br /&gt;
 pool: db2&lt;br /&gt;
 EID:Slt DID State DG     Size Intf Med SED PI SeSz Model            Sp Type&lt;br /&gt;
 8:0      35 Onln   0 3.637 TB SAS  HDD N   N  512B ST4000NM0023     U  -    &lt;br /&gt;
 8:1      10 Onln   0 3.637 TB SAS  HDD N   N  512B ST4000NM0023     U  -    &lt;br /&gt;
 8:2      18 Onln   0 3.637 TB SAS  HDD N   N  512B ST4000NM0023     U  -    &lt;br /&gt;
 8:3      12 Onln   0 3.637 TB SAS  HDD N   N  512B ST4000NM0023     U  -    &lt;br /&gt;
 8:4      16 Onln   0 3.637 TB SAS  HDD N   N  512B ST4000NM0023     U  -    &lt;br /&gt;
 8:5      11 Onln   0 3.637 TB SAS  HDD N   N  512B ST4000NM0023     U  -    &lt;br /&gt;
 8:6      32 Onln   0 3.637 TB SAS  HDD N   N  512B ST4000NM0023     U  -    &lt;br /&gt;
 8:7      13 Onln   0 3.637 TB SAS  HDD N   N  512B ST4000NM0023     U  -    &lt;br /&gt;
 8:8      41 Onln   0 3.637 TB SAS  HDD N   N  512B ST4000NM0023     U  -    &lt;br /&gt;
 8:9      33 Onln   0 3.637 TB SAS  HDD N   N  512B WD4001FYYG-01SL3 U  -    &lt;br /&gt;
 8:10     20 Onln   0 3.637 TB SAS  HDD N   N  512B ST4000NM0023     U  -    &lt;br /&gt;
 8:11     27 Onln   0 3.637 TB SAS  HDD N   N  512B ST4000NM0023     U  -    &lt;br /&gt;
 8:12     23 Onln   0 3.637 TB SAS  HDD N   N  512B ST4000NM0023     U  -    &lt;br /&gt;
 8:13     25 Onln   0 3.637 TB SAS  HDD N   N  512B ST4000NM0023     U  -    &lt;br /&gt;
 8:14     14 Onln   0 3.637 TB SAS  HDD N   N  512B ST4000NM0023     U  -    &lt;br /&gt;
 8:15     42 Onln   0 3.637 TB SAS  HDD N   N  512B ST4000NM0023     U  -    &lt;br /&gt;
 8:16     19 Onln   0 3.637 TB SAS  HDD N   N  512B ST4000NM0023     U  -    &lt;br /&gt;
 8:17     39 Onln   0 3.637 TB SAS  HDD N   N  512B ST4000NM0023     U  -    &lt;br /&gt;
 8:18     40 Onln   0 3.637 TB SAS  HDD N   N  512B MB4000JEFNC      U  -    &lt;br /&gt;
 8:19     29 Onln   0 3.637 TB SAS  HDD N   N  512B ST4000NM0023     U  -    &lt;br /&gt;
 8:20     26 Onln   0 3.637 TB SAS  HDD N   N  512B ST4000NM0023     U  -    &lt;br /&gt;
 8:21     36 Onln   0 3.637 TB SAS  HDD N   N  512B ST4000NM0023     U  -    &lt;br /&gt;
 8:22     34 Onln   0 3.637 TB SAS  HDD N   N  512B ST4000NM0023     U  -&lt;br /&gt;
&lt;br /&gt;
== How to check if a disk has failed or is installed correctly==&lt;br /&gt;
=== On Cluster 0 &#039;s machines ===&lt;br /&gt;
1. Log into gimel as root &lt;br /&gt;
 $ ssh root@sgehead1.bkslab.org&lt;br /&gt;
2. Log in as root to the machine that you determined earlier &lt;br /&gt;
 $ ssh root@&amp;lt;machine_name&amp;gt;&lt;br /&gt;
 Example: RAID 3,6,7 belongs to nfshead2&lt;br /&gt;
3. Run this command&lt;br /&gt;
 $ /opt/compaq/hpacucli/bld/hpacucli ctrl all show config&lt;br /&gt;
&lt;br /&gt;
 Output Example:&lt;br /&gt;
 Smart Array P800 in Slot 1                (sn: PAFGF0N9SXQ0MX)&lt;br /&gt;
   array A (SATA, Unused Space: 0 MB)&lt;br /&gt;
      logicaldrive 1 (5.5 TB, RAID 1+0, OK)&lt;br /&gt;
      physicaldrive 1E:1:1 (port 1E:box 1:bay 1, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 1E:1:2 (port 1E:box 1:bay 2, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 1E:1:3 (port 1E:box 1:bay 3, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 1E:1:4 (port 1E:box 1:bay 4, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 1E:1:5 (port 1E:box 1:bay 5, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 1E:1:6 (port 1E:box 1:bay 6, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 1E:1:7 (port 1E:box 1:bay 7, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 1E:1:8 (port 1E:box 1:bay 8, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 1E:1:9 (port 1E:box 1:bay 9, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 1E:1:10 (port 1E:box 1:bay 10, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 1E:1:11 (port 1E:box 1:bay 11, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 1E:1:12 (port 1E:box 1:bay 12, SATA, 1 TB, OK)&lt;br /&gt;
   array B (SATA, Unused Space: 0 MB)&lt;br /&gt;
      logicaldrive 2 (5.5 TB, RAID 1+0, OK)&lt;br /&gt;
      physicaldrive 2E:1:1 (port 2E:box 1:bay 1, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 2E:1:2 (port 2E:box 1:bay 2, SATA, 1 TB, Predictive Failure)&lt;br /&gt;
      physicaldrive 2E:1:3 (port 2E:box 1:bay 3, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 2E:1:4 (port 2E:box 1:bay 4, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 2E:1:5 (port 2E:box 1:bay 5, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 2E:1:6 (port 2E:box 1:bay 6, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 2E:1:7 (port 2E:box 1:bay 7, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 2E:1:8 (port 2E:box 1:bay 8, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 2E:1:9 (port 2E:box 1:bay 9, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 2E:1:10 (port 2E:box 1:bay 10, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 2E:1:11 (port 2E:box 1:bay 11, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 2E:1:12 (port 2E:box 1:bay 12, SATA, 1 TB, OK)&lt;br /&gt;
   array C (SATA, Unused Space: 0 MB)&lt;br /&gt;
      logicaldrive 3 (5.5 TB, RAID 1+0, Ready for Rebuild)&lt;br /&gt;
      physicaldrive 2E:2:1 (port 2E:box 2:bay 1, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 2E:2:2 (port 2E:box 2:bay 2, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 2E:2:3 (port 2E:box 2:bay 3, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 2E:2:4 (port 2E:box 2:bay 4, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 2E:2:5 (port 2E:box 2:bay 5, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 2E:2:6 (port 2E:box 2:bay 6, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 2E:2:7 (port 2E:box 2:bay 7, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 2E:2:8 (port 2E:box 2:bay 8, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 2E:2:9 (port 2E:box 2:bay 9, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 2E:2:10 (port 2E:box 2:bay 10, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 2E:2:11 (port 2E:box 2:bay 11, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 2E:2:12 (port 2E:box 2:bay 12, SATA, 1 TB, OK)&lt;br /&gt;
   Expander 243 (WWID: 50014380031A4B00, Port: 1E, Box: 1)&lt;br /&gt;
   Expander 245 (WWID: 5001438005396E00, Port: 2E, Box: 2)&lt;br /&gt;
   Expander 246 (WWID: 500143800460A600, Port: 2E, Box: 1)&lt;br /&gt;
   Expander 248 (WWID: 50014380055E913F)&lt;br /&gt;
   Enclosure SEP (Vendor ID HP, Model MSA60) 241 (WWID: 50014380031A4B25, Port: 1E, Box: 1)&lt;br /&gt;
   Enclosure SEP (Vendor ID HP, Model MSA60) 242 (WWID: 5001438005396E25, Port: 2E, Box: 2)&lt;br /&gt;
   Enclosure SEP (Vendor ID HP, Model MSA60) 244 (WWID: 500143800460A625, Port: 2E, Box: 1)&lt;br /&gt;
   SEP (Vendor ID HP, Model P800) 247 (WWID: 50014380055E913E)&lt;br /&gt;
&lt;br /&gt;
=== On &#039;&#039;&#039;shin&#039;&#039;&#039;===&lt;br /&gt;
As root&lt;br /&gt;
 /opt/MegaRAID/storcli/storcli64 /c0 /eall /sall show all&lt;br /&gt;
 &amp;lt;pre&amp;gt;Drive /c0/e8/s18 :&lt;br /&gt;
 ================&lt;br /&gt;
&lt;br /&gt;
 -----------------------------------------------------------------------------&lt;br /&gt;
EID:Slt DID State  DG     Size Intf Med SED PI SeSz Model            Sp Type &lt;br /&gt;
-----------------------------------------------------------------------------&lt;br /&gt;
8:18     24 Failed  0 3.637 TB SAS  HDD N   N  512B ST4000NM0023     U  -    &lt;br /&gt;
-----------------------------------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
EID-Enclosure Device ID|Slt-Slot No.|DID-Device ID|DG-DriveGroup&lt;br /&gt;
DHS-Dedicated Hot Spare|UGood-Unconfigured Good|GHS-Global Hotspare&lt;br /&gt;
UBad-Unconfigured Bad|Onln-Online|Offln-Offline|Intf-Interface&lt;br /&gt;
Med-Media Type|SED-Self Encryptive Drive|PI-Protection Info&lt;br /&gt;
SeSz-Sector Size|Sp-Spun|U-Up|D-Down|T-Transition|F-Foreign&lt;br /&gt;
UGUnsp-Unsupported|UGShld-UnConfigured shielded|HSPShld-Hotspare shielded&lt;br /&gt;
CFShld-Configured shielded|Cpybck-CopyBack|CBShld-Copyback Shielded&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Drive /c0/e8/s18 - Detailed Information :&lt;br /&gt;
=======================================&lt;br /&gt;
&lt;br /&gt;
Drive /c0/e8/s18 State :&lt;br /&gt;
======================&lt;br /&gt;
Shield Counter = 0&lt;br /&gt;
Media Error Count = 0&lt;br /&gt;
Other Error Count = 16&lt;br /&gt;
BBM Error Count = 0&lt;br /&gt;
Drive Temperature =  32C (89.60 F)&lt;br /&gt;
Predictive Failure Count = 0&lt;br /&gt;
S.M.A.R.T alert flagged by drive = No&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Drive /c0/e8/s18 Device attributes :&lt;br /&gt;
==================================&lt;br /&gt;
SN = Z1Z2S2TL0000C4216E9V&lt;br /&gt;
Manufacturer Id = SEAGATE &lt;br /&gt;
Model Number = ST4000NM0023    &lt;br /&gt;
NAND Vendor = NA&lt;br /&gt;
WWN = 5000C50057DB2A28&lt;br /&gt;
Firmware Revision = 0003&lt;br /&gt;
Firmware Release Number = 03290003&lt;br /&gt;
Raw size = 3.638 TB [0x1d1c0beb0 Sectors]&lt;br /&gt;
Coerced size = 3.637 TB [0x1d1b00000 Sectors]&lt;br /&gt;
Non Coerced size = 3.637 TB [0x1d1b0beb0 Sectors]&lt;br /&gt;
Device Speed = 6.0Gb/s&lt;br /&gt;
Link Speed = 6.0Gb/s&lt;br /&gt;
Write cache = N/A&lt;br /&gt;
Logical Sector Size = 512B&lt;br /&gt;
Physical Sector Size = 512B&lt;br /&gt;
Connector Name = Port 0 - 3 &amp;amp; Port 4 - 7 &amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== On ZFS machines ===&lt;br /&gt;
 $ zpool status&lt;br /&gt;
For instruction on how to identify and replace failed disk on ZFS system. [http://wiki.docking.org/index.php/Zfs#Example:_Fixing_degraded_pool.2C_replacing_faulted_disk &#039;&#039;&#039;Read here&#039;&#039;&#039;]&lt;br /&gt;
&lt;br /&gt;
=== On Any Raid1 Configurations ===&lt;br /&gt;
Steps to fix a hard drive failure that is in a raid 1 configuration:&lt;br /&gt;
&lt;br /&gt;
The following demonstrates what a failed disk looks like:&lt;br /&gt;
&lt;br /&gt;
  [root@myServer ~]# cat /proc/mdstat &amp;lt;br /&amp;gt;&lt;br /&gt;
  Personalities : [raid1] &amp;lt;br/&amp;gt;&lt;br /&gt;
  md0 : active raid1 sdb1[0] sda1[2](F) &amp;lt;br/&amp;gt;&lt;br /&gt;
  128384 blocks [2/1] [U_]  &amp;lt;br/&amp;gt;&lt;br /&gt;
  md1 : active raid1 sdb2[0] sda2[2](F) &amp;lt;br/&amp;gt;&lt;br /&gt;
  16779776 blocks [2/1] [U_] &amp;lt;br/&amp;gt;&lt;br /&gt;
  md2 : active raid1 sdb3[0] sda3[2](F) &amp;lt;br/&amp;gt;&lt;br /&gt;
  139379840 blocks [2/1] [U_] &amp;lt;br/&amp;gt;    &lt;br /&gt;
  unused devices: &amp;lt;none&amp;gt; &amp;lt;br/&amp;gt;&lt;br /&gt;
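&lt;br /&gt;
The (F) flags above mark sda&#039;s partitions as faulty. To double-check which member of an array failed, you can also query it directly:   &amp;lt;br/&amp;gt;&lt;br /&gt;
  [root@myServer ~]# mdadm --detail /dev/md0   &amp;lt;br/&amp;gt;&lt;br /&gt;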
&lt;br /&gt;
  [root@myServer ~]# smartctl -a /dev/sda   &amp;lt;br/&amp;gt;              &lt;br /&gt;
  smartctl 5.42 2011-10-20 r3458 [x86_64-linux-2.6.18-371.1.2.el5] (local build)   &amp;lt;br/&amp;gt;&lt;br /&gt;
  Copyright (C) 2002-11 by Bruce Allen, http://smartmontools.sourceforge.net   &amp;lt;br/&amp;gt;&lt;br /&gt;
  Short INQUIRY response, skip product id   &amp;lt;br/&amp;gt;&lt;br /&gt;
  A mandatory SMART command failed: exiting. To continue, add one or more &#039;-T permissive&#039; options.   &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
  [root@myServer ~]# smartctl -a /dev/sdb  &amp;lt;br/&amp;gt;&lt;br /&gt;
  smartctl 5.42 2011-10-20 r3458 [x86_64-linux-2.6.18-371.1.2.el5] (local build)   &amp;lt;br/&amp;gt;&lt;br /&gt;
  Copyright (C) 2002-11 by Bruce Allen, http://smartmontools.sourceforge.net   &amp;lt;br/&amp;gt;&lt;br /&gt;
  === START OF INFORMATION SECTION ===    &amp;lt;br/&amp;gt;&lt;br /&gt;
  Model Family:     Seagate Barracuda 7200.10    &amp;lt;br/&amp;gt;&lt;br /&gt;
  Device Model:     ST3160815AS    &amp;lt;br/&amp;gt;&lt;br /&gt;
  Serial Number:    9RA6DZP8     &amp;lt;br/&amp;gt;&lt;br /&gt;
  Firmware Version: 4.AAB    &amp;lt;br/&amp;gt;&lt;br /&gt;
  User Capacity:    160,041,885,696 bytes [160 GB]   &amp;lt;br/&amp;gt;&lt;br /&gt;
  Sector Size:      512 bytes logical/physical   &amp;lt;br/&amp;gt;&lt;br /&gt;
  Device is:        In smartctl database [for details use: -P show]   &amp;lt;br/&amp;gt;&lt;br /&gt;
  ATA Version is:   7   &amp;lt;br/&amp;gt;&lt;br /&gt;
  ATA Standard is:  Exact ATA specification draft version not indicated   &amp;lt;br/&amp;gt;&lt;br /&gt;
  Local Time is:    Mon Sep  8 15:50:48 2014 PDT  &amp;lt;br/&amp;gt;&lt;br /&gt;
  SMART support is: Available - device has SMART capability.   &amp;lt;br/&amp;gt;&lt;br /&gt;
  SMART support is: Enabled   &amp;lt;br/&amp;gt;&lt;br /&gt;
  === START OF READ SMART DATA SECTION ===   &amp;lt;br/&amp;gt; &lt;br /&gt;
  SMART overall-health self-assessment test result: PASSED   &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
There is a lot more that gets printed, but I cut it out.  &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
So /dev/sda has clearly failed.   &amp;lt;br/&amp;gt;&lt;br /&gt;
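&lt;br /&gt;
If you still want whatever SMART data the dying disk will report, the error message above suggests retrying with a tolerance option:   &amp;lt;br/&amp;gt;&lt;br /&gt;
  [root@myServer ~]# smartctl -a -T permissive /dev/sda   &amp;lt;br/&amp;gt;&lt;br /&gt;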
&lt;br /&gt;
Take note of the GOOD disk&#039;s serial number so you leave that one in when you replace the failed disk:   &amp;lt;br/&amp;gt; &lt;br /&gt;
  Serial Number:    9RA6DZP8   &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Mark and remove failed disk from raid:   &amp;lt;br/&amp;gt; &lt;br /&gt;
&lt;br /&gt;
  [root@myServer ~]# mdadm --manage /dev/md0 --fail /dev/sda1   &amp;lt;br/&amp;gt;&lt;br /&gt;
  mdadm: set /dev/sda1 faulty in /dev/md0   &amp;lt;br/&amp;gt;&lt;br /&gt;
  [root@myServer ~]# mdadm --manage /dev/md1 --fail /dev/sda2   &amp;lt;br/&amp;gt; &lt;br /&gt;
  mdadm: set /dev/sda2 faulty in /dev/md1   &amp;lt;br/&amp;gt;&lt;br /&gt;
  [root@myServer ~]# mdadm --manage /dev/md2 --fail /dev/sda3   &amp;lt;br/&amp;gt;&lt;br /&gt;
  mdadm: set /dev/sda3 faulty in /dev/md2   &amp;lt;br/&amp;gt;&lt;br /&gt;
  [root@myServer ~]# mdadm --manage /dev/md0 --remove /dev/sda1   &amp;lt;br/&amp;gt;&lt;br /&gt;
  mdadm: hot removed /dev/sda1   &amp;lt;br/&amp;gt;&lt;br /&gt;
  [root@myServer ~]# mdadm --manage /dev/md1 --remove /dev/sda2   &amp;lt;br/&amp;gt;&lt;br /&gt;
  mdadm: hot removed /dev/sda2   &amp;lt;br/&amp;gt;&lt;br /&gt;
  [root@myServer ~]# mdadm --manage /dev/md2 --remove /dev/sda3   &amp;lt;br/&amp;gt;&lt;br /&gt;
  mdadm: hot removed /dev/sda3   &amp;lt;br/&amp;gt;&lt;br /&gt;
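&lt;br /&gt;
Note: mdadm accepts both actions in one command, so each partition can equivalently be failed and removed in a single step:   &amp;lt;br/&amp;gt;&lt;br /&gt;
  [root@myServer ~]# mdadm /dev/md0 --fail /dev/sda1 --remove /dev/sda1   &amp;lt;br/&amp;gt;&lt;br /&gt;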
&lt;br /&gt;
Make sure grub is installed on the good disk and that grub.conf is updated:&lt;br /&gt;
&lt;br /&gt;
  [root@myServer ~]# grub-install /dev/sdb   &amp;lt;br/&amp;gt;&lt;br /&gt;
  Installation finished. No error reported.   &amp;lt;br/&amp;gt;&lt;br /&gt;
  This is the contents of the device map /boot/grub/device.map.   &amp;lt;br/&amp;gt;&lt;br /&gt;
  Check if this is correct or not. &amp;lt;br/&amp;gt;&lt;br /&gt;
  If any of the lines is incorrect, fix it and re-run the script `grub-install&#039;.   &amp;lt;br/&amp;gt;&lt;br /&gt;
  This device map was generated by anaconda   &amp;lt;br/&amp;gt;&lt;br /&gt;
  (hd0)     /dev/sda   &amp;lt;br/&amp;gt;&lt;br /&gt;
  (hd1)     /dev/sdb   &amp;lt;br/&amp;gt;&lt;br /&gt;
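&lt;br /&gt;
If the device map looks stale (e.g. it still lists the dead disk), legacy grub can regenerate it before you continue:   &amp;lt;br/&amp;gt;&lt;br /&gt;
  [root@myServer ~]# grub-install --recheck /dev/sdb   &amp;lt;br/&amp;gt;&lt;br /&gt;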
&lt;br /&gt;
Take note of which hd device corresponds to the good disk, i.e. hd1 in this case.   &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
  [root@myServer ~]# vim /boot/grub/menu.lst  &amp;lt;br/&amp;gt;&lt;br /&gt;
  Add fallback=1 right after default=0  &amp;lt;br/&amp;gt;&lt;br /&gt;
  Go to the bottom section where you should find some kernel stanzas.   &amp;lt;br/&amp;gt;&lt;br /&gt;
  Copy the first of them and paste the stanza before the first existing stanza; replace root (hd0,0) with root (hd1,0)  &amp;lt;br/&amp;gt;&lt;br /&gt;
  Should look like this:  &amp;lt;br/&amp;gt;&lt;br /&gt;
    [...]   &amp;lt;br/&amp;gt;&lt;br /&gt;
    title CentOS (2.6.18-128.el5)  &amp;lt;br/&amp;gt;&lt;br /&gt;
    root (hd1,0)  &amp;lt;br/&amp;gt;&lt;br /&gt;
    kernel /vmlinuz-2.6.18-128.el5 ro root=/dev/VolGroup00/LogVol00  &amp;lt;br/&amp;gt;&lt;br /&gt;
    initrd /initrd-2.6.18-128.el5.img  &amp;lt;br/&amp;gt;&lt;br /&gt;
    title CentOS (2.6.18-128.el5)  &amp;lt;br/&amp;gt;&lt;br /&gt;
    root (hd0,0)  &amp;lt;br/&amp;gt;&lt;br /&gt;
    kernel /vmlinuz-2.6.18-128.el5 ro root=/dev/  &amp;lt;br/&amp;gt;&lt;br /&gt;
    initrd /initrd-2.6.18-128.el5.img  &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Save and quit   &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
  [root@myServer ~]# mv /boot/initramfs-$(uname -r).img /boot/initramfs-$(uname -r).img.bak   &amp;lt;br/&amp;gt;&lt;br /&gt;
  [root@myServer ~]# mkinitrd /boot/initramfs-$(uname -r).img $(uname -r)   &amp;lt;br/&amp;gt;&lt;br /&gt;
  [root@myServer ~]# init 0   &amp;lt;br/&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Swap the bad drive with the new drive and boot the machine.   &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Once it&#039;s booted:   &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Check the device names with cat /proc/mdstat and/or fdisk -l.   &amp;lt;br/&amp;gt;&lt;br /&gt;
The newly installed drive on myServer was named /dev/sda.   &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
  [root@myServer ~]# modprobe raid1   &amp;lt;br/&amp;gt;&lt;br /&gt;
  [root@myServer ~]# modprobe linear   &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Copy the partitions from one disk to the other:&lt;br /&gt;
&lt;br /&gt;
  [root@myServer ~]# sfdisk -d /dev/sdb | sfdisk --force /dev/sda   &amp;lt;br/&amp;gt;&lt;br /&gt;
  [root@myServer ~]# sfdisk -l =&amp;gt; sanity check   &amp;lt;br/&amp;gt;&lt;br /&gt;
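&lt;br /&gt;
To confirm the two partition tables now match, dump both and compare:   &amp;lt;br/&amp;gt;&lt;br /&gt;
  [root@myServer ~]# sfdisk -d /dev/sda   &amp;lt;br/&amp;gt;&lt;br /&gt;
  [root@myServer ~]# sfdisk -d /dev/sdb   &amp;lt;br/&amp;gt;&lt;br /&gt;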
&lt;br /&gt;
Add the new disk to the raid array:&lt;br /&gt;
&lt;br /&gt;
  [root@myServer ~]# mdadm --manage /dev/md0 --add /dev/sda1   &amp;lt;br/&amp;gt;&lt;br /&gt;
  mdadm: added /dev/sda1   &amp;lt;br/&amp;gt;&lt;br /&gt;
  [root@myServer ~]# mdadm --manage /dev/md1 --add /dev/sda2   &amp;lt;br/&amp;gt;&lt;br /&gt;
  mdadm: added /dev/sda2  &amp;lt;br/&amp;gt;&lt;br /&gt;
  [root@myServer ~]# mdadm --manage /dev/md2 --add /dev/sda3   &amp;lt;br/&amp;gt;&lt;br /&gt;
  mdadm: added /dev/sda3   &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Sanity check:&lt;br /&gt;
  [root@myServer ~]# cat /proc/mdstat  &amp;lt;br/&amp;gt;&lt;br /&gt;
  Personalities : [raid1] [linear]    &amp;lt;br/&amp;gt;&lt;br /&gt;
  md0 : active raid1 sda1[1] sdb1[0]   &amp;lt;br/&amp;gt;&lt;br /&gt;
  128384 blocks [2/2] [UU]   &amp;lt;br/&amp;gt;      &lt;br /&gt;
  md1 : active raid1 sda2[2] sdb2[0]   &amp;lt;br/&amp;gt;&lt;br /&gt;
  16779776 blocks [2/1] [U_]   &amp;lt;br/&amp;gt;&lt;br /&gt;
  [&amp;gt;....................]  recovery =  3.2% (548864/16779776) finish=8.8min speed=30492K/sec   &amp;lt;br/&amp;gt;&lt;br /&gt;
  md2 : active raid1 sda3[2] sdb3[0]   &amp;lt;br/&amp;gt;&lt;br /&gt;
  139379840 blocks [2/1] [U_]   &amp;lt;br/&amp;gt;&lt;br /&gt;
  resync=DELAYED   &amp;lt;br/&amp;gt;     &lt;br /&gt;
  unused devices: &amp;lt;none&amp;gt;  &amp;lt;br/&amp;gt;&lt;br /&gt;
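&lt;br /&gt;
To watch the rebuild progress update live:   &amp;lt;br/&amp;gt;&lt;br /&gt;
  [root@myServer ~]# watch -n 5 cat /proc/mdstat   &amp;lt;br/&amp;gt;&lt;br /&gt;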
&lt;br /&gt;
That&#039;s it! :)   &amp;lt;br/&amp;gt;&lt;br /&gt;
[[Category:Ben]]&lt;br /&gt;
[[Category:Sysadmin]]&lt;br /&gt;
[[Category:Tutorials]]&lt;/div&gt;</summary>
		<author><name>Jgutie11</name></author>
	</entry>
	<entry>
		<id>http://wiki.docking.org/index.php?title=Replacing_failed_disk_on_Server&amp;diff=13200</id>
		<title>Replacing failed disk on Server</title>
		<link rel="alternate" type="text/html" href="http://wiki.docking.org/index.php?title=Replacing_failed_disk_on_Server&amp;diff=13200"/>
		<updated>2021-01-26T00:46:08Z</updated>

		<summary type="html">&lt;p&gt;Jgutie11: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== How to check if Disk failed==&lt;br /&gt;
===Check for the light on disk===&lt;br /&gt;
&lt;br /&gt;
Solid Yellow =&amp;gt; Fail&lt;br /&gt;
&lt;br /&gt;
Blinking Yellow =&amp;gt; Predictive Failure (going to fail soon)&lt;br /&gt;
&lt;br /&gt;
Green =&amp;gt; Normal&lt;br /&gt;
=== Disk replacement instructions ===&lt;br /&gt;
* Determine which machine the disk belongs to.&lt;br /&gt;
* Press the red button on the disk to turn it off.&lt;br /&gt;
* Gently pull the disk out a little bit (NOT all the way) and wait ~10 seconds until it stops spinning before pulling it all the way out.&lt;br /&gt;
* Find a replacement disk with the same specs.&lt;br /&gt;
* Carefully unscrew the disk from the disk holder (you can skip this if the replacement&#039;s holder is the same).&lt;br /&gt;
&lt;br /&gt;
=== Auto-check Disk Machines Python Script ===&lt;br /&gt;
On gimel5, there is a Python script that runs every day at 12am via crontab under s_jjg.&lt;br /&gt;
&lt;br /&gt;
The file is located at: &#039;&#039;&#039;/nfs/home/jjg/python_scripts/check_for_failed_disks.py&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This script ssh-es into each of the machines below and runs a command to list its disk status. (Does not include Cluster 0 machines.) A rough sketch of the idea follows the machine list below.&lt;br /&gt;
 abacus&lt;br /&gt;
       db2&lt;br /&gt;
       db3&lt;br /&gt;
       db5&lt;br /&gt;
 n-9-22&lt;br /&gt;
       db4&lt;br /&gt;
 tsadi&lt;br /&gt;
       ex1&lt;br /&gt;
       ex2&lt;br /&gt;
       ex3&lt;br /&gt;
       ex4&lt;br /&gt;
 lamed&lt;br /&gt;
       ex5&lt;br /&gt;
       ex6&lt;br /&gt;
       ex7&lt;br /&gt;
       ex8&lt;br /&gt;
 qof&lt;br /&gt;
       ex9&lt;br /&gt;
 zayin&lt;br /&gt;
       exa&lt;br /&gt;
 n-1-30&lt;br /&gt;
       exb&lt;br /&gt;
 n-1-109&lt;br /&gt;
       exc&lt;br /&gt;
 n-1-113&lt;br /&gt;
       exd&lt;br /&gt;
 shin&lt;br /&gt;
       db&lt;br /&gt;
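&lt;br /&gt;
As a rough shell sketch of what the script does (the per-machine command here is a stand-in for illustration; the command the real script runs may differ):&lt;br /&gt;
 for h in abacus n-9-22 tsadi lamed qof zayin n-1-30 n-1-109 n-1-113 shin; do&lt;br /&gt;
     # ask each machine for its RAID/disk status; skip hosts that are unreachable&lt;br /&gt;
     ssh root@$h &#039;hostname; cat /proc/mdstat&#039; 2&amp;gt;/dev/null&lt;br /&gt;
 done&lt;br /&gt;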
&lt;br /&gt;
== How to check if a disk has failed or is installed correctly ==&lt;br /&gt;
=== On Cluster 0&#039;s machines ===&lt;br /&gt;
1. Log into gimel as root: &lt;br /&gt;
 $ ssh root@sgehead1.bkslab.org&lt;br /&gt;
2. Log in as root to the machine you identified earlier: &lt;br /&gt;
 $ ssh root@&amp;lt;machine_name&amp;gt;&lt;br /&gt;
 Example: RAID 3,6,7 belong to nfshead2&lt;br /&gt;
3. Run this command&lt;br /&gt;
 $ /opt/compaq/hpacucli/bld/hpacucli ctrl all show config&lt;br /&gt;
&lt;br /&gt;
 Output Example:&lt;br /&gt;
 Smart Array P800 in Slot 1                (sn: PAFGF0N9SXQ0MX)&lt;br /&gt;
   array A (SATA, Unused Space: 0 MB)&lt;br /&gt;
      logicaldrive 1 (5.5 TB, RAID 1+0, OK)&lt;br /&gt;
      physicaldrive 1E:1:1 (port 1E:box 1:bay 1, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 1E:1:2 (port 1E:box 1:bay 2, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 1E:1:3 (port 1E:box 1:bay 3, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 1E:1:4 (port 1E:box 1:bay 4, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 1E:1:5 (port 1E:box 1:bay 5, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 1E:1:6 (port 1E:box 1:bay 6, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 1E:1:7 (port 1E:box 1:bay 7, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 1E:1:8 (port 1E:box 1:bay 8, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 1E:1:9 (port 1E:box 1:bay 9, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 1E:1:10 (port 1E:box 1:bay 10, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 1E:1:11 (port 1E:box 1:bay 11, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 1E:1:12 (port 1E:box 1:bay 12, SATA, 1 TB, OK)&lt;br /&gt;
   array B (SATA, Unused Space: 0 MB)&lt;br /&gt;
      logicaldrive 2 (5.5 TB, RAID 1+0, OK)&lt;br /&gt;
      physicaldrive 2E:1:1 (port 2E:box 1:bay 1, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 2E:1:2 (port 2E:box 1:bay 2, SATA, 1 TB, Predictive Failure)&lt;br /&gt;
      physicaldrive 2E:1:3 (port 2E:box 1:bay 3, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 2E:1:4 (port 2E:box 1:bay 4, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 2E:1:5 (port 2E:box 1:bay 5, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 2E:1:6 (port 2E:box 1:bay 6, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 2E:1:7 (port 2E:box 1:bay 7, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 2E:1:8 (port 2E:box 1:bay 8, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 2E:1:9 (port 2E:box 1:bay 9, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 2E:1:10 (port 2E:box 1:bay 10, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 2E:1:11 (port 2E:box 1:bay 11, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 2E:1:12 (port 2E:box 1:bay 12, SATA, 1 TB, OK)&lt;br /&gt;
   array C (SATA, Unused Space: 0 MB)&lt;br /&gt;
      logicaldrive 3 (5.5 TB, RAID 1+0, Ready for Rebuild)&lt;br /&gt;
      physicaldrive 2E:2:1 (port 2E:box 2:bay 1, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 2E:2:2 (port 2E:box 2:bay 2, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 2E:2:3 (port 2E:box 2:bay 3, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 2E:2:4 (port 2E:box 2:bay 4, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 2E:2:5 (port 2E:box 2:bay 5, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 2E:2:6 (port 2E:box 2:bay 6, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 2E:2:7 (port 2E:box 2:bay 7, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 2E:2:8 (port 2E:box 2:bay 8, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 2E:2:9 (port 2E:box 2:bay 9, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 2E:2:10 (port 2E:box 2:bay 10, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 2E:2:11 (port 2E:box 2:bay 11, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 2E:2:12 (port 2E:box 2:bay 12, SATA, 1 TB, OK)&lt;br /&gt;
   Expander 243 (WWID: 50014380031A4B00, Port: 1E, Box: 1)&lt;br /&gt;
   Expander 245 (WWID: 5001438005396E00, Port: 2E, Box: 2)&lt;br /&gt;
   Expander 246 (WWID: 500143800460A600, Port: 2E, Box: 1)&lt;br /&gt;
   Expander 248 (WWID: 50014380055E913F)&lt;br /&gt;
   Enclosure SEP (Vendor ID HP, Model MSA60) 241 (WWID: 50014380031A4B25, Port: 1E, Box: 1)&lt;br /&gt;
   Enclosure SEP (Vendor ID HP, Model MSA60) 242 (WWID: 5001438005396E25, Port: 2E, Box: 2)&lt;br /&gt;
   Enclosure SEP (Vendor ID HP, Model MSA60) 244 (WWID: 500143800460A625, Port: 2E, Box: 1)&lt;br /&gt;
   SEP (Vendor ID HP, Model P800) 247 (WWID: 50014380055E913E)&lt;br /&gt;
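&lt;br /&gt;
To drill into a single suspect drive (e.g. the Predictive Failure drive above), hpacucli also accepts a per-drive query; the slot and drive IDs here are taken from this example output:&lt;br /&gt;
 $ /opt/compaq/hpacucli/bld/hpacucli ctrl slot=1 pd 2E:1:2 show detail&lt;br /&gt;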
&lt;br /&gt;
=== On &#039;&#039;&#039;shin&#039;&#039;&#039;===&lt;br /&gt;
As root&lt;br /&gt;
 /opt/MegaRAID/storcli/storcli64 /c0 /eall /sall show all&lt;br /&gt;
 &amp;lt;pre&amp;gt;Drive /c0/e8/s18 :&lt;br /&gt;
 ================&lt;br /&gt;
&lt;br /&gt;
 -----------------------------------------------------------------------------&lt;br /&gt;
EID:Slt DID State  DG     Size Intf Med SED PI SeSz Model            Sp Type &lt;br /&gt;
-----------------------------------------------------------------------------&lt;br /&gt;
8:18     24 Failed  0 3.637 TB SAS  HDD N   N  512B ST4000NM0023     U  -    &lt;br /&gt;
-----------------------------------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
EID-Enclosure Device ID|Slt-Slot No.|DID-Device ID|DG-DriveGroup&lt;br /&gt;
DHS-Dedicated Hot Spare|UGood-Unconfigured Good|GHS-Global Hotspare&lt;br /&gt;
UBad-Unconfigured Bad|Onln-Online|Offln-Offline|Intf-Interface&lt;br /&gt;
Med-Media Type|SED-Self Encryptive Drive|PI-Protection Info&lt;br /&gt;
SeSz-Sector Size|Sp-Spun|U-Up|D-Down|T-Transition|F-Foreign&lt;br /&gt;
UGUnsp-Unsupported|UGShld-UnConfigured shielded|HSPShld-Hotspare shielded&lt;br /&gt;
CFShld-Configured shielded|Cpybck-CopyBack|CBShld-Copyback Shielded&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Drive /c0/e8/s18 - Detailed Information :&lt;br /&gt;
=======================================&lt;br /&gt;
&lt;br /&gt;
Drive /c0/e8/s18 State :&lt;br /&gt;
======================&lt;br /&gt;
Shield Counter = 0&lt;br /&gt;
Media Error Count = 0&lt;br /&gt;
Other Error Count = 16&lt;br /&gt;
BBM Error Count = 0&lt;br /&gt;
Drive Temperature =  32C (89.60 F)&lt;br /&gt;
Predictive Failure Count = 0&lt;br /&gt;
S.M.A.R.T alert flagged by drive = No&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Drive /c0/e8/s18 Device attributes :&lt;br /&gt;
==================================&lt;br /&gt;
SN = Z1Z2S2TL0000C4216E9V&lt;br /&gt;
Manufacturer Id = SEAGATE &lt;br /&gt;
Model Number = ST4000NM0023    &lt;br /&gt;
NAND Vendor = NA&lt;br /&gt;
WWN = 5000C50057DB2A28&lt;br /&gt;
Firmware Revision = 0003&lt;br /&gt;
Firmware Release Number = 03290003&lt;br /&gt;
Raw size = 3.638 TB [0x1d1c0beb0 Sectors]&lt;br /&gt;
Coerced size = 3.637 TB [0x1d1b00000 Sectors]&lt;br /&gt;
Non Coerced size = 3.637 TB [0x1d1b0beb0 Sectors]&lt;br /&gt;
Device Speed = 6.0Gb/s&lt;br /&gt;
Link Speed = 6.0Gb/s&lt;br /&gt;
Write cache = N/A&lt;br /&gt;
Logical Sector Size = 512B&lt;br /&gt;
Physical Sector Size = 512B&lt;br /&gt;
Connector Name = Port 0 - 3 &amp;amp; Port 4 - 7 &amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== On ZFS machines ===&lt;br /&gt;
 $ zpool status&lt;br /&gt;
For instructions on how to identify and replace a failed disk on a ZFS system, [http://wiki.docking.org/index.php/Zfs#Example:_Fixing_degraded_pool.2C_replacing_faulted_disk &#039;&#039;&#039;read here&#039;&#039;&#039;].&lt;br /&gt;
&lt;br /&gt;
=== On Any RAID 1 Configuration ===&lt;br /&gt;
Steps to fix a failed hard drive in a RAID 1 configuration:&lt;br /&gt;
&lt;br /&gt;
The following demonstrates what a failed disk looks like:&lt;br /&gt;
&lt;br /&gt;
  [root@myServer ~]# cat /proc/mdstat &amp;lt;br /&amp;gt;&lt;br /&gt;
  Personalities : [raid1] &amp;lt;br/&amp;gt;&lt;br /&gt;
  md0 : active raid1 sdb1[0] sda1[2](F) &amp;lt;br/&amp;gt;&lt;br /&gt;
  128384 blocks [2/1] [U_]  &amp;lt;br/&amp;gt;&lt;br /&gt;
  md1 : active raid1 sdb2[0] sda2[2](F) &amp;lt;br/&amp;gt;&lt;br /&gt;
  16779776 blocks [2/1] [U_] &amp;lt;br/&amp;gt;&lt;br /&gt;
  md2 : active raid1 sdb3[0] sda3[2](F) &amp;lt;br/&amp;gt;&lt;br /&gt;
  139379840 blocks [2/1] [U_] &amp;lt;br/&amp;gt;    &lt;br /&gt;
  unused devices: &amp;lt;none&amp;gt; &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
  [root@myServer ~]# smartctl -a /dev/sda   &amp;lt;br/&amp;gt;              &lt;br /&gt;
  smartctl 5.42 2011-10-20 r3458 [x86_64-linux-2.6.18-371.1.2.el5] (local build)   &amp;lt;br/&amp;gt;&lt;br /&gt;
  Copyright (C) 2002-11 by Bruce Allen, http://smartmontools.sourceforge.net   &amp;lt;br/&amp;gt;&lt;br /&gt;
  Short INQUIRY response, skip product id   &amp;lt;br/&amp;gt;&lt;br /&gt;
  A mandatory SMART command failed: exiting. To continue, add one or more &#039;-T permissive&#039; options.   &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
  [root@myServer ~]# smartctl -a /dev/sdb  &amp;lt;br/&amp;gt;&lt;br /&gt;
  smartctl 5.42 2011-10-20 r3458 [x86_64-linux-2.6.18-371.1.2.el5] (local build)   &amp;lt;br/&amp;gt;&lt;br /&gt;
  Copyright (C) 2002-11 by Bruce Allen, http://smartmontools.sourceforge.net   &amp;lt;br/&amp;gt;&lt;br /&gt;
  === START OF INFORMATION SECTION ===    &amp;lt;br/&amp;gt;&lt;br /&gt;
  Model Family:     Seagate Barracuda 7200.10    &amp;lt;br/&amp;gt;&lt;br /&gt;
  Device Model:     ST3160815AS    &amp;lt;br/&amp;gt;&lt;br /&gt;
  Serial Number:    9RA6DZP8     &amp;lt;br/&amp;gt;&lt;br /&gt;
  Firmware Version: 4.AAB    &amp;lt;br/&amp;gt;&lt;br /&gt;
  User Capacity:    160,041,885,696 bytes [160 GB]   &amp;lt;br/&amp;gt;&lt;br /&gt;
  Sector Size:      512 bytes logical/physical   &amp;lt;br/&amp;gt;&lt;br /&gt;
  Device is:        In smartctl database [for details use: -P show]   &amp;lt;br/&amp;gt;&lt;br /&gt;
  ATA Version is:   7   &amp;lt;br/&amp;gt;&lt;br /&gt;
  ATA Standard is:  Exact ATA specification draft version not indicated   &amp;lt;br/&amp;gt;&lt;br /&gt;
  Local Time is:    Mon Sep  8 15:50:48 2014 PDT  &amp;lt;br/&amp;gt;&lt;br /&gt;
  SMART support is: Available - device has SMART capability.   &amp;lt;br/&amp;gt;&lt;br /&gt;
  SMART support is: Enabled   &amp;lt;br/&amp;gt;&lt;br /&gt;
  === START OF READ SMART DATA SECTION ===   &amp;lt;br/&amp;gt; &lt;br /&gt;
  SMART overall-health self-assessment test result: PASSED   &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
There is a lot more that gets printed, but I cut it out.  &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
So /dev/sda has clearly failed.   &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Take note of the GOOD disk&#039;s serial number so you leave that one in when you replace the failed disk:   &amp;lt;br/&amp;gt; &lt;br /&gt;
  Serial Number:    9RA6DZP8   &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Mark and remove failed disk from raid:   &amp;lt;br/&amp;gt; &lt;br /&gt;
&lt;br /&gt;
  [root@myServer ~]# mdadm --manage /dev/md0 --fail /dev/sda1   &amp;lt;br/&amp;gt;&lt;br /&gt;
  mdadm: set /dev/sda1 faulty in /dev/md0   &amp;lt;br/&amp;gt;&lt;br /&gt;
  [root@myServer ~]# mdadm --manage /dev/md1 --fail /dev/sda2   &amp;lt;br/&amp;gt; &lt;br /&gt;
  mdadm: set /dev/sda2 faulty in /dev/md1   &amp;lt;br/&amp;gt;&lt;br /&gt;
  [root@myServer ~]# mdadm --manage /dev/md2 --fail /dev/sda3   &amp;lt;br/&amp;gt;&lt;br /&gt;
  mdadm: set /dev/sda3 faulty in /dev/md2   &amp;lt;br/&amp;gt;&lt;br /&gt;
  [root@myServer ~]# mdadm --manage /dev/md0 --remove /dev/sda1   &amp;lt;br/&amp;gt;&lt;br /&gt;
  mdadm: hot removed /dev/sda1   &amp;lt;br/&amp;gt;&lt;br /&gt;
  [root@myServer ~]# mdadm --manage /dev/md1 --remove /dev/sda2   &amp;lt;br/&amp;gt;&lt;br /&gt;
  mdadm: hot removed /dev/sda2   &amp;lt;br/&amp;gt;&lt;br /&gt;
  [root@myServer ~]# mdadm --manage /dev/md2 --remove /dev/sda3   &amp;lt;br/&amp;gt;&lt;br /&gt;
  mdadm: hot removed /dev/sda3   &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Make sure grub is installed on the good disk and that grub.conf is updated:&lt;br /&gt;
&lt;br /&gt;
  [root@myServer ~]# grub-install /dev/sdb   &amp;lt;br/&amp;gt;&lt;br /&gt;
  Installation finished. No error reported.   &amp;lt;br/&amp;gt;&lt;br /&gt;
  This is the contents of the device map /boot/grub/device.map.   &amp;lt;br/&amp;gt;&lt;br /&gt;
  Check if this is correct or not. &amp;lt;br/&amp;gt;&lt;br /&gt;
  If any of the lines is incorrect, fix it and re-run the script `grub-install&#039;.   &amp;lt;br/&amp;gt;&lt;br /&gt;
  This device map was generated by anaconda   &amp;lt;br/&amp;gt;&lt;br /&gt;
  (hd0)     /dev/sda   &amp;lt;br/&amp;gt;&lt;br /&gt;
  (hd1)     /dev/sdb   &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Take note of which hd device corresponds to the good disk, i.e. hd1 in this case.   &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
  [root@myServer ~]# vim /boot/grub/menu.lst  &amp;lt;br/&amp;gt;&lt;br /&gt;
  Add fallback=1 right after default=0  &amp;lt;br/&amp;gt;&lt;br /&gt;
  Go to the bottom section where you should find some kernel stanzas.   &amp;lt;br/&amp;gt;&lt;br /&gt;
  Copy the first of them and paste the stanza before the first existing stanza; replace root (hd0,0) with root (hd1,0)  &amp;lt;br/&amp;gt;&lt;br /&gt;
  Should look like this:  &amp;lt;br/&amp;gt;&lt;br /&gt;
    [...]   &amp;lt;br/&amp;gt;&lt;br /&gt;
    title CentOS (2.6.18-128.el5)  &amp;lt;br/&amp;gt;&lt;br /&gt;
    root (hd1,0)  &amp;lt;br/&amp;gt;&lt;br /&gt;
    kernel /vmlinuz-2.6.18-128.el5 ro root=/dev/VolGroup00/LogVol00  &amp;lt;br/&amp;gt;&lt;br /&gt;
    initrd /initrd-2.6.18-128.el5.img  &amp;lt;br/&amp;gt;&lt;br /&gt;
    title CentOS (2.6.18-128.el5)  &amp;lt;br/&amp;gt;&lt;br /&gt;
    root (hd0,0)  &amp;lt;br/&amp;gt;&lt;br /&gt;
    kernel /vmlinuz-2.6.18-128.el5 ro root=/dev/  &amp;lt;br/&amp;gt;&lt;br /&gt;
    initrd /initrd-2.6.18-128.el5.img  &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Save and quit   &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
  [root@myServer ~]# mv /boot/initramfs-$(uname -r).img /boot/initramfs-$(uname -r).img.bak   &amp;lt;br/&amp;gt;&lt;br /&gt;
  [root@myServer ~]# mkinitrd /boot/initramfs-$(uname -r).img $(uname -r)   &amp;lt;br/&amp;gt;&lt;br /&gt;
  [root@myServer ~]# init 0   &amp;lt;br/&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Swap the bad drive with the new drive and boot the machine.   &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Once it&#039;s booted:   &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Check the device names with cat /proc/mdstat and/or fdisk -l.   &amp;lt;br/&amp;gt;&lt;br /&gt;
The newly installed drive on myServer was named /dev/sda.   &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
  [root@myServer ~]# modprobe raid1   &amp;lt;br/&amp;gt;&lt;br /&gt;
  [root@myServer ~]# modprobe linear   &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Copy the partitions from one disk to the other:&lt;br /&gt;
&lt;br /&gt;
  [root@myServer ~]# sfdisk -d /dev/sdb | sfdisk --force /dev/sda   &amp;lt;br/&amp;gt;&lt;br /&gt;
  [root@myServer ~]# sfdisk -l =&amp;gt; sanity check   &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Add the new disk to the raid array:&lt;br /&gt;
&lt;br /&gt;
  [root@myServer ~]# mdadm --manage /dev/md0 --add /dev/sda1   &amp;lt;br/&amp;gt;&lt;br /&gt;
  mdadm: added /dev/sda1   &amp;lt;br/&amp;gt;&lt;br /&gt;
  [root@myServer ~]# mdadm --manage /dev/md1 --add /dev/sda2   &amp;lt;br/&amp;gt;&lt;br /&gt;
  mdadm: added /dev/sda2  &amp;lt;br/&amp;gt;&lt;br /&gt;
  [root@myServer ~]# mdadm --manage /dev/md2 --add /dev/sda3   &amp;lt;br/&amp;gt;&lt;br /&gt;
  mdadm: added /dev/sda3   &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Sanity check:&lt;br /&gt;
  [root@myServer ~]# cat /proc/mdstat  &amp;lt;br/&amp;gt;&lt;br /&gt;
  Personalities : [raid1] [linear]    &amp;lt;br/&amp;gt;&lt;br /&gt;
  md0 : active raid1 sda1[1] sdb1[0]   &amp;lt;br/&amp;gt;&lt;br /&gt;
  128384 blocks [2/2] [UU]   &amp;lt;br/&amp;gt;      &lt;br /&gt;
  md1 : active raid1 sda2[2] sdb2[0]   &amp;lt;br/&amp;gt;&lt;br /&gt;
  16779776 blocks [2/1] [U_]   &amp;lt;br/&amp;gt;&lt;br /&gt;
  [&amp;gt;....................]  recovery =  3.2% (548864/16779776) finish=8.8min speed=30492K/sec   &amp;lt;br/&amp;gt;&lt;br /&gt;
  md2 : active raid1 sda3[2] sdb3[0]   &amp;lt;br/&amp;gt;&lt;br /&gt;
  139379840 blocks [2/1] [U_]   &amp;lt;br/&amp;gt;&lt;br /&gt;
  resync=DELAYED   &amp;lt;br/&amp;gt;     &lt;br /&gt;
  unused devices: &amp;lt;none&amp;gt;  &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
That&#039;s it! :)   &amp;lt;br/&amp;gt;&lt;br /&gt;
[[Category:Ben]]&lt;br /&gt;
[[Category:Sysadmin]]&lt;br /&gt;
[[Category:Tutorials]]&lt;/div&gt;</summary>
		<author><name>Jgutie11</name></author>
	</entry>
	<entry>
		<id>http://wiki.docking.org/index.php?title=Replacing_failed_disk_on_Server&amp;diff=13199</id>
		<title>Replacing failed disk on Server</title>
		<link rel="alternate" type="text/html" href="http://wiki.docking.org/index.php?title=Replacing_failed_disk_on_Server&amp;diff=13199"/>
		<updated>2021-01-26T00:38:01Z</updated>

		<summary type="html">&lt;p&gt;Jgutie11: /* How to check if Disk failed */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== How to check if Disk failed==&lt;br /&gt;
===Check for the light on disk===&lt;br /&gt;
&lt;br /&gt;
Solid Yellow =&amp;gt; Fail&lt;br /&gt;
&lt;br /&gt;
Blinking Yellow =&amp;gt; Predictive Failure (going to fail soon)&lt;br /&gt;
&lt;br /&gt;
Green =&amp;gt; Normal&lt;br /&gt;
=== Disk replacement instructions ===&lt;br /&gt;
* Determine which machine the disk belongs to.&lt;br /&gt;
* Press the red button on the disk to turn it off.&lt;br /&gt;
* Gently pull the disk out a little bit (NOT all the way) and wait ~10 seconds until it stops spinning before pulling it all the way out.&lt;br /&gt;
* Find a replacement disk with the same specs.&lt;br /&gt;
* Carefully unscrew the disk from the disk holder (you can skip this if the replacement&#039;s holder is the same).&lt;br /&gt;
&lt;br /&gt;
=== Auto-check Disk Machines Python Script ===&lt;br /&gt;
On gimel5, there is a Python script that runs every day at 12am via crontab under s_jjg.&lt;br /&gt;
&lt;br /&gt;
The file is located at: &#039;&#039;&#039;/nfs/home/jjg/python_scripts/check_for_failed_disks.py&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This script ssh-es into each of the machines below and runs a command to list its disk status.&lt;br /&gt;
 abacus&lt;br /&gt;
       db2&lt;br /&gt;
       db3&lt;br /&gt;
       db5&lt;br /&gt;
 n-9-22&lt;br /&gt;
       db4&lt;br /&gt;
 tsadi&lt;br /&gt;
       ex1&lt;br /&gt;
       ex2&lt;br /&gt;
       ex3&lt;br /&gt;
       ex4&lt;br /&gt;
 lamed&lt;br /&gt;
       ex5&lt;br /&gt;
       ex6&lt;br /&gt;
       ex7&lt;br /&gt;
       ex8&lt;br /&gt;
 qof&lt;br /&gt;
       ex9&lt;br /&gt;
 zayin&lt;br /&gt;
       exa&lt;br /&gt;
 n-1-30&lt;br /&gt;
       exb&lt;br /&gt;
 n-1-109&lt;br /&gt;
       exc&lt;br /&gt;
 n-1-113&lt;br /&gt;
       exd&lt;br /&gt;
 shin&lt;br /&gt;
       db&lt;br /&gt;
&lt;br /&gt;
== How to check if a disk has failed or is installed correctly ==&lt;br /&gt;
=== On Cluster 0&#039;s machines ===&lt;br /&gt;
1. Log into gimel as root: &lt;br /&gt;
 $ ssh root@sgehead1.bkslab.org&lt;br /&gt;
2. Log in as root to the machine you identified earlier: &lt;br /&gt;
 $ ssh root@&amp;lt;machine_name&amp;gt;&lt;br /&gt;
 Example: RAID 3,6,7 belong to nfshead2&lt;br /&gt;
3. Run this command&lt;br /&gt;
 $ /opt/compaq/hpacucli/bld/hpacucli ctrl all show config&lt;br /&gt;
&lt;br /&gt;
 Output Example:&lt;br /&gt;
 Smart Array P800 in Slot 1                (sn: PAFGF0N9SXQ0MX)&lt;br /&gt;
   array A (SATA, Unused Space: 0 MB)&lt;br /&gt;
      logicaldrive 1 (5.5 TB, RAID 1+0, OK)&lt;br /&gt;
      physicaldrive 1E:1:1 (port 1E:box 1:bay 1, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 1E:1:2 (port 1E:box 1:bay 2, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 1E:1:3 (port 1E:box 1:bay 3, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 1E:1:4 (port 1E:box 1:bay 4, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 1E:1:5 (port 1E:box 1:bay 5, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 1E:1:6 (port 1E:box 1:bay 6, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 1E:1:7 (port 1E:box 1:bay 7, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 1E:1:8 (port 1E:box 1:bay 8, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 1E:1:9 (port 1E:box 1:bay 9, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 1E:1:10 (port 1E:box 1:bay 10, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 1E:1:11 (port 1E:box 1:bay 11, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 1E:1:12 (port 1E:box 1:bay 12, SATA, 1 TB, OK)&lt;br /&gt;
   array B (SATA, Unused Space: 0 MB)&lt;br /&gt;
      logicaldrive 2 (5.5 TB, RAID 1+0, OK)&lt;br /&gt;
      physicaldrive 2E:1:1 (port 2E:box 1:bay 1, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 2E:1:2 (port 2E:box 1:bay 2, SATA, 1 TB, Predictive Failure)&lt;br /&gt;
      physicaldrive 2E:1:3 (port 2E:box 1:bay 3, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 2E:1:4 (port 2E:box 1:bay 4, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 2E:1:5 (port 2E:box 1:bay 5, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 2E:1:6 (port 2E:box 1:bay 6, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 2E:1:7 (port 2E:box 1:bay 7, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 2E:1:8 (port 2E:box 1:bay 8, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 2E:1:9 (port 2E:box 1:bay 9, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 2E:1:10 (port 2E:box 1:bay 10, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 2E:1:11 (port 2E:box 1:bay 11, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 2E:1:12 (port 2E:box 1:bay 12, SATA, 1 TB, OK)&lt;br /&gt;
   array C (SATA, Unused Space: 0 MB)&lt;br /&gt;
      logicaldrive 3 (5.5 TB, RAID 1+0, Ready for Rebuild)&lt;br /&gt;
      physicaldrive 2E:2:1 (port 2E:box 2:bay 1, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 2E:2:2 (port 2E:box 2:bay 2, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 2E:2:3 (port 2E:box 2:bay 3, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 2E:2:4 (port 2E:box 2:bay 4, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 2E:2:5 (port 2E:box 2:bay 5, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 2E:2:6 (port 2E:box 2:bay 6, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 2E:2:7 (port 2E:box 2:bay 7, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 2E:2:8 (port 2E:box 2:bay 8, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 2E:2:9 (port 2E:box 2:bay 9, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 2E:2:10 (port 2E:box 2:bay 10, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 2E:2:11 (port 2E:box 2:bay 11, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 2E:2:12 (port 2E:box 2:bay 12, SATA, 1 TB, OK)&lt;br /&gt;
   Expander 243 (WWID: 50014380031A4B00, Port: 1E, Box: 1)&lt;br /&gt;
   Expander 245 (WWID: 5001438005396E00, Port: 2E, Box: 2)&lt;br /&gt;
   Expander 246 (WWID: 500143800460A600, Port: 2E, Box: 1)&lt;br /&gt;
   Expander 248 (WWID: 50014380055E913F)&lt;br /&gt;
   Enclosure SEP (Vendor ID HP, Model MSA60) 241 (WWID: 50014380031A4B25, Port: 1E, Box: 1)&lt;br /&gt;
   Enclosure SEP (Vendor ID HP, Model MSA60) 242 (WWID: 5001438005396E25, Port: 2E, Box: 2)&lt;br /&gt;
   Enclosure SEP (Vendor ID HP, Model MSA60) 244 (WWID: 500143800460A625, Port: 2E, Box: 1)&lt;br /&gt;
   SEP (Vendor ID HP, Model P800) 247 (WWID: 50014380055E913E)&lt;br /&gt;
&lt;br /&gt;
=== On &#039;&#039;&#039;shin&#039;&#039;&#039;===&lt;br /&gt;
As root&lt;br /&gt;
 /opt/MegaRAID/storcli/storcli64 /c0 /eall /sall show all&lt;br /&gt;
 &amp;lt;pre&amp;gt;Drive /c0/e8/s18 :&lt;br /&gt;
 ================&lt;br /&gt;
&lt;br /&gt;
 -----------------------------------------------------------------------------&lt;br /&gt;
EID:Slt DID State  DG     Size Intf Med SED PI SeSz Model            Sp Type &lt;br /&gt;
-----------------------------------------------------------------------------&lt;br /&gt;
8:18     24 Failed  0 3.637 TB SAS  HDD N   N  512B ST4000NM0023     U  -    &lt;br /&gt;
-----------------------------------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
EID-Enclosure Device ID|Slt-Slot No.|DID-Device ID|DG-DriveGroup&lt;br /&gt;
DHS-Dedicated Hot Spare|UGood-Unconfigured Good|GHS-Global Hotspare&lt;br /&gt;
UBad-Unconfigured Bad|Onln-Online|Offln-Offline|Intf-Interface&lt;br /&gt;
Med-Media Type|SED-Self Encryptive Drive|PI-Protection Info&lt;br /&gt;
SeSz-Sector Size|Sp-Spun|U-Up|D-Down|T-Transition|F-Foreign&lt;br /&gt;
UGUnsp-Unsupported|UGShld-UnConfigured shielded|HSPShld-Hotspare shielded&lt;br /&gt;
CFShld-Configured shielded|Cpybck-CopyBack|CBShld-Copyback Shielded&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Drive /c0/e8/s18 - Detailed Information :&lt;br /&gt;
=======================================&lt;br /&gt;
&lt;br /&gt;
Drive /c0/e8/s18 State :&lt;br /&gt;
======================&lt;br /&gt;
Shield Counter = 0&lt;br /&gt;
Media Error Count = 0&lt;br /&gt;
Other Error Count = 16&lt;br /&gt;
BBM Error Count = 0&lt;br /&gt;
Drive Temperature =  32C (89.60 F)&lt;br /&gt;
Predictive Failure Count = 0&lt;br /&gt;
S.M.A.R.T alert flagged by drive = No&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Drive /c0/e8/s18 Device attributes :&lt;br /&gt;
==================================&lt;br /&gt;
SN = Z1Z2S2TL0000C4216E9V&lt;br /&gt;
Manufacturer Id = SEAGATE &lt;br /&gt;
Model Number = ST4000NM0023    &lt;br /&gt;
NAND Vendor = NA&lt;br /&gt;
WWN = 5000C50057DB2A28&lt;br /&gt;
Firmware Revision = 0003&lt;br /&gt;
Firmware Release Number = 03290003&lt;br /&gt;
Raw size = 3.638 TB [0x1d1c0beb0 Sectors]&lt;br /&gt;
Coerced size = 3.637 TB [0x1d1b00000 Sectors]&lt;br /&gt;
Non Coerced size = 3.637 TB [0x1d1b0beb0 Sectors]&lt;br /&gt;
Device Speed = 6.0Gb/s&lt;br /&gt;
Link Speed = 6.0Gb/s&lt;br /&gt;
Write cache = N/A&lt;br /&gt;
Logical Sector Size = 512B&lt;br /&gt;
Physical Sector Size = 512B&lt;br /&gt;
Connector Name = Port 0 - 3 &amp;amp; Port 4 - 7 &amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== On ZFS machines ===&lt;br /&gt;
 $ zpool status&lt;br /&gt;
For instructions on how to identify and replace a failed disk on a ZFS system, [http://wiki.docking.org/index.php/Zfs#Example:_Fixing_degraded_pool.2C_replacing_faulted_disk &#039;&#039;&#039;read here&#039;&#039;&#039;].&lt;br /&gt;
&lt;br /&gt;
=== On Any RAID 1 Configuration ===&lt;br /&gt;
Steps to fix a failed hard drive in a RAID 1 configuration:&lt;br /&gt;
&lt;br /&gt;
The following demonstrates what a failed disk looks like:&lt;br /&gt;
&lt;br /&gt;
  [root@myServer ~]# cat /proc/mdstat &amp;lt;br /&amp;gt;&lt;br /&gt;
  Personalities : [raid1] &amp;lt;br/&amp;gt;&lt;br /&gt;
  md0 : active raid1 sdb1[0] sda1[2](F) &amp;lt;br/&amp;gt;&lt;br /&gt;
  128384 blocks [2/1] [U_]  &amp;lt;br/&amp;gt;&lt;br /&gt;
  md1 : active raid1 sdb2[0] sda2[2](F) &amp;lt;br/&amp;gt;&lt;br /&gt;
  16779776 blocks [2/1] [U_] &amp;lt;br/&amp;gt;&lt;br /&gt;
  md2 : active raid1 sdb3[0] sda3[2](F) &amp;lt;br/&amp;gt;&lt;br /&gt;
  139379840 blocks [2/1] [U_] &amp;lt;br/&amp;gt;    &lt;br /&gt;
  unused devices: &amp;lt;none&amp;gt; &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
  [root@myServer ~]# smartctl -a /dev/sda   &amp;lt;br/&amp;gt;              &lt;br /&gt;
  smartctl 5.42 2011-10-20 r3458 [x86_64-linux-2.6.18-371.1.2.el5] (local build)   &amp;lt;br/&amp;gt;&lt;br /&gt;
  Copyright (C) 2002-11 by Bruce Allen, http://smartmontools.sourceforge.net   &amp;lt;br/&amp;gt;&lt;br /&gt;
  Short INQUIRY response, skip product id   &amp;lt;br/&amp;gt;&lt;br /&gt;
  A mandatory SMART command failed: exiting. To continue, add one or more &#039;-T permissive&#039; options.   &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
  [root@myServer ~]# smartctl -a /dev/sdb  &amp;lt;br/&amp;gt;&lt;br /&gt;
  smartctl 5.42 2011-10-20 r3458 [x86_64-linux-2.6.18-371.1.2.el5] (local build)   &amp;lt;br/&amp;gt;&lt;br /&gt;
  Copyright (C) 2002-11 by Bruce Allen, http://smartmontools.sourceforge.net   &amp;lt;br/&amp;gt;&lt;br /&gt;
  === START OF INFORMATION SECTION ===    &amp;lt;br/&amp;gt;&lt;br /&gt;
  Model Family:     Seagate Barracuda 7200.10    &amp;lt;br/&amp;gt;&lt;br /&gt;
  Device Model:     ST3160815AS    &amp;lt;br/&amp;gt;&lt;br /&gt;
  Serial Number:    9RA6DZP8     &amp;lt;br/&amp;gt;&lt;br /&gt;
  Firmware Version: 4.AAB    &amp;lt;br/&amp;gt;&lt;br /&gt;
  User Capacity:    160,041,885,696 bytes [160 GB]   &amp;lt;br/&amp;gt;&lt;br /&gt;
  Sector Size:      512 bytes logical/physical   &amp;lt;br/&amp;gt;&lt;br /&gt;
  Device is:        In smartctl database [for details use: -P show]   &amp;lt;br/&amp;gt;&lt;br /&gt;
  ATA Version is:   7   &amp;lt;br/&amp;gt;&lt;br /&gt;
  ATA Standard is:  Exact ATA specification draft version not indicated   &amp;lt;br/&amp;gt;&lt;br /&gt;
  Local Time is:    Mon Sep  8 15:50:48 2014 PDT  &amp;lt;br/&amp;gt;&lt;br /&gt;
  SMART support is: Available - device has SMART capability.   &amp;lt;br/&amp;gt;&lt;br /&gt;
  SMART support is: Enabled   &amp;lt;br/&amp;gt;&lt;br /&gt;
  === START OF READ SMART DATA SECTION ===   &amp;lt;br/&amp;gt; &lt;br /&gt;
  SMART overall-health self-assessment test result: PASSED   &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
There is a lot more that gets printed, but I cut it out.  &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
So /dev/sda has clearly failed.   &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Take note of the GOOD disk&#039;s serial number so you leave that one in when you replace the failed disk:   &amp;lt;br/&amp;gt; &lt;br /&gt;
  Serial Number:    9RA6DZP8   &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Mark and remove failed disk from raid:   &amp;lt;br/&amp;gt; &lt;br /&gt;
&lt;br /&gt;
  [root@myServer ~]# mdadm --manage /dev/md0 --fail /dev/sda1   &amp;lt;br/&amp;gt;&lt;br /&gt;
  mdadm: set /dev/sda1 faulty in /dev/md0   &amp;lt;br/&amp;gt;&lt;br /&gt;
  [root@myServer ~]# mdadm --manage /dev/md1 --fail /dev/sda2   &amp;lt;br/&amp;gt; &lt;br /&gt;
  mdadm: set /dev/sda2 faulty in /dev/md1   &amp;lt;br/&amp;gt;&lt;br /&gt;
  [root@myServer ~]# mdadm --manage /dev/md2 --fail /dev/sda3   &amp;lt;br/&amp;gt;&lt;br /&gt;
  mdadm: set /dev/sda3 faulty in /dev/md2   &amp;lt;br/&amp;gt;&lt;br /&gt;
  [root@myServer ~]# mdadm --manage /dev/md0 --remove /dev/sda1   &amp;lt;br/&amp;gt;&lt;br /&gt;
  mdadm: hot removed /dev/sda1   &amp;lt;br/&amp;gt;&lt;br /&gt;
  [root@myServer ~]# mdadm --manage /dev/md1 --remove /dev/sda2   &amp;lt;br/&amp;gt;&lt;br /&gt;
  mdadm: hot removed /dev/sda2   &amp;lt;br/&amp;gt;&lt;br /&gt;
  [root@myServer ~]# mdadm --manage /dev/md2 --remove /dev/sda3   &amp;lt;br/&amp;gt;&lt;br /&gt;
  mdadm: hot removed /dev/sda3   &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Make sure grub is installed on the good disk and that grub.conf is updated:&lt;br /&gt;
&lt;br /&gt;
  [root@myServer ~]# grub-install /dev/sdb   &amp;lt;br/&amp;gt;&lt;br /&gt;
  Installation finished. No error reported.   &amp;lt;br/&amp;gt;&lt;br /&gt;
  This is the contents of the device map /boot/grub/device.map.   &amp;lt;br/&amp;gt;&lt;br /&gt;
  Check if this is correct or not. &amp;lt;br/&amp;gt;&lt;br /&gt;
  If any of the lines is incorrect, fix it and re-run the script `grub-install&#039;.   &amp;lt;br/&amp;gt;&lt;br /&gt;
  This device map was generated by anaconda   &amp;lt;br/&amp;gt;&lt;br /&gt;
  (hd0)     /dev/sda   &amp;lt;br/&amp;gt;&lt;br /&gt;
  (hd1)     /dev/sdb   &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Take note of which hd device corresponds to the good disk, i.e. hd1 in this case.   &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
  [root@myServer ~]# vim /boot/grub/menu.lst  &amp;lt;br/&amp;gt;&lt;br /&gt;
  Add fallback=1 right after default=0  &amp;lt;br/&amp;gt;&lt;br /&gt;
  Go to the bottom section where you should find some kernel stanzas.   &amp;lt;br/&amp;gt;&lt;br /&gt;
  Copy the first of them and paste the stanza before the first existing stanza; replace root (hd0,0) with root (hd1,0)  &amp;lt;br/&amp;gt;&lt;br /&gt;
  Should look like this:  &amp;lt;br/&amp;gt;&lt;br /&gt;
    [...]   &amp;lt;br/&amp;gt;&lt;br /&gt;
    title CentOS (2.6.18-128.el5)  &amp;lt;br/&amp;gt;&lt;br /&gt;
    root (hd1,0)  &amp;lt;br/&amp;gt;&lt;br /&gt;
    kernel /vmlinuz-2.6.18-128.el5 ro root=/dev/VolGroup00/LogVol00  &amp;lt;br/&amp;gt;&lt;br /&gt;
    initrd /initrd-2.6.18-128.el5.img  &amp;lt;br/&amp;gt;&lt;br /&gt;
    title CentOS (2.6.18-128.el5)  &amp;lt;br/&amp;gt;&lt;br /&gt;
    root (hd0,0)  &amp;lt;br/&amp;gt;&lt;br /&gt;
    kernel /vmlinuz-2.6.18-128.el5 ro root=/dev/  &amp;lt;br/&amp;gt;&lt;br /&gt;
    initrd /initrd-2.6.18-128.el5.img  &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Save and quit   &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
  [root@myServer ~]# mv /boot/initramfs-$(uname -r).img /boot/initramfs-$(uname -r).img.bak   &amp;lt;br/&amp;gt;&lt;br /&gt;
  [root@myServer ~]# mkinitrd /boot/initramfs-$(uname -r).img $(uname -r)   &amp;lt;br/&amp;gt;&lt;br /&gt;
  [root@myServer ~]# init 0   &amp;lt;br/&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Swap the bad drive with the new drive and boot the machine.   &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Once it&#039;s booted:   &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Check the device names with cat /proc/mdstat and/or fdisk -l.   &amp;lt;br/&amp;gt;&lt;br /&gt;
The newly installed drive on myServer was named /dev/sda.   &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
  [root@myServer ~]# modprobe raid1   &amp;lt;br/&amp;gt;&lt;br /&gt;
  [root@myServer ~]# modprobe linear   &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Copy the partitions from one disk to the other:&lt;br /&gt;
&lt;br /&gt;
  [root@myServer ~]# sfdisk -d /dev/sdb | sfdisk --force /dev/sda   &amp;lt;br/&amp;gt;&lt;br /&gt;
  [root@myServer ~]# sfdisk -l =&amp;gt; sanity check   &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Add the new disk to the raid array:&lt;br /&gt;
&lt;br /&gt;
  [root@myServer ~]# mdadm --manage /dev/md0 --add /dev/sda1   &amp;lt;br/&amp;gt;&lt;br /&gt;
  mdadm: added /dev/sda1   &amp;lt;br/&amp;gt;&lt;br /&gt;
  [root@myServer ~]# mdadm --manage /dev/md1 --add /dev/sda2   &amp;lt;br/&amp;gt;&lt;br /&gt;
  mdadm: added /dev/sda2  &amp;lt;br/&amp;gt;&lt;br /&gt;
  [root@myServer ~]# mdadm --manage /dev/md2 --add /dev/sda3   &amp;lt;br/&amp;gt;&lt;br /&gt;
  mdadm: added /dev/sda3   &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Sanity check:&lt;br /&gt;
  [root@myServer ~]# cat /proc/mdstat  &amp;lt;br/&amp;gt;&lt;br /&gt;
  Personalities : [raid1] [linear]    &amp;lt;br/&amp;gt;&lt;br /&gt;
  md0 : active raid1 sda1[1] sdb1[0]   &amp;lt;br/&amp;gt;&lt;br /&gt;
  128384 blocks [2/2] [UU]   &amp;lt;br/&amp;gt;      &lt;br /&gt;
  md1 : active raid1 sda2[2] sdb2[0]   &amp;lt;br/&amp;gt;&lt;br /&gt;
  16779776 blocks [2/1] [U_]   &amp;lt;br/&amp;gt;&lt;br /&gt;
  [&amp;gt;....................]  recovery =  3.2% (548864/16779776) finish=8.8min speed=30492K/sec   &amp;lt;br/&amp;gt;&lt;br /&gt;
  md2 : active raid1 sda3[2] sdb3[0]   &amp;lt;br/&amp;gt;&lt;br /&gt;
  139379840 blocks [2/1] [U_]   &amp;lt;br/&amp;gt;&lt;br /&gt;
  resync=DELAYED   &amp;lt;br/&amp;gt;     &lt;br /&gt;
  unused devices: &amp;lt;none&amp;gt;  &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
That&#039;s it! :)   &amp;lt;br/&amp;gt;&lt;br /&gt;
[[Category:Ben]]&lt;br /&gt;
[[Category:Sysadmin]]&lt;br /&gt;
[[Category:Tutorials]]&lt;/div&gt;</summary>
		<author><name>Jgutie11</name></author>
	</entry>
	<entry>
		<id>http://wiki.docking.org/index.php?title=Replacing_failed_disk_on_Server&amp;diff=13198</id>
		<title>Replacing failed disk on Server</title>
		<link rel="alternate" type="text/html" href="http://wiki.docking.org/index.php?title=Replacing_failed_disk_on_Server&amp;diff=13198"/>
		<updated>2021-01-26T00:37:36Z</updated>

		<summary type="html">&lt;p&gt;Jgutie11: /* How to check if Disk failed */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== How to check if Disk failed==&lt;br /&gt;
===Check for the light on disk===&lt;br /&gt;
&lt;br /&gt;
Solid Yellow =&amp;gt; Fail&lt;br /&gt;
&lt;br /&gt;
Blinking Yellow =&amp;gt; Predictive Failure (going to fail soon)&lt;br /&gt;
&lt;br /&gt;
Green =&amp;gt; Normal&lt;br /&gt;
=== Disk replacement instructions ===&lt;br /&gt;
* Determine which machine the disk belongs to.&lt;br /&gt;
* Press the red button on the disk to turn it off.&lt;br /&gt;
* Gently pull the disk out a little bit (NOT all the way) and wait ~10 seconds until it stops spinning before pulling it all the way out.&lt;br /&gt;
* Find a replacement disk with the same specs.&lt;br /&gt;
* Carefully unscrew the disk from the disk holder (you can skip this if the replacement&#039;s holder is the same).&lt;br /&gt;
&lt;br /&gt;
=== Auto-check Disk Machines Python Script ===&lt;br /&gt;
On gimel5, there is a Python script that runs every day at 12am via crontab under s_jjg.&lt;br /&gt;
&lt;br /&gt;
The file is located at: &#039;&#039;&#039;/nfs/home/jjg/python_scripts/check_for_failed_disks.py&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This script ssh-es into each of the machines below and runs a command to list its disk status.&lt;br /&gt;
 abacus&lt;br /&gt;
       db2&lt;br /&gt;
       db3&lt;br /&gt;
       db5&lt;br /&gt;
 n-9-22&lt;br /&gt;
       db4&lt;br /&gt;
 tsadi&lt;br /&gt;
       ex1&lt;br /&gt;
       ex2&lt;br /&gt;
       ex3&lt;br /&gt;
       ex4&lt;br /&gt;
 lamed&lt;br /&gt;
       ex5&lt;br /&gt;
       ex6&lt;br /&gt;
       ex7&lt;br /&gt;
       ex8&lt;br /&gt;
 qof&lt;br /&gt;
       ex9&lt;br /&gt;
 zayin&lt;br /&gt;
       exa&lt;br /&gt;
 n-1-30&lt;br /&gt;
       exb&lt;br /&gt;
 n-1-109&lt;br /&gt;
       exc&lt;br /&gt;
 n-1-113&lt;br /&gt;
       exd&lt;br /&gt;
&lt;br /&gt;
== How to check if a disk has failed or is installed correctly ==&lt;br /&gt;
=== On Cluster 0&#039;s machines ===&lt;br /&gt;
1. Log into gimel as root: &lt;br /&gt;
 $ ssh root@sgehead1.bkslab.org&lt;br /&gt;
2. Log in as root to the machine you identified earlier: &lt;br /&gt;
 $ ssh root@&amp;lt;machine_name&amp;gt;&lt;br /&gt;
 Example: RAID 3,6,7 belong to nfshead2&lt;br /&gt;
3. Run this command&lt;br /&gt;
 $ /opt/compaq/hpacucli/bld/hpacucli ctrl all show config&lt;br /&gt;
&lt;br /&gt;
 Output Example:&lt;br /&gt;
 Smart Array P800 in Slot 1                (sn: PAFGF0N9SXQ0MX)&lt;br /&gt;
   array A (SATA, Unused Space: 0 MB)&lt;br /&gt;
      logicaldrive 1 (5.5 TB, RAID 1+0, OK)&lt;br /&gt;
      physicaldrive 1E:1:1 (port 1E:box 1:bay 1, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 1E:1:2 (port 1E:box 1:bay 2, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 1E:1:3 (port 1E:box 1:bay 3, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 1E:1:4 (port 1E:box 1:bay 4, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 1E:1:5 (port 1E:box 1:bay 5, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 1E:1:6 (port 1E:box 1:bay 6, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 1E:1:7 (port 1E:box 1:bay 7, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 1E:1:8 (port 1E:box 1:bay 8, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 1E:1:9 (port 1E:box 1:bay 9, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 1E:1:10 (port 1E:box 1:bay 10, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 1E:1:11 (port 1E:box 1:bay 11, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 1E:1:12 (port 1E:box 1:bay 12, SATA, 1 TB, OK)&lt;br /&gt;
   array B (SATA, Unused Space: 0 MB)&lt;br /&gt;
      logicaldrive 2 (5.5 TB, RAID 1+0, OK)&lt;br /&gt;
      physicaldrive 2E:1:1 (port 2E:box 1:bay 1, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 2E:1:2 (port 2E:box 1:bay 2, SATA, 1 TB, Predictive Failure)&lt;br /&gt;
      physicaldrive 2E:1:3 (port 2E:box 1:bay 3, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 2E:1:4 (port 2E:box 1:bay 4, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 2E:1:5 (port 2E:box 1:bay 5, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 2E:1:6 (port 2E:box 1:bay 6, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 2E:1:7 (port 2E:box 1:bay 7, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 2E:1:8 (port 2E:box 1:bay 8, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 2E:1:9 (port 2E:box 1:bay 9, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 2E:1:10 (port 2E:box 1:bay 10, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 2E:1:11 (port 2E:box 1:bay 11, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 2E:1:12 (port 2E:box 1:bay 12, SATA, 1 TB, OK)&lt;br /&gt;
   array C (SATA, Unused Space: 0 MB)&lt;br /&gt;
      logicaldrive 3 (5.5 TB, RAID 1+0, Ready for Rebuild)&lt;br /&gt;
      physicaldrive 2E:2:1 (port 2E:box 2:bay 1, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 2E:2:2 (port 2E:box 2:bay 2, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 2E:2:3 (port 2E:box 2:bay 3, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 2E:2:4 (port 2E:box 2:bay 4, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 2E:2:5 (port 2E:box 2:bay 5, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 2E:2:6 (port 2E:box 2:bay 6, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 2E:2:7 (port 2E:box 2:bay 7, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 2E:2:8 (port 2E:box 2:bay 8, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 2E:2:9 (port 2E:box 2:bay 9, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 2E:2:10 (port 2E:box 2:bay 10, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 2E:2:11 (port 2E:box 2:bay 11, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 2E:2:12 (port 2E:box 2:bay 12, SATA, 1 TB, OK)&lt;br /&gt;
   Expander 243 (WWID: 50014380031A4B00, Port: 1E, Box: 1)&lt;br /&gt;
   Expander 245 (WWID: 5001438005396E00, Port: 2E, Box: 2)&lt;br /&gt;
   Expander 246 (WWID: 500143800460A600, Port: 2E, Box: 1)&lt;br /&gt;
   Expander 248 (WWID: 50014380055E913F)&lt;br /&gt;
   Enclosure SEP (Vendor ID HP, Model MSA60) 241 (WWID: 50014380031A4B25, Port: 1E, Box: 1)&lt;br /&gt;
   Enclosure SEP (Vendor ID HP, Model MSA60) 242 (WWID: 5001438005396E25, Port: 2E, Box: 2)&lt;br /&gt;
   Enclosure SEP (Vendor ID HP, Model MSA60) 244 (WWID: 500143800460A625, Port: 2E, Box: 1)&lt;br /&gt;
   SEP (Vendor ID HP, Model P800) 247 (WWID: 50014380055E913E)&lt;br /&gt;
&lt;br /&gt;
=== On &#039;&#039;&#039;shin&#039;&#039;&#039;===&lt;br /&gt;
As root&lt;br /&gt;
 /opt/MegaRAID/storcli/storcli64 /c0 /eall /sall show all&lt;br /&gt;
 &amp;lt;pre&amp;gt;Drive /c0/e8/s18 :&lt;br /&gt;
 ================&lt;br /&gt;
&lt;br /&gt;
 -----------------------------------------------------------------------------&lt;br /&gt;
EID:Slt DID State  DG     Size Intf Med SED PI SeSz Model            Sp Type &lt;br /&gt;
-----------------------------------------------------------------------------&lt;br /&gt;
8:18     24 Failed  0 3.637 TB SAS  HDD N   N  512B ST4000NM0023     U  -    &lt;br /&gt;
-----------------------------------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
EID-Enclosure Device ID|Slt-Slot No.|DID-Device ID|DG-DriveGroup&lt;br /&gt;
DHS-Dedicated Hot Spare|UGood-Unconfigured Good|GHS-Global Hotspare&lt;br /&gt;
UBad-Unconfigured Bad|Onln-Online|Offln-Offline|Intf-Interface&lt;br /&gt;
Med-Media Type|SED-Self Encryptive Drive|PI-Protection Info&lt;br /&gt;
SeSz-Sector Size|Sp-Spun|U-Up|D-Down|T-Transition|F-Foreign&lt;br /&gt;
UGUnsp-Unsupported|UGShld-UnConfigured shielded|HSPShld-Hotspare shielded&lt;br /&gt;
CFShld-Configured shielded|Cpybck-CopyBack|CBShld-Copyback Shielded&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Drive /c0/e8/s18 - Detailed Information :&lt;br /&gt;
=======================================&lt;br /&gt;
&lt;br /&gt;
Drive /c0/e8/s18 State :&lt;br /&gt;
======================&lt;br /&gt;
Shield Counter = 0&lt;br /&gt;
Media Error Count = 0&lt;br /&gt;
Other Error Count = 16&lt;br /&gt;
BBM Error Count = 0&lt;br /&gt;
Drive Temperature =  32C (89.60 F)&lt;br /&gt;
Predictive Failure Count = 0&lt;br /&gt;
S.M.A.R.T alert flagged by drive = No&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Drive /c0/e8/s18 Device attributes :&lt;br /&gt;
==================================&lt;br /&gt;
SN = Z1Z2S2TL0000C4216E9V&lt;br /&gt;
Manufacturer Id = SEAGATE &lt;br /&gt;
Model Number = ST4000NM0023    &lt;br /&gt;
NAND Vendor = NA&lt;br /&gt;
WWN = 5000C50057DB2A28&lt;br /&gt;
Firmware Revision = 0003&lt;br /&gt;
Firmware Release Number = 03290003&lt;br /&gt;
Raw size = 3.638 TB [0x1d1c0beb0 Sectors]&lt;br /&gt;
Coerced size = 3.637 TB [0x1d1b00000 Sectors]&lt;br /&gt;
Non Coerced size = 3.637 TB [0x1d1b0beb0 Sectors]&lt;br /&gt;
Device Speed = 6.0Gb/s&lt;br /&gt;
Link Speed = 6.0Gb/s&lt;br /&gt;
Write cache = N/A&lt;br /&gt;
Logical Sector Size = 512B&lt;br /&gt;
Physical Sector Size = 512B&lt;br /&gt;
Connector Name = Port 0 - 3 &amp;amp; Port 4 - 7 &amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== On ZFS machines ===&lt;br /&gt;
 $ zpool status&lt;br /&gt;
For instructions on how to identify and replace a failed disk on a ZFS system, [http://wiki.docking.org/index.php/Zfs#Example:_Fixing_degraded_pool.2C_replacing_faulted_disk &#039;&#039;&#039;read here&#039;&#039;&#039;].&lt;br /&gt;
&lt;br /&gt;
=== On Any RAID 1 Configuration ===&lt;br /&gt;
Steps to fix a failed hard drive in a RAID 1 configuration:&lt;br /&gt;
&lt;br /&gt;
The following demonstrates what a failed disk looks like:&lt;br /&gt;
&lt;br /&gt;
  [root@myServer ~]# cat /proc/mdstat &amp;lt;br /&amp;gt;&lt;br /&gt;
  Personalities : [raid1] &amp;lt;br/&amp;gt;&lt;br /&gt;
  md0 : active raid1 sdb1[0] sda1[2](F) &amp;lt;br/&amp;gt;&lt;br /&gt;
  128384 blocks [2/1] [U_]  &amp;lt;br/&amp;gt;&lt;br /&gt;
  md1 : active raid1 sdb2[0] sda2[2](F) &amp;lt;br/&amp;gt;&lt;br /&gt;
  16779776 blocks [2/1] [U_] &amp;lt;br/&amp;gt;&lt;br /&gt;
  md2 : active raid1 sdb3[0] sda3[2](F) &amp;lt;br/&amp;gt;&lt;br /&gt;
  139379840 blocks [2/1] [U_] &amp;lt;br/&amp;gt;    &lt;br /&gt;
  unused devices: &amp;lt;none&amp;gt; &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
  [root@myServer ~]# smartctl -a /dev/sda   &amp;lt;br/&amp;gt;              &lt;br /&gt;
  smartctl 5.42 2011-10-20 r3458 [x86_64-linux-2.6.18-371.1.2.el5] (local build)   &amp;lt;br/&amp;gt;&lt;br /&gt;
  Copyright (C) 2002-11 by Bruce Allen, http://smartmontools.sourceforge.net   &amp;lt;br/&amp;gt;&lt;br /&gt;
  Short INQUIRY response, skip product id   &amp;lt;br/&amp;gt;&lt;br /&gt;
  A mandatory SMART command failed: exiting. To continue, add one or more &#039;-T permissive&#039; options.   &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
  [root@myServer ~]# smartctl -a /dev/sdb  &amp;lt;br/&amp;gt;&lt;br /&gt;
  smartctl 5.42 2011-10-20 r3458 [x86_64-linux-2.6.18-371.1.2.el5] (local build)   &amp;lt;br/&amp;gt;&lt;br /&gt;
  Copyright (C) 2002-11 by Bruce Allen, http://smartmontools.sourceforge.net   &amp;lt;br/&amp;gt;&lt;br /&gt;
  === START OF INFORMATION SECTION ===    &amp;lt;br/&amp;gt;&lt;br /&gt;
  Model Family:     Seagate Barracuda 7200.10    &amp;lt;br/&amp;gt;&lt;br /&gt;
  Device Model:     ST3160815AS    &amp;lt;br/&amp;gt;&lt;br /&gt;
  Serial Number:    9RA6DZP8     &amp;lt;br/&amp;gt;&lt;br /&gt;
  Firmware Version: 4.AAB    &amp;lt;br/&amp;gt;&lt;br /&gt;
  User Capacity:    160,041,885,696 bytes [160 GB]   &amp;lt;br/&amp;gt;&lt;br /&gt;
  Sector Size:      512 bytes logical/physical   &amp;lt;br/&amp;gt;&lt;br /&gt;
  Device is:        In smartctl database [for details use: -P show]   &amp;lt;br/&amp;gt;&lt;br /&gt;
  ATA Version is:   7   &amp;lt;br/&amp;gt;&lt;br /&gt;
  ATA Standard is:  Exact ATA specification draft version not indicated   &amp;lt;br/&amp;gt;&lt;br /&gt;
  Local Time is:    Mon Sep  8 15:50:48 2014 PDT  &amp;lt;br/&amp;gt;&lt;br /&gt;
  SMART support is: Available - device has SMART capability.   &amp;lt;br/&amp;gt;&lt;br /&gt;
  SMART support is: Enabled   &amp;lt;br/&amp;gt;&lt;br /&gt;
  === START OF READ SMART DATA SECTION ===   &amp;lt;br/&amp;gt; &lt;br /&gt;
  SMART overall-health self-assessment test result: PASSED   &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
There is a lot more that gets printed, but I cut it out.  &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
So /dev/sda has clearly failed.   &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Take note of the GOOD disk&#039;s serial number so you leave that one in when you replace the failed disk:   &amp;lt;br/&amp;gt; &lt;br /&gt;
  Serial Number:    9RA6DZP8   &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Mark and remove failed disk from raid:   &amp;lt;br/&amp;gt; &lt;br /&gt;
&lt;br /&gt;
  [root@myServer ~]# mdadm --manage /dev/md0 --fail /dev/sda1   &amp;lt;br/&amp;gt;&lt;br /&gt;
  mdadm: set /dev/sda1 faulty in /dev/md0   &amp;lt;br/&amp;gt;&lt;br /&gt;
  [root@myServer ~]# mdadm --manage /dev/md1 --fail /dev/sda2   &amp;lt;br/&amp;gt; &lt;br /&gt;
  mdadm: set /dev/sda2 faulty in /dev/md1   &amp;lt;br/&amp;gt;&lt;br /&gt;
  [root@myServer ~]# mdadm --manage /dev/md2 --fail /dev/sda3   &amp;lt;br/&amp;gt;&lt;br /&gt;
  mdadm: set /dev/sda3 faulty in /dev/md2   &amp;lt;br/&amp;gt;&lt;br /&gt;
  [root@myServer ~]# mdadm --manage /dev/md0 --remove /dev/sda1   &amp;lt;br/&amp;gt;&lt;br /&gt;
  mdadm: hot removed /dev/sda1   &amp;lt;br/&amp;gt;&lt;br /&gt;
  [root@myServer ~]# mdadm --manage /dev/md1 --remove /dev/sda2   &amp;lt;br/&amp;gt;&lt;br /&gt;
  mdadm: hot removed /dev/sda2   &amp;lt;br/&amp;gt;&lt;br /&gt;
  [root@myServer ~]# mdadm --manage /dev/md2 --remove /dev/sda3   &amp;lt;br/&amp;gt;&lt;br /&gt;
  mdadm: hot removed /dev/sda3   &amp;lt;br/&amp;gt;&lt;br /&gt;
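&lt;br /&gt;
The same fail-and-remove sequence can be written as a small loop; a sketch, assuming the md/partition pairs shown above:&lt;br /&gt;
  for pair in md0:sda1 md1:sda2 md2:sda3; do&lt;br /&gt;
     md=${pair%%:*}; part=${pair##*:}&lt;br /&gt;
     mdadm --manage /dev/$md --fail /dev/$part&lt;br /&gt;
     mdadm --manage /dev/$md --remove /dev/$part&lt;br /&gt;
  done&lt;br /&gt;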
&lt;br /&gt;
Make sure grub is installed on the good disk and that grub.conf is updated:&lt;br /&gt;
&lt;br /&gt;
  [root@myServer ~]# grub-install /dev/sdb   &amp;lt;br/&amp;gt;&lt;br /&gt;
  Installation finished. No error reported.   &amp;lt;br/&amp;gt;&lt;br /&gt;
  This is the contents of the device map /boot/grub/device.map.   &amp;lt;br/&amp;gt;&lt;br /&gt;
  Check if this is correct or not. &amp;lt;br/&amp;gt;&lt;br /&gt;
  If any of the lines is incorrect, fix it and re-run the script `grub-install&#039;.   &amp;lt;br/&amp;gt;&lt;br /&gt;
  This device map was generated by anaconda   &amp;lt;br/&amp;gt;&lt;br /&gt;
  (hd0)     /dev/sda   &amp;lt;br/&amp;gt;&lt;br /&gt;
  (hd1)     /dev/sdb   &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Take note of which hd device corresponds to the good disk, i.e. hd1 in this case.   &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
  [root@myServer ~]# vim /boot/grub/menu.lst  &amp;lt;br/&amp;gt;&lt;br /&gt;
  Add fallback=1 right after default=0  &amp;lt;br/&amp;gt;&lt;br /&gt;
  Go to the bottom section where you should find some kernel stanzas.   &amp;lt;br/&amp;gt;&lt;br /&gt;
  Copy the first of them and paste it before the first existing stanza; replace root (hd0,0) with root (hd1,0)  &amp;lt;br/&amp;gt;&lt;br /&gt;
  Should look like this:  &amp;lt;br/&amp;gt;&lt;br /&gt;
    [...]   &amp;lt;br/&amp;gt;&lt;br /&gt;
    title CentOS (2.6.18-128.el5)  &amp;lt;br/&amp;gt;&lt;br /&gt;
    root (hd1,0)  &amp;lt;br/&amp;gt;&lt;br /&gt;
    kernel /vmlinuz-2.6.18-128.el5 ro root=/dev/VolGroup00/LogVol00  &amp;lt;br/&amp;gt;&lt;br /&gt;
    initrd /initrd-2.6.18-128.el5.img  &amp;lt;br/&amp;gt;&lt;br /&gt;
    title CentOS (2.6.18-128.el5)  &amp;lt;br/&amp;gt;&lt;br /&gt;
    root (hd0,0)  &amp;lt;br/&amp;gt;&lt;br /&gt;
    kernel /vmlinuz-2.6.18-128.el5 ro root=/dev/  &amp;lt;br/&amp;gt;&lt;br /&gt;
    initrd /initrd-2.6.18-128.el5.img  &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Save and quit   &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
  [root@myServer ~]# mv /boot/initramfs-$(uname -r).img /boot/initramfs-$(uname -r).img.bak   &amp;lt;br/&amp;gt;&lt;br /&gt;
  [root@myServer ~]# mkinitrd /boot/initramfs-$(uname -r).img $(uname -r)   &amp;lt;br/&amp;gt;&lt;br /&gt;
  [root@myServer ~]# init 0   &amp;lt;br/&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Swap the bad drive with the new drive and boot the machine.   &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Once it&#039;s booted:   &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Check the device names with cat /proc/mdstat and/or fdisk -l.   &amp;lt;br/&amp;gt;&lt;br /&gt;
The newly installed drive on myServer was named /dev/sda.   &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
  [root@myServer ~]# modprobe raid1   &amp;lt;br/&amp;gt;&lt;br /&gt;
  [root@myServer ~]# modprobe linear   &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Copy the partitions from one disk to the other:&lt;br /&gt;
&lt;br /&gt;
  [root@myServer ~]# sfdisk -d /dev/sdb | sfdisk --force /dev/sda   &amp;lt;br/&amp;gt;&lt;br /&gt;
  [root@myServer ~]# sfdisk -l =&amp;gt; sanity check   &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Add the new disk to the raid array:&lt;br /&gt;
&lt;br /&gt;
  [root@myServer ~]# mdadm --manage /dev/md0 --add /dev/sda1   &amp;lt;br/&amp;gt;&lt;br /&gt;
  mdadm: added /dev/sda1   &amp;lt;br/&amp;gt;&lt;br /&gt;
  [root@myServer ~]# mdadm --manage /dev/md1 --add /dev/sda2   &amp;lt;br/&amp;gt;&lt;br /&gt;
  mdadm: added /dev/sda2  &amp;lt;br/&amp;gt;&lt;br /&gt;
  [root@myServer ~]# mdadm --manage /dev/md2 --add /dev/sda3   &amp;lt;br/&amp;gt;&lt;br /&gt;
  mdadm: added /dev/sda3   &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Sanity check:&lt;br /&gt;
  [root@myServer ~]# cat /proc/mdstat  &amp;lt;br/&amp;gt;&lt;br /&gt;
  Personalities : [raid1] [linear]    &amp;lt;br/&amp;gt;&lt;br /&gt;
  md0 : active raid1 sda1[1] sdb1[0]   &amp;lt;br/&amp;gt;&lt;br /&gt;
  128384 blocks [2/2] [UU]   &amp;lt;br/&amp;gt;      &lt;br /&gt;
  md1 : active raid1 sda2[2] sdb2[0]   &amp;lt;br/&amp;gt;&lt;br /&gt;
  16779776 blocks [2/1] [U_]   &amp;lt;br/&amp;gt;&lt;br /&gt;
  [&amp;gt;....................]  recovery =  3.2% (548864/16779776) finish=8.8min speed=30492K/sec   &amp;lt;br/&amp;gt;&lt;br /&gt;
  md2 : active raid1 sda3[2] sdb3[0]   &amp;lt;br/&amp;gt;&lt;br /&gt;
  139379840 blocks [2/1] [U_]   &amp;lt;br/&amp;gt;&lt;br /&gt;
  resync=DELAYED   &amp;lt;br/&amp;gt;     &lt;br /&gt;
  unused devices: &amp;lt;none&amp;gt;  &amp;lt;br/&amp;gt;&lt;br /&gt;
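&lt;br /&gt;
To follow a rebuild through to completion, you can leave a watch running (standard procps watch, refreshing every 5 seconds):&lt;br /&gt;
  watch -n 5 cat /proc/mdstat&lt;br /&gt;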
&lt;br /&gt;
That&#039;s it! :)   &amp;lt;br/&amp;gt;&lt;br /&gt;
[[ Category: Ben ]]&lt;br /&gt;
[[ Category : Sysadmin ]]&lt;br /&gt;
[[Category:Tutorials]]&lt;/div&gt;</summary>
		<author><name>Jgutie11</name></author>
	</entry>
	<entry>
		<id>http://wiki.docking.org/index.php?title=How_to_Replace_a_Failed_Disk&amp;diff=13197</id>
		<title>How to Replace a Failed Disk</title>
		<link rel="alternate" type="text/html" href="http://wiki.docking.org/index.php?title=How_to_Replace_a_Failed_Disk&amp;diff=13197"/>
		<updated>2021-01-25T22:49:47Z</updated>

		<summary type="html">&lt;p&gt;Jgutie11: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Steps to fix a hard drive failure that is in a raid 1 configuration:&lt;br /&gt;
&lt;br /&gt;
The following demonstrates what a failed disk looks like:&lt;br /&gt;
&lt;br /&gt;
  [root@myServer ~]# cat /proc/mdstat &amp;lt;br /&amp;gt;&lt;br /&gt;
  Personalities : [raid1] &amp;lt;br/&amp;gt;&lt;br /&gt;
  md0 : active raid1 sdb1[0] sda1[2](F) &amp;lt;br/&amp;gt;&lt;br /&gt;
  128384 blocks [2/1] [U_]  &amp;lt;br/&amp;gt;&lt;br /&gt;
  md1 : active raid1 sdb2[0] sda2[2](F) &amp;lt;br/&amp;gt;&lt;br /&gt;
  16779776 blocks [2/1] [U_] &amp;lt;br/&amp;gt;&lt;br /&gt;
  md2 : active raid1 sdb3[0] sda3[2](F) &amp;lt;br/&amp;gt;&lt;br /&gt;
  139379840 blocks [2/1] [U_] &amp;lt;br/&amp;gt;    &lt;br /&gt;
  unused devices: &amp;lt;none&amp;gt; &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
  [root@myServer ~]# smartctl -a /dev/sda   &amp;lt;br/&amp;gt;              &lt;br /&gt;
  smartctl 5.42 2011-10-20 r3458 [x86_64-linux-2.6.18-371.1.2.el5] (local build)   &amp;lt;br/&amp;gt;&lt;br /&gt;
  Copyright (C) 2002-11 by Bruce Allen, http://smartmontools.sourceforge.net   &amp;lt;br/&amp;gt;&lt;br /&gt;
  Short INQUIRY response, skip product id   &amp;lt;br/&amp;gt;&lt;br /&gt;
  A mandatory SMART command failed: exiting. To continue, add one or more &#039;-T permissive&#039; options.   &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
  [root@myServer ~]# smartctl -a /dev/sdb  &amp;lt;br/&amp;gt;&lt;br /&gt;
  smartctl 5.42 2011-10-20 r3458 [x86_64-linux-2.6.18-371.1.2.el5] (local build)   &amp;lt;br/&amp;gt;&lt;br /&gt;
  Copyright (C) 2002-11 by Bruce Allen, http://smartmontools.sourceforge.net   &amp;lt;br/&amp;gt;&lt;br /&gt;
  === START OF INFORMATION SECTION ===    &amp;lt;br/&amp;gt;&lt;br /&gt;
  Model Family:     Seagate Barracuda 7200.10    &amp;lt;br/&amp;gt;&lt;br /&gt;
  Device Model:     ST3160815AS    &amp;lt;br/&amp;gt;&lt;br /&gt;
  Serial Number:    9RA6DZP8     &amp;lt;br/&amp;gt;&lt;br /&gt;
  Firmware Version: 4.AAB    &amp;lt;br/&amp;gt;&lt;br /&gt;
  User Capacity:    160,041,885,696 bytes [160 GB]   &amp;lt;br/&amp;gt;&lt;br /&gt;
  Sector Size:      512 bytes logical/physical   &amp;lt;br/&amp;gt;&lt;br /&gt;
  Device is:        In smartctl database [for details use: -P show]   &amp;lt;br/&amp;gt;&lt;br /&gt;
  ATA Version is:   7   &amp;lt;br/&amp;gt;&lt;br /&gt;
  ATA Standard is:  Exact ATA specification draft version not indicated   &amp;lt;br/&amp;gt;&lt;br /&gt;
  Local Time is:    Mon Sep  8 15:50:48 2014 PDT  &amp;lt;br/&amp;gt;&lt;br /&gt;
  SMART support is: Available - device has SMART capability.   &amp;lt;br/&amp;gt;&lt;br /&gt;
  SMART support is: Enabled   &amp;lt;br/&amp;gt;&lt;br /&gt;
  === START OF READ SMART DATA SECTION ===   &amp;lt;br/&amp;gt; &lt;br /&gt;
  SMART overall-health self-assessment test result: PASSED   &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
There is a lot more that gets printed, but I cut it out.  &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
So /dev/sda has clearly failed.   &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Take note of the GOOD disk&#039;s serial number so you leave that one in when you replace the failed disk:   &amp;lt;br/&amp;gt; &lt;br /&gt;
  Serial Number:    9RA6DZP8   &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Mark and remove failed disk from raid:   &amp;lt;br/&amp;gt; &lt;br /&gt;
&lt;br /&gt;
  [root@myServer ~]# mdadm --manage /dev/md0 --fail /dev/sda1   &amp;lt;br/&amp;gt;&lt;br /&gt;
  mdadm: set /dev/sda1 faulty in /dev/md0   &amp;lt;br/&amp;gt;&lt;br /&gt;
  [root@myServer ~]# mdadm --manage /dev/md1 --fail /dev/sda2   &amp;lt;br/&amp;gt; &lt;br /&gt;
  mdadm: set /dev/sda2 faulty in /dev/md1   &amp;lt;br/&amp;gt;&lt;br /&gt;
  [root@myServer ~]# mdadm --manage /dev/md2 --fail /dev/sda3   &amp;lt;br/&amp;gt;&lt;br /&gt;
  mdadm: set /dev/sda3 faulty in /dev/md2   &amp;lt;br/&amp;gt;&lt;br /&gt;
  [root@myServer ~]# mdadm --manage /dev/md0 --remove /dev/sda1   &amp;lt;br/&amp;gt;&lt;br /&gt;
  mdadm: hot removed /dev/sda1   &amp;lt;br/&amp;gt;&lt;br /&gt;
  [root@myServer ~]# mdadm --manage /dev/md1 --remove /dev/sda2   &amp;lt;br/&amp;gt;&lt;br /&gt;
  mdadm: hot removed /dev/sda2   &amp;lt;br/&amp;gt;&lt;br /&gt;
  [root@myServer ~]# mdadm --manage /dev/md2 --remove /dev/sda3   &amp;lt;br/&amp;gt;&lt;br /&gt;
  mdadm: hot removed /dev/sda3   &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Make sure grub is installed on the good disk and that grub.conf is updated:&lt;br /&gt;
&lt;br /&gt;
  [root@myServer ~]# grub-install /dev/sdb   &amp;lt;br/&amp;gt;&lt;br /&gt;
  Installation finished. No error reported.   &amp;lt;br/&amp;gt;&lt;br /&gt;
  This is the contents of the device map /boot/grub/device.map.   &amp;lt;br/&amp;gt;&lt;br /&gt;
  Check if this is correct or not. &amp;lt;br/&amp;gt;&lt;br /&gt;
  If any of the lines is incorrect, fix it and re-run the script `grub-install&#039;.   &amp;lt;br/&amp;gt;&lt;br /&gt;
  This device map was generated by anaconda   &amp;lt;br/&amp;gt;&lt;br /&gt;
  (hd0)     /dev/sda   &amp;lt;br/&amp;gt;&lt;br /&gt;
  (hd1)     /dev/sdb   &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Take note of which hd device corresponds to the good disk, i.e. hd1 in this case.   &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
  [root@myServer ~]# vim /boot/grub/menu.lst  &amp;lt;br/&amp;gt;&lt;br /&gt;
  Add fallback=1 right after default=0  &amp;lt;br/&amp;gt;&lt;br /&gt;
  Go to the bottom section where you should find some kernel stanzas.   &amp;lt;br/&amp;gt;&lt;br /&gt;
  Copy the first of them and paste it before the first existing stanza; replace root (hd0,0) with root (hd1,0)  &amp;lt;br/&amp;gt;&lt;br /&gt;
  Should look like this:  &amp;lt;br/&amp;gt;&lt;br /&gt;
    [...]   &amp;lt;br/&amp;gt;&lt;br /&gt;
    title CentOS (2.6.18-128.el5)  &amp;lt;br/&amp;gt;&lt;br /&gt;
    root (hd1,0)  &amp;lt;br/&amp;gt;&lt;br /&gt;
    kernel /vmlinuz-2.6.18-128.el5 ro root=/dev/VolGroup00/LogVol00  &amp;lt;br/&amp;gt;&lt;br /&gt;
    initrd /initrd-2.6.18-128.el5.img  &amp;lt;br/&amp;gt;&lt;br /&gt;
    title CentOS (2.6.18-128.el5)  &amp;lt;br/&amp;gt;&lt;br /&gt;
    root (hd0,0)  &amp;lt;br/&amp;gt;&lt;br /&gt;
    kernel /vmlinuz-2.6.18-128.el5 ro root=/dev/  &amp;lt;br/&amp;gt;&lt;br /&gt;
    initrd /initrd-2.6.18-128.el5.img  &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Save and quit   &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
  [root@myServer ~]# mv /boot/initramfs-$(uname -r).img /boot/initramfs-$(uname -r).img.bak   &amp;lt;br/&amp;gt;&lt;br /&gt;
  [root@myServer ~]# mkinitrd /boot/initramfs-$(uname -r).img $(uname -r)   &amp;lt;br/&amp;gt;&lt;br /&gt;
  [root@myServer ~]# init 0   &amp;lt;br/&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Swap the bad drive with the new drive and boot the machine.   &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Once it&#039;s booted:   &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Check the device names with cat /proc/mdstat and/or fdisk -l.   &amp;lt;br/&amp;gt;&lt;br /&gt;
The newly installed drive on myServer was named /dev/sda.   &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
  [root@myServer ~]# modprobe raid1   &amp;lt;br/&amp;gt;&lt;br /&gt;
  [root@myServer ~]# modprobe linear   &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Copy the partitions from one disk to the other:&lt;br /&gt;
&lt;br /&gt;
  [root@myServer ~]# sfdisk -d /dev/sdb | sfdisk --force /dev/sda   &amp;lt;br/&amp;gt;&lt;br /&gt;
  [root@myServer ~]# sfdisk -l =&amp;gt; sanity check   &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Add the new disk to the raid array:&lt;br /&gt;
&lt;br /&gt;
  [root@myServer ~]# mdadm --manage /dev/md0 --add /dev/sda1   &amp;lt;br/&amp;gt;&lt;br /&gt;
  mdadm: added /dev/sda1   &amp;lt;br/&amp;gt;&lt;br /&gt;
  [root@myServer ~]# mdadm --manage /dev/md1 --add /dev/sda2   &amp;lt;br/&amp;gt;&lt;br /&gt;
  mdadm: added /dev/sda2  &amp;lt;br/&amp;gt;&lt;br /&gt;
  [root@myServer ~]# mdadm --manage /dev/md2 --add /dev/sda3   &amp;lt;br/&amp;gt;&lt;br /&gt;
  mdadm: added /dev/sda3   &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Sanity check:&lt;br /&gt;
  [root@myServer ~]# cat /proc/mdstat  &amp;lt;br/&amp;gt;&lt;br /&gt;
  Personalities : [raid1] [linear]    &amp;lt;br/&amp;gt;&lt;br /&gt;
  md0 : active raid1 sda1[1] sdb1[0]   &amp;lt;br/&amp;gt;&lt;br /&gt;
  128384 blocks [2/2] [UU]   &amp;lt;br/&amp;gt;      &lt;br /&gt;
  md1 : active raid1 sda2[2] sdb2[0]   &amp;lt;br/&amp;gt;&lt;br /&gt;
  16779776 blocks [2/1] [U_]   &amp;lt;br/&amp;gt;&lt;br /&gt;
  [&amp;gt;....................]  recovery =  3.2% (548864/16779776) finish=8.8min speed=30492K/sec   &amp;lt;br/&amp;gt;&lt;br /&gt;
  md2 : active raid1 sda3[2] sdb3[0]   &amp;lt;br/&amp;gt;&lt;br /&gt;
  139379840 blocks [2/1] [U_]   &amp;lt;br/&amp;gt;&lt;br /&gt;
  resync=DELAYED   &amp;lt;br/&amp;gt;     &lt;br /&gt;
  unused devices: &amp;lt;none&amp;gt;  &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
That&#039;s it! :)   &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category:Sysadmin]]&lt;br /&gt;
[[Category:Tutorials]]&lt;br /&gt;
[[Category:Delete]]&lt;/div&gt;</summary>
		<author><name>Jgutie11</name></author>
	</entry>
	<entry>
		<id>http://wiki.docking.org/index.php?title=Replacing_failed_disk_on_Server&amp;diff=13196</id>
		<title>Replacing failed disk on Server</title>
		<link rel="alternate" type="text/html" href="http://wiki.docking.org/index.php?title=Replacing_failed_disk_on_Server&amp;diff=13196"/>
		<updated>2021-01-25T22:49:21Z</updated>

		<summary type="html">&lt;p&gt;Jgutie11: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== How to check if a disk failed ==&lt;br /&gt;
===Check for the light on disk===&lt;br /&gt;
&lt;br /&gt;
Solid Yellow =&amp;gt; Fail&lt;br /&gt;
&lt;br /&gt;
Blinking Yellow =&amp;gt; Predictive Failure (going to fail soon)&lt;br /&gt;
&lt;br /&gt;
Green =&amp;gt; Normal&lt;br /&gt;
=== Replace disk instruction===&lt;br /&gt;
* Determine which machine the disk belongs to.&lt;br /&gt;
* Press the red button on the disk to turn it off.&lt;br /&gt;
* Gently pull the disk partway out (NOT all the way) and wait about 10 seconds for it to stop spinning before pulling it all the way out.&lt;br /&gt;
* Find a replacement disk with the same specs.&lt;br /&gt;
* Carefully unscrew the disk from the disk holder (unless the replacement already comes in an identical holder).&lt;br /&gt;
&lt;br /&gt;
== How to check if a disk has failed or is installed correctly ==&lt;br /&gt;
=== On Cluster 0&#039;s machines ===&lt;br /&gt;
1. Log into gimel as root &lt;br /&gt;
 $ ssh root@sgehead1.bkslab.org&lt;br /&gt;
2. Log in as root to the machine you identified earlier &lt;br /&gt;
 $ ssh root@&amp;lt;machine_name&amp;gt;&lt;br /&gt;
 Example: RAID 3,6,7 belongs to nfshead2&lt;br /&gt;
3. Run this command&lt;br /&gt;
 $ /opt/compaq/hpacucli/bld/hpacucli ctrl all show config&lt;br /&gt;
&lt;br /&gt;
 Output Example:&lt;br /&gt;
 Smart Array P800 in Slot 1                (sn: PAFGF0N9SXQ0MX)&lt;br /&gt;
   array A (SATA, Unused Space: 0 MB)&lt;br /&gt;
      logicaldrive 1 (5.5 TB, RAID 1+0, OK)&lt;br /&gt;
      physicaldrive 1E:1:1 (port 1E:box 1:bay 1, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 1E:1:2 (port 1E:box 1:bay 2, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 1E:1:3 (port 1E:box 1:bay 3, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 1E:1:4 (port 1E:box 1:bay 4, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 1E:1:5 (port 1E:box 1:bay 5, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 1E:1:6 (port 1E:box 1:bay 6, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 1E:1:7 (port 1E:box 1:bay 7, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 1E:1:8 (port 1E:box 1:bay 8, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 1E:1:9 (port 1E:box 1:bay 9, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 1E:1:10 (port 1E:box 1:bay 10, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 1E:1:11 (port 1E:box 1:bay 11, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 1E:1:12 (port 1E:box 1:bay 12, SATA, 1 TB, OK)&lt;br /&gt;
   array B (SATA, Unused Space: 0 MB)&lt;br /&gt;
      logicaldrive 2 (5.5 TB, RAID 1+0, OK)&lt;br /&gt;
      physicaldrive 2E:1:1 (port 2E:box 1:bay 1, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 2E:1:2 (port 2E:box 1:bay 2, SATA, 1 TB, Predictive Failure)&lt;br /&gt;
      physicaldrive 2E:1:3 (port 2E:box 1:bay 3, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 2E:1:4 (port 2E:box 1:bay 4, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 2E:1:5 (port 2E:box 1:bay 5, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 2E:1:6 (port 2E:box 1:bay 6, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 2E:1:7 (port 2E:box 1:bay 7, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 2E:1:8 (port 2E:box 1:bay 8, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 2E:1:9 (port 2E:box 1:bay 9, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 2E:1:10 (port 2E:box 1:bay 10, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 2E:1:11 (port 2E:box 1:bay 11, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 2E:1:12 (port 2E:box 1:bay 12, SATA, 1 TB, OK)&lt;br /&gt;
   array C (SATA, Unused Space: 0 MB)&lt;br /&gt;
      logicaldrive 3 (5.5 TB, RAID 1+0, Ready for Rebuild)&lt;br /&gt;
      physicaldrive 2E:2:1 (port 2E:box 2:bay 1, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 2E:2:2 (port 2E:box 2:bay 2, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 2E:2:3 (port 2E:box 2:bay 3, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 2E:2:4 (port 2E:box 2:bay 4, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 2E:2:5 (port 2E:box 2:bay 5, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 2E:2:6 (port 2E:box 2:bay 6, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 2E:2:7 (port 2E:box 2:bay 7, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 2E:2:8 (port 2E:box 2:bay 8, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 2E:2:9 (port 2E:box 2:bay 9, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 2E:2:10 (port 2E:box 2:bay 10, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 2E:2:11 (port 2E:box 2:bay 11, SATA, 1 TB, OK)&lt;br /&gt;
      physicaldrive 2E:2:12 (port 2E:box 2:bay 12, SATA, 1 TB, OK)&lt;br /&gt;
   Expander 243 (WWID: 50014380031A4B00, Port: 1E, Box: 1)&lt;br /&gt;
   Expander 245 (WWID: 5001438005396E00, Port: 2E, Box: 2)&lt;br /&gt;
   Expander 246 (WWID: 500143800460A600, Port: 2E, Box: 1)&lt;br /&gt;
   Expander 248 (WWID: 50014380055E913F)&lt;br /&gt;
   Enclosure SEP (Vendor ID HP, Model MSA60) 241 (WWID: 50014380031A4B25, Port: 1E, Box: 1)&lt;br /&gt;
   Enclosure SEP (Vendor ID HP, Model MSA60) 242 (WWID: 5001438005396E25, Port: 2E, Box: 2)&lt;br /&gt;
   Enclosure SEP (Vendor ID HP, Model MSA60) 244 (WWID: 500143800460A625, Port: 2E, Box: 1)&lt;br /&gt;
   SEP (Vendor ID HP, Model P800) 247 (WWID: 50014380055E913E)&lt;br /&gt;
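&lt;br /&gt;
If you only want a quick health summary instead of the full configuration, hpacucli also has a status view (same install path as above):&lt;br /&gt;
 /opt/compaq/hpacucli/bld/hpacucli ctrl all show status&lt;br /&gt;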
&lt;br /&gt;
=== On &#039;&#039;&#039;shin&#039;&#039;&#039;===&lt;br /&gt;
As root&lt;br /&gt;
 /opt/MegaRAID/storcli/storcli64 /c0 /eall /sall show all&lt;br /&gt;
 &amp;lt;pre&amp;gt;Drive /c0/e8/s18 :&lt;br /&gt;
 ================&lt;br /&gt;
&lt;br /&gt;
 -----------------------------------------------------------------------------&lt;br /&gt;
EID:Slt DID State  DG     Size Intf Med SED PI SeSz Model            Sp Type &lt;br /&gt;
-----------------------------------------------------------------------------&lt;br /&gt;
8:18     24 Failed  0 3.637 TB SAS  HDD N   N  512B ST4000NM0023     U  -    &lt;br /&gt;
-----------------------------------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
EID-Enclosure Device ID|Slt-Slot No.|DID-Device ID|DG-DriveGroup&lt;br /&gt;
DHS-Dedicated Hot Spare|UGood-Unconfigured Good|GHS-Global Hotspare&lt;br /&gt;
UBad-Unconfigured Bad|Onln-Online|Offln-Offline|Intf-Interface&lt;br /&gt;
Med-Media Type|SED-Self Encryptive Drive|PI-Protection Info&lt;br /&gt;
SeSz-Sector Size|Sp-Spun|U-Up|D-Down|T-Transition|F-Foreign&lt;br /&gt;
UGUnsp-Unsupported|UGShld-UnConfigured shielded|HSPShld-Hotspare shielded&lt;br /&gt;
CFShld-Configured shielded|Cpybck-CopyBack|CBShld-Copyback Shielded&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Drive /c0/e8/s18 - Detailed Information :&lt;br /&gt;
=======================================&lt;br /&gt;
&lt;br /&gt;
Drive /c0/e8/s18 State :&lt;br /&gt;
======================&lt;br /&gt;
Shield Counter = 0&lt;br /&gt;
Media Error Count = 0&lt;br /&gt;
Other Error Count = 16&lt;br /&gt;
BBM Error Count = 0&lt;br /&gt;
Drive Temperature =  32C (89.60 F)&lt;br /&gt;
Predictive Failure Count = 0&lt;br /&gt;
S.M.A.R.T alert flagged by drive = No&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Drive /c0/e8/s18 Device attributes :&lt;br /&gt;
==================================&lt;br /&gt;
SN = Z1Z2S2TL0000C4216E9V&lt;br /&gt;
Manufacturer Id = SEAGATE &lt;br /&gt;
Model Number = ST4000NM0023    &lt;br /&gt;
NAND Vendor = NA&lt;br /&gt;
WWN = 5000C50057DB2A28&lt;br /&gt;
Firmware Revision = 0003&lt;br /&gt;
Firmware Release Number = 03290003&lt;br /&gt;
Raw size = 3.638 TB [0x1d1c0beb0 Sectors]&lt;br /&gt;
Coerced size = 3.637 TB [0x1d1b00000 Sectors]&lt;br /&gt;
Non Coerced size = 3.637 TB [0x1d1b0beb0 Sectors]&lt;br /&gt;
Device Speed = 6.0Gb/s&lt;br /&gt;
Link Speed = 6.0Gb/s&lt;br /&gt;
Write cache = N/A&lt;br /&gt;
Logical Sector Size = 512B&lt;br /&gt;
Physical Sector Size = 512B&lt;br /&gt;
Connector Name = Port 0 - 3 &amp;amp; Port 4 - 7 &amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== On ZFS machines ===&lt;br /&gt;
 $ zpool status&lt;br /&gt;
For instructions on how to identify and replace a failed disk on a ZFS system, [http://wiki.docking.org/index.php/Zfs#Example:_Fixing_degraded_pool.2C_replacing_faulted_disk &#039;&#039;&#039;read here&#039;&#039;&#039;].&lt;br /&gt;
&lt;br /&gt;
=== On Any Raid1 Configurations ===&lt;br /&gt;
Steps to fix a hard drive failure that is in a raid 1 configuration:&lt;br /&gt;
&lt;br /&gt;
The following demonstrates what a failed disk looks like:&lt;br /&gt;
&lt;br /&gt;
  [root@myServer ~]# cat /proc/mdstat &amp;lt;br /&amp;gt;&lt;br /&gt;
  Personalities : [raid1] &amp;lt;br/&amp;gt;&lt;br /&gt;
  md0 : active raid1 sdb1[0] sda1[2](F) &amp;lt;br/&amp;gt;&lt;br /&gt;
  128384 blocks [2/1] [U_]  &amp;lt;br/&amp;gt;&lt;br /&gt;
  md1 : active raid1 sdb2[0] sda2[2](F) &amp;lt;br/&amp;gt;&lt;br /&gt;
  16779776 blocks [2/1] [U_] &amp;lt;br/&amp;gt;&lt;br /&gt;
  md2 : active raid1 sdb3[0] sda3[2](F) &amp;lt;br/&amp;gt;&lt;br /&gt;
  139379840 blocks [2/1] [U_] &amp;lt;br/&amp;gt;    &lt;br /&gt;
  unused devices: &amp;lt;none&amp;gt; &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
  [root@myServer ~]# smartctl -a /dev/sda   &amp;lt;br/&amp;gt;              &lt;br /&gt;
  smartctl 5.42 2011-10-20 r3458 [x86_64-linux-2.6.18-371.1.2.el5] (local build)   &amp;lt;br/&amp;gt;&lt;br /&gt;
  Copyright (C) 2002-11 by Bruce Allen, http://smartmontools.sourceforge.net   &amp;lt;br/&amp;gt;&lt;br /&gt;
  Short INQUIRY response, skip product id   &amp;lt;br/&amp;gt;&lt;br /&gt;
  A mandatory SMART command failed: exiting. To continue, add one or more &#039;-T permissive&#039; options.   &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
  [root@myServer ~]# smartctl -a /dev/sdb  &amp;lt;br/&amp;gt;&lt;br /&gt;
  smartctl 5.42 2011-10-20 r3458 [x86_64-linux-2.6.18-371.1.2.el5] (local build)   &amp;lt;br/&amp;gt;&lt;br /&gt;
  Copyright (C) 2002-11 by Bruce Allen, http://smartmontools.sourceforge.net   &amp;lt;br/&amp;gt;&lt;br /&gt;
  === START OF INFORMATION SECTION ===    &amp;lt;br/&amp;gt;&lt;br /&gt;
  Model Family:     Seagate Barracuda 7200.10    &amp;lt;br/&amp;gt;&lt;br /&gt;
  Device Model:     ST3160815AS    &amp;lt;br/&amp;gt;&lt;br /&gt;
  Serial Number:    9RA6DZP8     &amp;lt;br/&amp;gt;&lt;br /&gt;
  Firmware Version: 4.AAB    &amp;lt;br/&amp;gt;&lt;br /&gt;
  User Capacity:    160,041,885,696 bytes [160 GB]   &amp;lt;br/&amp;gt;&lt;br /&gt;
  Sector Size:      512 bytes logical/physical   &amp;lt;br/&amp;gt;&lt;br /&gt;
  Device is:        In smartctl database [for details use: -P show]   &amp;lt;br/&amp;gt;&lt;br /&gt;
  ATA Version is:   7   &amp;lt;br/&amp;gt;&lt;br /&gt;
  ATA Standard is:  Exact ATA specification draft version not indicated   &amp;lt;br/&amp;gt;&lt;br /&gt;
  Local Time is:    Mon Sep  8 15:50:48 2014 PDT  &amp;lt;br/&amp;gt;&lt;br /&gt;
  SMART support is: Available - device has SMART capability.   &amp;lt;br/&amp;gt;&lt;br /&gt;
  SMART support is: Enabled   &amp;lt;br/&amp;gt;&lt;br /&gt;
  === START OF READ SMART DATA SECTION ===   &amp;lt;br/&amp;gt; &lt;br /&gt;
  SMART overall-health self-assessment test result: PASSED   &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
There is a lot more that gets printed, but I cut it out.  &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
So /dev/sda has clearly failed.   &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Take note of the GOOD disk&#039;s serial number so you leave that one in when you replace the failed disk:   &amp;lt;br/&amp;gt; &lt;br /&gt;
  Serial Number:    9RA6DZP8   &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Mark and remove failed disk from raid:   &amp;lt;br/&amp;gt; &lt;br /&gt;
&lt;br /&gt;
  [root@myServer ~]# mdadm --manage /dev/md0 --fail /dev/sda1   &amp;lt;br/&amp;gt;&lt;br /&gt;
  mdadm: set /dev/sda1 faulty in /dev/md0   &amp;lt;br/&amp;gt;&lt;br /&gt;
  [root@myServer ~]# mdadm --manage /dev/md1 --fail /dev/sda2   &amp;lt;br/&amp;gt; &lt;br /&gt;
  mdadm: set /dev/sda2 faulty in /dev/md1   &amp;lt;br/&amp;gt;&lt;br /&gt;
  [root@myServer ~]# mdadm --manage /dev/md2 --fail /dev/sda3   &amp;lt;br/&amp;gt;&lt;br /&gt;
  mdadm: set /dev/sda3 faulty in /dev/md2   &amp;lt;br/&amp;gt;&lt;br /&gt;
  [root@myServer ~]# mdadm --manage /dev/md0 --remove /dev/sda1   &amp;lt;br/&amp;gt;&lt;br /&gt;
  mdadm: hot removed /dev/sda1   &amp;lt;br/&amp;gt;&lt;br /&gt;
  [root@myServer ~]# mdadm --manage /dev/md1 --remove /dev/sda2   &amp;lt;br/&amp;gt;&lt;br /&gt;
  mdadm: hot removed /dev/sda2   &amp;lt;br/&amp;gt;&lt;br /&gt;
  [root@myServer ~]# mdadm --manage /dev/md2 --remove /dev/sda3   &amp;lt;br/&amp;gt;&lt;br /&gt;
  mdadm: hot removed /dev/sda3   &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Make sure grub is installed on the good disk and that grub.conf is updated:&lt;br /&gt;
&lt;br /&gt;
  [root@myServer ~]# grub-install /dev/sdb   &amp;lt;br/&amp;gt;&lt;br /&gt;
  Installation finished. No error reported.   &amp;lt;br/&amp;gt;&lt;br /&gt;
  This is the contents of the device map /boot/grub/device.map.   &amp;lt;br/&amp;gt;&lt;br /&gt;
  Check if this is correct or not. &amp;lt;br/&amp;gt;&lt;br /&gt;
  If any of the lines is incorrect, fix it and re-run the script `grub-install&#039;.   &amp;lt;br/&amp;gt;&lt;br /&gt;
  This device map was generated by anaconda   &amp;lt;br/&amp;gt;&lt;br /&gt;
  (hd0)     /dev/sda   &amp;lt;br/&amp;gt;&lt;br /&gt;
  (hd1)     /dev/sdb   &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Take note of which hd device corresponds to the good disk, i.e. hd1 in this case.   &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
  [root@myServer ~]# vim /boot/grub/menu.lst  &amp;lt;br/&amp;gt;&lt;br /&gt;
  Add fallback=1 right after default=0  &amp;lt;br/&amp;gt;&lt;br /&gt;
  Go to the bottom section where you should find some kernel stanzas.   &amp;lt;br/&amp;gt;&lt;br /&gt;
  Copy the first of them and paste it before the first existing stanza; replace root (hd0,0) with root (hd1,0)  &amp;lt;br/&amp;gt;&lt;br /&gt;
  Should look like this:  &amp;lt;br/&amp;gt;&lt;br /&gt;
    [...]   &amp;lt;br/&amp;gt;&lt;br /&gt;
    title CentOS (2.6.18-128.el5)  &amp;lt;br/&amp;gt;&lt;br /&gt;
    root (hd1,0)  &amp;lt;br/&amp;gt;&lt;br /&gt;
    kernel /vmlinuz-2.6.18-128.el5 ro root=/dev/VolGroup00/LogVol00  &amp;lt;br/&amp;gt;&lt;br /&gt;
    initrd /initrd-2.6.18-128.el5.img  &amp;lt;br/&amp;gt;&lt;br /&gt;
    title CentOS (2.6.18-128.el5)  &amp;lt;br/&amp;gt;&lt;br /&gt;
    root (hd0,0)  &amp;lt;br/&amp;gt;&lt;br /&gt;
    kernel /vmlinuz-2.6.18-128.el5 ro root=/dev/  &amp;lt;br/&amp;gt;&lt;br /&gt;
    initrd /initrd-2.6.18-128.el5.img  &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Save and quit   &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
  [root@myServer ~]# mv /boot/initramfs-$(uname -r).img /boot/initramfs-$(uname -r).img.bak   &amp;lt;br/&amp;gt;&lt;br /&gt;
  [root@myServer ~]# mkinitrd /boot/initramfs-$(uname -r).img $(uname -r)   &amp;lt;br/&amp;gt;&lt;br /&gt;
  [root@myServer ~]# init 0   &amp;lt;br/&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Swap the bad drive with the new drive and boot the machine.   &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Once it&#039;s booted:   &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Check the device names with cat /proc/mdstat and/or fdisk -l.   &amp;lt;br/&amp;gt;&lt;br /&gt;
The newly installed drive on myServer was named /dev/sda.   &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
  [root@myServer ~]# modprobe raid1   &amp;lt;br/&amp;gt;&lt;br /&gt;
  [root@myServer ~]# modprobe linear   &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Copy the partitions from one disk to the other:&lt;br /&gt;
&lt;br /&gt;
  [root@myServer ~]# sfdisk -d /dev/sdb | sfdisk --force /dev/sda   &amp;lt;br/&amp;gt;&lt;br /&gt;
  [root@myServer ~]# sfdisk -l =&amp;gt; sanity check   &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Add the new disk to the raid array:&lt;br /&gt;
&lt;br /&gt;
  [root@myServer ~]# mdadm --manage /dev/md0 --add /dev/sda1   &amp;lt;br/&amp;gt;&lt;br /&gt;
  mdadm: added /dev/sda1   &amp;lt;br/&amp;gt;&lt;br /&gt;
  [root@myServer ~]# mdadm --manage /dev/md1 --add /dev/sda2   &amp;lt;br/&amp;gt;&lt;br /&gt;
  mdadm: added /dev/sda2  &amp;lt;br/&amp;gt;&lt;br /&gt;
  [root@myServer ~]# mdadm --manage /dev/md2 --add /dev/sda3   &amp;lt;br/&amp;gt;&lt;br /&gt;
  mdadm: added /dev/sda3   &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Sanity check:&lt;br /&gt;
  [root@myServer ~]# cat /proc/mdstat  &amp;lt;br/&amp;gt;&lt;br /&gt;
  Personalities : [raid1] [linear]    &amp;lt;br/&amp;gt;&lt;br /&gt;
  md0 : active raid1 sda1[1] sdb1[0]   &amp;lt;br/&amp;gt;&lt;br /&gt;
  128384 blocks [2/2] [UU]   &amp;lt;br/&amp;gt;      &lt;br /&gt;
  md1 : active raid1 sda2[2] sdb2[0]   &amp;lt;br/&amp;gt;&lt;br /&gt;
  16779776 blocks [2/1] [U_]   &amp;lt;br/&amp;gt;&lt;br /&gt;
  [&amp;gt;....................]  recovery =  3.2% (548864/16779776) finish=8.8min speed=30492K/sec   &amp;lt;br/&amp;gt;&lt;br /&gt;
  md2 : active raid1 sda3[2] sdb3[0]   &amp;lt;br/&amp;gt;&lt;br /&gt;
  139379840 blocks [2/1] [U_]   &amp;lt;br/&amp;gt;&lt;br /&gt;
  resync=DELAYED   &amp;lt;br/&amp;gt;     &lt;br /&gt;
  unused devices: &amp;lt;none&amp;gt;  &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
That&#039;s it! :)   &amp;lt;br/&amp;gt;&lt;br /&gt;
[[ Category: Ben ]]&lt;br /&gt;
[[ Category : Sysadmin ]]&lt;br /&gt;
[[Category:Tutorials]]&lt;/div&gt;</summary>
		<author><name>Jgutie11</name></author>
	</entry>
	<entry>
		<id>http://wiki.docking.org/index.php?title=Mount_smallworld_disks&amp;diff=13192</id>
		<title>Mount smallworld disks</title>
		<link rel="alternate" type="text/html" href="http://wiki.docking.org/index.php?title=Mount_smallworld_disks&amp;diff=13192"/>
		<updated>2021-01-11T23:18:08Z</updated>

		<summary type="html">&lt;p&gt;Jgutie11: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt; yum install epel-release -y&lt;br /&gt;
 yum install ntfs-3g -y&lt;br /&gt;
then&lt;br /&gt;
 ls -ldt /dev/s* |head &lt;br /&gt;
to get recent new disks after plugging them in &lt;br /&gt;
&lt;br /&gt;
then &lt;br /&gt;
 mount /dev/sdap1 /mnt/dsk1&lt;br /&gt;
 mount /dev/sdaq1 /mnt/dsk2&lt;br /&gt;
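&lt;br /&gt;
As a sanity check, lsblk from util-linux will confirm the filesystem type of the new partitions and show where they are mounted (device names as in the example above):&lt;br /&gt;
 lsblk -f /dev/sdap1 /dev/sdaq1&lt;br /&gt;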
&lt;br /&gt;
then, bob is your uncle.&lt;br /&gt;
&lt;br /&gt;
=== If on private network and not set up for NAT to internet, do this... ===&lt;br /&gt;
On CentOS 7 systems do:&lt;br /&gt;
 ifconfig&lt;br /&gt;
 *find which port is private&lt;br /&gt;
 nmtui&lt;br /&gt;
 *enter port settings&lt;br /&gt;
 *remove existing gateway&lt;br /&gt;
 *replace with 10.20.0.6&lt;br /&gt;
 *save and exit&lt;br /&gt;
 systemctl restart network&lt;br /&gt;
 systemctl restart NetworkManager&lt;br /&gt;
&lt;br /&gt;
[[Category:Sysadmin]]&lt;/div&gt;</summary>
		<author><name>Jgutie11</name></author>
	</entry>
	<entry>
		<id>http://wiki.docking.org/index.php?title=Mount_smallworld_disks&amp;diff=13186</id>
		<title>Mount smallworld disks</title>
		<link rel="alternate" type="text/html" href="http://wiki.docking.org/index.php?title=Mount_smallworld_disks&amp;diff=13186"/>
		<updated>2021-01-08T18:31:05Z</updated>

		<summary type="html">&lt;p&gt;Jgutie11: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt; yum install epel-release -y&lt;br /&gt;
 yum install ntfs-3g -y&lt;br /&gt;
then&lt;br /&gt;
 ls -ldt /dev/s* |head &lt;br /&gt;
to get recent new disks after plugging them in &lt;br /&gt;
&lt;br /&gt;
then &lt;br /&gt;
 mount /dev/sdap1 /mnt/dsk1&lt;br /&gt;
 mount /dev/sdaq1 /mnt/dsk2&lt;br /&gt;
&lt;br /&gt;
then, bob is your uncle.&lt;br /&gt;
&lt;br /&gt;
=== If on private network and not set up for NAT to internet, do this... ===&lt;br /&gt;
On CentOS 7 systems do:&lt;br /&gt;
 ifconfig&lt;br /&gt;
 *find which port is private&lt;br /&gt;
 nmtui&lt;br /&gt;
 *enter port settings&lt;br /&gt;
 *remove existing gateway&lt;br /&gt;
 *replace with 10.20.0.6&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:Sysadmin]]&lt;/div&gt;</summary>
		<author><name>Jgutie11</name></author>
	</entry>
	<entry>
		<id>http://wiki.docking.org/index.php?title=Arthor_Documentation_for_Future_Developer&amp;diff=13183</id>
		<title>Arthor Documentation for Future Developer</title>
		<link rel="alternate" type="text/html" href="http://wiki.docking.org/index.php?title=Arthor_Documentation_for_Future_Developer&amp;diff=13183"/>
		<updated>2021-01-07T01:45:50Z</updated>

		<summary type="html">&lt;p&gt;Jgutie11: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Written by Jennifer Young on December 16, 2019. Last edited January 05, 2021&lt;br /&gt;
&lt;br /&gt;
==Install and Set Up on Tomcat (Method 1)==&lt;br /&gt;
Arthor ran on n-1-136, which runs CentOS Linux release 7.7.1908 (Core).  You can check the version of CentOS with the following command&lt;br /&gt;
     cat /etc/centos-release&lt;br /&gt;
&lt;br /&gt;
Check your current version of Java with the following command:&lt;br /&gt;
    java -version&lt;br /&gt;
&lt;br /&gt;
On n-1-136 we are running openjdk version &amp;quot;1.8.0_222&amp;quot;, OpenJDK Runtime Environment (build 1.8.0_222-b10), and OpenJDK 64-Bit Server VM (build 25.222-b10, mixed mode)&lt;br /&gt;
If Java is not installed, install it using yum.&lt;br /&gt;
&lt;br /&gt;
==See this wiki page for more detailed information about installing Tomcat on our cluster==&lt;br /&gt;
http://wiki.docking.org/index.php/Tomcat_Installation&lt;br /&gt;
&lt;br /&gt;
==Open port for Arthor==&lt;br /&gt;
In order for Arthor to be usable in the browser, the port you wish to run it on must be opened.&lt;br /&gt;
https://www.thegeekdiary.com/how-to-open-a-ports-in-centos-rhel-7/&lt;br /&gt;
&lt;br /&gt;
===Step 1: Check Port Status===&lt;br /&gt;
Check that the port is not already in use and that Apache is not already listening on it. &lt;br /&gt;
    netstat -na | grep &amp;lt;port number you are checking&amp;gt;&lt;br /&gt;
&lt;br /&gt;
    lsof -i -P |grep http&lt;br /&gt;
&lt;br /&gt;
===Step 2: Check Port Status in IP Tables===&lt;br /&gt;
    iptables-save | grep &amp;lt;port number you are checking&amp;gt;&lt;br /&gt;
&lt;br /&gt;
I skipped Step 3 from the guide, because there was a lot of information in the /etc/services file and I didn&#039;t want to edit it and break something.&lt;br /&gt;
&lt;br /&gt;
===Step 4: Open Firewall Ports===&lt;br /&gt;
I did not include the zone=public section because the stand-alone servers are usually used for private instances of Arthor and SmallWorld.&lt;br /&gt;
Run as root.&lt;br /&gt;
    firewall-cmd --add-port=&amp;lt;port number you are adding&amp;gt;/tcp --permanent&lt;br /&gt;
&lt;br /&gt;
You need to reload the firewall after a change is made.&lt;br /&gt;
    firewall-cmd --reload&lt;br /&gt;
&lt;br /&gt;
===Step 5: Check that port is working===&lt;br /&gt;
To check that the port is active, run:&lt;br /&gt;
    iptables -nL&lt;br /&gt;
&lt;br /&gt;
You should see something along the lines of: &lt;br /&gt;
    ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:&amp;lt;port number you&#039;re adding&amp;gt; ctstate NEW,UNTRACKED&lt;br /&gt;
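&lt;br /&gt;
Another quick check, assuming firewalld is the active firewall on the machine, is to list the ports it currently has open:&lt;br /&gt;
    firewall-cmd --list-ports&lt;br /&gt;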
&lt;br /&gt;
==How to run standalone Arthor instance==&lt;br /&gt;
&lt;br /&gt;
===Step 1: Use or start a bash shell===&lt;br /&gt;
You can check your default shell using&lt;br /&gt;
    echo $SHELL&lt;br /&gt;
&lt;br /&gt;
If your default shell is csh, use &lt;br /&gt;
    bash&lt;br /&gt;
to start a new bash shell in the current terminal window.  Note that echo $SHELL will show you your default shell regardless of the current shell.&lt;br /&gt;
&lt;br /&gt;
===Step 2: Set your environment variables===&lt;br /&gt;
    export ARTHOR_DIR=/opt/nextmove/arthor/arthor-3.3-centos7&lt;br /&gt;
    export PATH=$ARTHOR_DIR/bin/:$PATH&lt;br /&gt;
&lt;br /&gt;
Make sure the ARTHOR_DIR variable is set to the directory for the latest version of Arthor or whichever version you would like to test.&lt;br /&gt;
The PATH environment variable is needed if you wish to use the Arthor tools from the command line.&lt;br /&gt;
&lt;br /&gt;
===Step 3: Run the arthor-server.jar===&lt;br /&gt;
    java -jar /opt/nextmove/arthor/arthor-3.3-centos7/java/arthor.jar --httpPort &amp;lt;your httpPort&amp;gt;&lt;br /&gt;
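&lt;br /&gt;
A quick smoke test from the same machine; this assumes, purely for illustration, that you chose 8080 as the httpPort:&lt;br /&gt;
    curl -sI http://localhost:8080/ | head -1&lt;br /&gt;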
&lt;br /&gt;
==Setting environment variables for an Arthor Server==&lt;br /&gt;
Set the environment variables in the setenv.sh file.  Note: Be sure to edit the file in the directory corresponding to the latest version of Tomcat.  As of December 2019, we are running 9.0.27 on n-1-136.&lt;br /&gt;
&lt;br /&gt;
   vim  /opt/tomcat/apache-tomcat-9.0.27/bin/setenv.sh&lt;br /&gt;
&lt;br /&gt;
Add the line below to the setenv.sh file above, or substitute the path to wherever you currently store the arthor.cfg file&lt;br /&gt;
   export ARTHOR_CONFIG=/usr/local/tomcat/arthor.cfg&lt;br /&gt;
&lt;br /&gt;
Here is an example of the arthor.cfg file:&lt;br /&gt;
   BinDir=/opt/nextmove/arthor/arthor-3.3-centos7/bin&lt;br /&gt;
   DataDir=/local2/arthor_local_8081/&lt;br /&gt;
   MaxConcurrentSearches=6&lt;br /&gt;
   MaxThreadsPerSearch=8&lt;br /&gt;
   AutomaticIndex=false&lt;br /&gt;
   AsyncHitCountMax=1000000&lt;br /&gt;
   Resolver=https://sw.docking.org/util/smi2mol?smi=%s&lt;br /&gt;
&lt;br /&gt;
=== Configuration Details ===&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;BinDir&#039;&#039;&#039;: The location of the Arthor command line binaries.  These are used to generate the Arthor index files and to perform searches directly on n-1-136.  An example of this would be using atdbgrep for substructure search. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;DataDir&#039;&#039;&#039;: The directory where the Arthor data files live; index files are created in and loaded from here.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;MaxConcurrentSearches&#039;&#039;&#039;: Controls the maximum number of searches that can be run concurrently by setting the database pool size. When switching between a large number of databases it can be useful to have a larger pool size; the only trade-off is keeping file pointers open.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;MaxThreadsPerSearch&#039;&#039;&#039;: The number of threads to use for both ATDB and ATFP searches&lt;br /&gt;
&lt;br /&gt;
*Set &#039;&#039;&#039;AutomaticIndex&#039;&#039;&#039; to false if you don&#039;t want new SMILES files added to the data directory to be indexed automatically&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;AsyncHitCountMax&#039;&#039;&#039;: The upper-bound for the number of hits to retrieve in background searches.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Resolver&#039;&#039;&#039;: Using the SmallWorld API, this lets the input box accept SMILES and automatically draw the structure on the sketch board.&lt;br /&gt;
&lt;br /&gt;
Check Arthor manual for more configuration options.&lt;br /&gt;
&lt;br /&gt;
==Background==&lt;br /&gt;
Before working with Arthor, it is recommended that you familiarize yourself with the Arthor documentation. Some useful pages to look at include 3-5, 22-25 and 33-39. Of course, reading everything would be the best!&lt;br /&gt;
&lt;br /&gt;
==Checking Disk Space Usage==&lt;br /&gt;
Before building Arthor indexes, it&#039;s always a good idea to check what percentage of the disk is already in use. Be cautious with how much space you have left, and keep checking while the indexes build to make sure you don&#039;t run out. To check, run the following command:&lt;br /&gt;
&lt;br /&gt;
   df -h /&amp;lt;directory with disk&amp;gt;&lt;br /&gt;
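&lt;br /&gt;
To see how much of that space the Arthor data files themselves occupy, point du at the DataDir from your arthor.cfg (the path here is the example DataDir shown above):&lt;br /&gt;
   du -sh /local2/arthor_local_8081/&lt;br /&gt;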
&lt;br /&gt;
==Building Large Databases==&lt;br /&gt;
At the moment, we are building databases of roughly 500M molecules by merging SMILES files. There are multiple ways to create large databases; one is merging files that share the same H?? prefix and stopping once the database reaches &amp;gt; 500M molecules (or whatever upper bound you want to use). Here is some Python code that performs this merging process. Essentially, the program takes all of the .smi files within an input directory, sorts them lexicographically, and merges these .smi files together in order until the size reaches &amp;gt; 500M molecules.&lt;br /&gt;
   &lt;br /&gt;
Feel free to modify it if you think a better method exists.&lt;br /&gt;
   &lt;br /&gt;
   import subprocess&lt;br /&gt;
   &lt;br /&gt;
   from os import listdir&lt;br /&gt;
   from os.path import isfile, join&lt;br /&gt;
   &lt;br /&gt;
   # Directory holding the input .smi files (one molecule per line).&lt;br /&gt;
   mypath = &amp;quot;&amp;lt;Path to directory holding .smi files&amp;gt;&amp;quot;&lt;br /&gt;
   onlyfiles = [f for f in listdir(mypath) if isfile(join(mypath, f))]&lt;br /&gt;
   onlyfiles.sort()  # merge in lexicographic order&lt;br /&gt;
   &lt;br /&gt;
   cur_mols = 0&lt;br /&gt;
   lower_bound = 500000000  # flush a batch once it passes this many molecules&lt;br /&gt;
   upper_bound = 600000000  # never let a single merge grow past this&lt;br /&gt;
   files_to_merge = []&lt;br /&gt;
   &lt;br /&gt;
   # Concatenate a batch of .smi files into one merged file, named after&lt;br /&gt;
   # the first and last prefixes in the batch, in the current directory.&lt;br /&gt;
   def merge_files(f_t_m):&lt;br /&gt;
      arr = f_t_m[0].split(&amp;quot;.&amp;quot;)&lt;br /&gt;
      arr2 = f_t_m[len(f_t_m) - 1].split(&amp;quot;.&amp;quot;)&lt;br /&gt;
      file_name_merge = (arr[0] + &amp;quot;_&amp;quot; + arr2[0] + &amp;quot;.smi&amp;quot;)&lt;br /&gt;
      print(&amp;quot;File being created: &amp;quot; + file_name_merge)&lt;br /&gt;
   &lt;br /&gt;
      for file in f_t_m:&lt;br /&gt;
         process = subprocess.Popen(&amp;quot;cat &amp;quot; + join(mypath, file) + &amp;quot; &amp;gt;&amp;gt; &amp;quot; + file_name_merge, shell=True)&lt;br /&gt;
         process.wait()&lt;br /&gt;
   &lt;br /&gt;
   for file in onlyfiles:&lt;br /&gt;
      arr = file.split(&amp;quot;.&amp;quot;)&lt;br /&gt;
   &lt;br /&gt;
      if (arr[len(arr) - 1] == &amp;quot;smi&amp;quot;):&lt;br /&gt;
         print(&amp;quot;Working with &amp;quot; + file)&lt;br /&gt;
         mol = sum(1 for line in open(join(mypath, file)))  # molecules == lines&lt;br /&gt;
         print(file, mol, cur_mols)&lt;br /&gt;
   &lt;br /&gt;
         if (cur_mols + mol &amp;gt; lower_bound):&lt;br /&gt;
            if (cur_mols + mol &amp;lt; upper_bound):&lt;br /&gt;
               # Adding this file puts the batch in the target window: flush it.&lt;br /&gt;
               files_to_merge.append(file)&lt;br /&gt;
               merge_files(files_to_merge)&lt;br /&gt;
               cur_mols = 0&lt;br /&gt;
               files_to_merge.clear()&lt;br /&gt;
            else:&lt;br /&gt;
               # Adding this file would overshoot: flush the batch first,&lt;br /&gt;
               # then write this file out as its own database.&lt;br /&gt;
               merge_files(files_to_merge)&lt;br /&gt;
               files_to_merge.clear()&lt;br /&gt;
               files_to_merge.append(file)&lt;br /&gt;
               merge_files(files_to_merge)&lt;br /&gt;
               cur_mols = 0&lt;br /&gt;
               files_to_merge.clear()&lt;br /&gt;
         else:&lt;br /&gt;
            cur_mols += mol&lt;br /&gt;
            files_to_merge.append(file)&lt;br /&gt;
   &lt;br /&gt;
   # Flush whatever is left at the end.&lt;br /&gt;
   if (len(files_to_merge) != 0):&lt;br /&gt;
      merge_files(files_to_merge)&lt;br /&gt;
&lt;br /&gt;
==Building Arthor Indexes==&lt;br /&gt;
Once you&#039;ve merged the .smi files together, it&#039;s time to start building the databases themselves. To do this we use the command &lt;br /&gt;
&lt;br /&gt;
   smi2atdb -j 0 -p &amp;lt;The .smi file&amp;gt; &amp;lt;The .atdb&amp;gt; &lt;br /&gt;
&lt;br /&gt;
The flag &amp;quot;-j 0&amp;quot; enables parallel generation and utilizes all available processors to generate the .atdb file. The &amp;quot;-p&amp;quot; flag stores the offset position in the ATDB file. Since we&#039;re building indexes for the Web Application, you must use the &amp;quot;-p&amp;quot; flag when building indexes. Please note that the name of the .smi file should also be the name of the .atdb file. That way, the Web Application knows to use these files together and correctly display the required images. Refer to pages 33-34 in the Arthor documentation for more information.&lt;br /&gt;
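&lt;br /&gt;
For example, with a hypothetical merged file H04_H05.smi, the matching pair of index-building commands would be:&lt;br /&gt;
   smi2atdb -j 0 -p H04_H05.smi H04_H05.atdb&lt;br /&gt;
   atdb2fp -j 0 H04_H05.atdb&lt;br /&gt;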
&lt;br /&gt;
If there are many large .smi files and you do not want to build each .atdb file manually, you can use this Python script, which takes all of the .smi files in the given directory and converts them to .atdb files. Make sure to set mypath to the directory containing the .smi files. You can change the variable &amp;quot;create_fp&amp;quot; to False if you don&#039;t want to create .atdb.fp files (refer to page 9 in the Arthor documentation).&lt;br /&gt;
&lt;br /&gt;
   import subprocess&lt;br /&gt;
   &lt;br /&gt;
   from os import listdir&lt;br /&gt;
   from os.path import isfile, join&lt;br /&gt;
   &lt;br /&gt;
   # Directory holding the .smi files to index.&lt;br /&gt;
   mypath = &amp;quot;&amp;lt;Path containing the .smi files&amp;gt;&amp;quot;&lt;br /&gt;
   onlyfiles = [f for f in listdir(mypath) if isfile(join(mypath, f))]&lt;br /&gt;
   &lt;br /&gt;
   # Set to False if you do not want the .atdb.fp fingerprint files.&lt;br /&gt;
   create_fp = True&lt;br /&gt;
   &lt;br /&gt;
   for file in onlyfiles:&lt;br /&gt;
      arr = file.split(&amp;quot;.&amp;quot;)&lt;br /&gt;
   &lt;br /&gt;
      if (arr[len(arr) - 1] == &amp;quot;smi&amp;quot;):&lt;br /&gt;
         # Build the .atdb index (written to the current working directory).&lt;br /&gt;
         process = subprocess.Popen(&amp;quot;/nfs/ex9/work/xyz/psql/arthor-3.3-centos7/bin/smi2atdb -j 0 -p {0} {1}.atdb&amp;quot;.format(join(mypath, file), arr[0]), shell=True)&lt;br /&gt;
         process.wait()&lt;br /&gt;
   &lt;br /&gt;
         print(&amp;quot;SUCCESS! {0}.atdb file was created!&amp;quot;.format(arr[0]))&lt;br /&gt;
   &lt;br /&gt;
         if (create_fp):&lt;br /&gt;
            # Derive the fingerprint file from the .atdb index.&lt;br /&gt;
            process = subprocess.Popen(&amp;quot;/nfs/ex9/work/xyz/psql/arthor-3.3-centos7/bin/atdb2fp -j 0 {0}.atdb&amp;quot;.format(arr[0]), shell=True)&lt;br /&gt;
            process.wait()&lt;br /&gt;
   &lt;br /&gt;
            print(&amp;quot;SUCCESS! {0}.atdb.fp file was created!&amp;quot;.format(arr[0]))&lt;br /&gt;
&lt;br /&gt;
==Uploading Indexes to the Web Application==&lt;br /&gt;
One can upload indexes to the Web Application by changing the &amp;quot;DATADIR&amp;quot; variable in the arthor.cfg file to the directory holding the .atdb files. This is already set up on n-1-136 and n-5-34.&lt;br /&gt;
   &lt;br /&gt;
==Virtual Memory==&lt;br /&gt;
In addition to modifying the arthor.cfg file, virtual memory can also be used to make queries faster. More information can be found on pages 10-16 of the Arthor documentation.&lt;br /&gt;
&lt;br /&gt;
==Setting up Round Table==&lt;br /&gt;
This is a new feature in Arthor 3.0 and is currently beta (January 2020). See section 2.4 in the manual.&lt;br /&gt;
As explained in the manual, &amp;quot;Round Table allows you to serve and split chemical searches across multiple host machines.  The implementation provides a lightweight proxy that forwards requests to other Arthor host servers that do the actual search.  Communication is done using the existing Web APIs.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Since Arthor requires CentOS 7, as of January 2020 we have 6 servers that are capable of running Arthor with Round Table.&lt;br /&gt;
&lt;br /&gt;
===Setting up Host Server===&lt;br /&gt;
If we want to add machines to the Round Table, for example &#039;nun&#039; and &#039;samekh&#039;, we need to edit their arthor.cfg files so that when our local machine forwards requests, these secondary servers know to perform the search they are given.&lt;br /&gt;
&lt;br /&gt;
   $ cat arthor.cfg&lt;br /&gt;
   MaxThreadsPerSearch=4 &lt;br /&gt;
   AutomaticIndex=false &lt;br /&gt;
   DATADIR=&amp;lt;Directory where smiles are located&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We then run the jar server, on any available port, on each of the host machines that hold data. &lt;br /&gt;
&lt;br /&gt;
   java -jar /nfs/ex9/work/xyz/psql/arthor-3.3-centos7/java/arthor.jar --httpPort &amp;lt;port&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For our local machine, the arthor.cfg file will look different.&lt;br /&gt;
&lt;br /&gt;
   $ cat arthor.cfg&lt;br /&gt;
   [RoundTable] &lt;br /&gt;
   RemoteClient=http://skynet:&amp;lt;port number where jar server is running&amp;gt;/ &lt;br /&gt;
   RemoteClient=http://hal:&amp;lt;port number where jar server is running&amp;gt;/&lt;br /&gt;
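&lt;br /&gt;
Before starting the proxy it is worth confirming that each host answers; a sketch, assuming purely for illustration that the hosts in the config above serve on port 8000:&lt;br /&gt;
   for host in skynet:8000 hal:8000; do curl -sI http://$host/ | head -1; done&lt;br /&gt;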
&lt;br /&gt;
Please refer to Section 2 in the RoundTable Documentation file (pages 6-8) for more useful information on configuration.&lt;br /&gt;
&lt;br /&gt;
Then run the following command on n-1-136:&lt;br /&gt;
 &lt;br /&gt;
   java -jar /nfs/ex9/work/xyz/psql/arthor-3.3-centos7/java/arthor.jar --httpPort &amp;lt;port&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Public Arthor===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! CentOS 7 Machine&lt;br /&gt;
! Port&lt;br /&gt;
! Total Files Size&lt;br /&gt;
! Arthor Install Location&lt;br /&gt;
! Round Table Data Directory&lt;br /&gt;
! Active&lt;br /&gt;
|-&lt;br /&gt;
| samekh&lt;br /&gt;
| 10.20.0.41:8000&lt;br /&gt;
| 2.4TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/public_arthor/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| nun&lt;br /&gt;
| 10.20.0.40:8000&lt;br /&gt;
| 2.4TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/public_arthor/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| n-9-22&lt;br /&gt;
| 10.20.9.22:8000&lt;br /&gt;
| 2.4TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /export/db4/public_arthor/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Arthor Round Table Head===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! CentOS 7 Machine&lt;br /&gt;
! Port&lt;br /&gt;
! Database&lt;br /&gt;
! Arthor Install Location&lt;br /&gt;
! Round Table Data Directory&lt;br /&gt;
! Active&lt;br /&gt;
|-&lt;br /&gt;
| samekh&lt;br /&gt;
| 10.20.0.41:8080&lt;br /&gt;
| Enamine_REAL_Q2-2020-All-13B&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| nun&lt;br /&gt;
| 10.20.0.40:8080&lt;br /&gt;
| Enamine_REAL_Q2-2020-All-41B&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Arthor Round Table Nodes===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! CentOS 7 Machine&lt;br /&gt;
! Port&lt;br /&gt;
! Database&lt;br /&gt;
! Total Files Size&lt;br /&gt;
! Arthor Install Location&lt;br /&gt;
! Round Table Data Directory&lt;br /&gt;
!Active&lt;br /&gt;
|-&lt;br /&gt;
| samekh&lt;br /&gt;
| 10.20.0.41:8008&lt;br /&gt;
| Enamine_REAL_Q2-2020-All-13B (26 slices)&lt;br /&gt;
| 4.5TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| nun&lt;br /&gt;
| 10.20.0.40:8008&lt;br /&gt;
| Enamine_REAL_Space_June_2020_S41B (aa-ae, 5 slices), Enamine_REAL_Space_June_2020_M41B (af-am, 8 slices), zinc22_2d (H04~H25, 22 slices)&lt;br /&gt;
| 5.6TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| n-1-17&lt;br /&gt;
| 10.20.1.17:8008&lt;br /&gt;
| Enamine_REAL_Space_June_2020_M41B (aa-ae, 5 slices), zinc22_2d (H25~H29, 4 slices)&lt;br /&gt;
| 3.7TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| n-5-32&lt;br /&gt;
| 10.20.5.32:8008&lt;br /&gt;
| Enamine_REAL_Space_June_2020_M41B (an~az, 13 slices), zinc22_2d (H30, 1 slice)&lt;br /&gt;
| 5.6TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| n-5-33&lt;br /&gt;
| 10.20.5.33:8008&lt;br /&gt;
| Enamine_REAL_Space_June_2020_M41B (ba-bl, 12 slices)&lt;br /&gt;
| 5.3TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Arthor Local 8081 (Datasets all local to samekh/nun)===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! CentOS 7 Machine&lt;br /&gt;
! Port&lt;br /&gt;
! Database&lt;br /&gt;
! Total File Size&lt;br /&gt;
! Arthor Install Location&lt;br /&gt;
! Round Table Data Directory&lt;br /&gt;
! Active&lt;br /&gt;
|-&lt;br /&gt;
| samekh&lt;br /&gt;
| 10.20.0.41:8081&lt;br /&gt;
| Enamine_REAL_Q2-2020-All-13B (26 slices)&lt;br /&gt;
| 4.5TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_local_8081/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| nun&lt;br /&gt;
| 10.20.0.40:8081&lt;br /&gt;
| Enamine_REAL_Space_June_2020_S41B (aa-ae, 5 slices), Enamine_REAL_Space_June_2020_M41B (aa-an, 14 slices)&lt;br /&gt;
| 4.3TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_local_8081/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Customizing Arthor Code to our needs ==&lt;br /&gt;
If the Arthor server is launched through &amp;quot;java -jar /opt/nextmove/arthor/arthor-3.3-centos7/java/arthor.jar --httpPort=&amp;lt;port&amp;gt;&amp;quot;, find the directory where that command was executed. Once found, run &#039;&#039;&#039;ls -a&#039;&#039;&#039;; there should be a hidden directory called .extract.&lt;br /&gt;
=== Change Arthor Download Size (Hardcoded) ===&lt;br /&gt;
 vim .extract/webapps/ROOT/WEB-INF/static/js/index.js&lt;br /&gt;
 ?#arthor_sdf_link //search this&lt;br /&gt;
 *Look for 0!==arguments[0]?arguments[0]:&amp;lt;number&amp;gt;&lt;br /&gt;
 *Change the number to the desired amount (a scripted version of this patch is sketched below)&lt;br /&gt;
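&lt;br /&gt;
For repeatable deployments, the same edit can be scripted. This is a hedged sketch: it assumes the default is written as a plain integer, and the replacement value 50000 is only an example.&lt;br /&gt;
&lt;br /&gt;
   import re&lt;br /&gt;
   &lt;br /&gt;
   path = &amp;quot;.extract/webapps/ROOT/WEB-INF/static/js/index.js&amp;quot;&lt;br /&gt;
   with open(path) as fh:&lt;br /&gt;
      js = fh.read()&lt;br /&gt;
   &lt;br /&gt;
   # Replace the hardcoded literal after the minified default-argument ternary&lt;br /&gt;
   js = re.sub(r&amp;quot;(0!==arguments\[0\]\?arguments\[0\]:)\d+&amp;quot;, r&amp;quot;\g&amp;lt;1&amp;gt;50000&amp;quot;, js)&lt;br /&gt;
   &lt;br /&gt;
   with open(path, &amp;quot;w&amp;quot;) as fh:&lt;br /&gt;
      fh.write(js)&lt;br /&gt;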
=== Take out Similarity Button ===&lt;br /&gt;
 vim .extract/webapps/ROOT/WEB-INF/static/index.html&lt;br /&gt;
 ?Similarity //search this&lt;br /&gt;
 *Comment out this line &#039;&#039;&#039;&amp;lt; li value=&amp;quot;Similarity&amp;quot; onclick=&amp;quot;setSearchType(this)&amp;quot; class=&amp;quot;first&amp;quot;&amp;gt; Similarity &amp;lt;/li &amp;gt;&#039;&#039;&#039; //spaces added at the beginning and end to prevent the wiki from rendering it&lt;br /&gt;
=== Hyperlink to zinc20 ===&lt;br /&gt;
 vim .extract/webapps/ROOT/WEB-INF/static/js/index.js&lt;br /&gt;
 ?table_name //search this&lt;br /&gt;
 *Find this line &amp;quot;&amp;lt; b&amp;gt;&amp;quot; + h + &amp;quot;&amp;lt; /b&amp;gt;&amp;quot;&lt;br /&gt;
 *Replace it with &#039;&#039;&#039;&amp;quot;&amp;lt; b&amp;gt;&amp;lt;a target=&#039;_blank&#039; href=&#039;https://zinc20.docking.org/substances/&amp;quot;+h+&amp;quot;&#039;&amp;gt;&amp;quot; + h + &amp;quot;&amp;lt;/a&amp;gt;&amp;lt;/b &amp;gt;&amp;quot;&#039;&#039;&#039; //spaces added at the beginning and end to prevent the wiki from rendering it&lt;/div&gt;</summary>
		<author><name>Jgutie11</name></author>
	</entry>
	<entry>
		<id>http://wiki.docking.org/index.php?title=Arthor_Documentation_for_Future_Developer&amp;diff=13182</id>
		<title>Arthor Documentation for Future Developer</title>
		<link rel="alternate" type="text/html" href="http://wiki.docking.org/index.php?title=Arthor_Documentation_for_Future_Developer&amp;diff=13182"/>
		<updated>2021-01-07T01:04:02Z</updated>

		<summary type="html">&lt;p&gt;Jgutie11: /* Configuration Details */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Written by Jennifer Young on December 16, 2019. Last edited January 05, 2021&lt;br /&gt;
&lt;br /&gt;
==Install and Set Up on Tomcat (Method 1)==&lt;br /&gt;
Arthor runs on n-1-136, which runs CentOS Linux release 7.7.1908 (Core).  You can check the version of CentOS with the following command:&lt;br /&gt;
     cat /etc/centos-release&lt;br /&gt;
&lt;br /&gt;
Check your current version of Java with the following command:&lt;br /&gt;
    java -version&lt;br /&gt;
&lt;br /&gt;
On n-1-136 we are running openjdk version &amp;quot;1.8.0_222&amp;quot;, OpenJDK Runtime Environment (build 1.8.0_222-b10), and OpenJDK 64-Bit Server VM (build 25.222-b10, mixed mode).&lt;br /&gt;
If Java is not installed, install it using yum.&lt;br /&gt;
&lt;br /&gt;
==Installing Tomcat on our cluster==&lt;br /&gt;
See this wiki page for more detailed information about installing Tomcat on our cluster:&lt;br /&gt;
http://wiki.docking.org/index.php/Tomcat_Installation&lt;br /&gt;
&lt;br /&gt;
==Open port for Arthor==&lt;br /&gt;
In order for Arthor to be usable in the browser, the port you wish to run it on must be opened.&lt;br /&gt;
https://www.thegeekdiary.com/how-to-open-a-ports-in-centos-rhel-7/&lt;br /&gt;
&lt;br /&gt;
===Step 1: Check Port Status===&lt;br /&gt;
Check that the port is not already open and that Apache is not listening on it.&lt;br /&gt;
    netstat -na | grep &amp;lt;port number you are checking&amp;gt;&lt;br /&gt;
&lt;br /&gt;
    lsof -i -P |grep http&lt;br /&gt;
&lt;br /&gt;
===Step 2: Check Port Status in IP Tables===&lt;br /&gt;
    iptables-save | grep &amp;lt;port number you are checking&amp;gt;&lt;br /&gt;
&lt;br /&gt;
I skipped Step 3 from the guide, because there was a lot of information in the /etc/services file and I didn&#039;t want to edit it and break something.&lt;br /&gt;
&lt;br /&gt;
===Step 4: Open Firewall Ports===&lt;br /&gt;
I did not include the zone=public section because the stand-alone servers are usually used for private instances of Arthor and SmallWorld.&lt;br /&gt;
Run as root.&lt;br /&gt;
    firewall-cmd --add-port=&amp;lt;port number you are adding&amp;gt;/tcp --permanent&lt;br /&gt;
&lt;br /&gt;
You need to reload the firewall after a change is made.&lt;br /&gt;
    firewall-cmd --reload&lt;br /&gt;
&lt;br /&gt;
===Step 5: Check that port is working===&lt;br /&gt;
To check that the port is active, run:&lt;br /&gt;
    iptables -nL&lt;br /&gt;
&lt;br /&gt;
You should see something along the lines of: &lt;br /&gt;
    ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:&amp;lt;port number you&#039;re adding&amp;gt; ctstate NEW,UNTRACKED&lt;br /&gt;
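&lt;br /&gt;
As a quick cross-check from another machine, a short Python snippet can attempt a TCP connection to the opened port. The host name and port below are placeholders.&lt;br /&gt;
&lt;br /&gt;
   import socket&lt;br /&gt;
   &lt;br /&gt;
   host, port = &amp;quot;n-1-136&amp;quot;, 8080   # placeholders: your server and the port you opened&lt;br /&gt;
   &lt;br /&gt;
   with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:&lt;br /&gt;
      s.settimeout(5)&lt;br /&gt;
      # connect_ex returns 0 on success instead of raising an exception&lt;br /&gt;
      result = s.connect_ex((host, port))&lt;br /&gt;
   &lt;br /&gt;
   print(&amp;quot;port open&amp;quot; if result == 0 else &amp;quot;port closed or filtered (errno %d)&amp;quot; % result)&lt;br /&gt;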
&lt;br /&gt;
==How to run a standalone Arthor instance==&lt;br /&gt;
&lt;br /&gt;
===Step 1: Use or start a bash shell===&lt;br /&gt;
You can check your default shell using&lt;br /&gt;
    echo $SHELL&lt;br /&gt;
&lt;br /&gt;
If your default shell is csh, use &lt;br /&gt;
    bash&lt;br /&gt;
to start a new bash shell in the current terminal window.  Note that echo $SHELL will show you your default shell regardless of the current shell.&lt;br /&gt;
&lt;br /&gt;
===Step 2: Set your environment variables===&lt;br /&gt;
    export ARTHOR_DIR=/opt/nextmove/arthor/arthor-3.3-centos7&lt;br /&gt;
    export PATH=$ARTHOR_DIR/bin/:$PATH&lt;br /&gt;
&lt;br /&gt;
Make sure the ARTHOR_DIR variable is set to the directory for the latest version of Arthor or whichever version you would like to test.&lt;br /&gt;
The PATH environment variable is needed if you wish to use the Arthor tools from the command line.&lt;br /&gt;
&lt;br /&gt;
===Step 3: Run arthor.jar===&lt;br /&gt;
    java -jar /opt/nextmove/arthor/arthor-3.3-centos7/java/arthor.jar --httpPort &amp;lt;your httpPort&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Setting environment variables for an Arthor Server==&lt;br /&gt;
Set the environment variables in the setenv.sh file.  Note: Be sure to edit the file in the directory corresponding to the latest version of Tomcat.  As of December 2019, we are running 9.0.27 on n-1-136.&lt;br /&gt;
&lt;br /&gt;
   vim  /opt/tomcat/apache-tomcat-9.0.27/bin/setenv.sh&lt;br /&gt;
&lt;br /&gt;
Add the line below to the setenv.sh file above, or substitute the path to wherever you currently store the arthor.cfg file:&lt;br /&gt;
   export ARTHOR_CONFIG=/usr/local/tomcat/arthor.cfg&lt;br /&gt;
&lt;br /&gt;
Here is an example of the arthor.cfg file:&lt;br /&gt;
   BinDir=/opt/nextmove/arthor/arthor-3.3-centos7/bin&lt;br /&gt;
   DataDir=/local2/arthor_local_8081/&lt;br /&gt;
   MaxConcurrentSearches=6&lt;br /&gt;
   MaxThreadsPerSearch=8&lt;br /&gt;
   AutomaticIndex=false&lt;br /&gt;
   AsyncHitCountMax=1000000&lt;br /&gt;
   Resolver=https://sw.docking.org/util/smi2mol?smi=%s&lt;br /&gt;
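&lt;br /&gt;
Since a typo in these paths may only surface at search time, a small sanity check can parse the key=value lines and confirm the directories exist. A minimal Python sketch, assuming the cfg lives at the path exported above:&lt;br /&gt;
&lt;br /&gt;
   import os&lt;br /&gt;
   &lt;br /&gt;
   cfg = {}&lt;br /&gt;
   with open(&amp;quot;/usr/local/tomcat/arthor.cfg&amp;quot;) as fh:&lt;br /&gt;
      for line in fh:&lt;br /&gt;
         line = line.strip()&lt;br /&gt;
         # Skip blanks and section headers such as [RoundTable]&lt;br /&gt;
         if line and not line.startswith(&amp;quot;[&amp;quot;) and &amp;quot;=&amp;quot; in line:&lt;br /&gt;
            key, value = line.split(&amp;quot;=&amp;quot;, 1)&lt;br /&gt;
            cfg[key] = value&lt;br /&gt;
   &lt;br /&gt;
   for key in (&amp;quot;BinDir&amp;quot;, &amp;quot;DataDir&amp;quot;):&lt;br /&gt;
      if key in cfg:&lt;br /&gt;
         status = &amp;quot;exists&amp;quot; if os.path.isdir(cfg[key]) else &amp;quot;MISSING&amp;quot;&lt;br /&gt;
         print(key, cfg[key], status)&lt;br /&gt;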
&lt;br /&gt;
=== Configuration Details ===&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;BinDir&#039;&#039;&#039;: The location of the Arthor command-line binaries.  These are used to generate the Arthor index files and to perform searches directly on n-1-136, for example using atdbgrep for substructure search.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;DataDir&#039;&#039;&#039;: The directory where the Arthor data files live; index files are created in and loaded from this location.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;MaxConcurrentSearches&#039;&#039;&#039;: Controls the maximum number of searches that can run concurrently by setting the database pool size. When switching between a large number of databases it can be useful to have a larger pool size; the only trade-off is keeping more file pointers open.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;MaxThreadsPerSearch&#039;&#039;&#039;: The number of threads to use for both ATDB and ATFP searches.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;AutomaticIndex&#039;&#039;&#039;: Set this to false if you don&#039;t want new SMILES files added to the data directory to be indexed automatically.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;AsyncHitCountMax&#039;&#039;&#039;: The upper-bound for the number of hits to retrieve in background searches.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Resolver&#039;&#039;&#039;: Uses the SmallWorld API to let the input box accept a SMILES string and draw it automatically in the sketcher.&lt;br /&gt;
&lt;br /&gt;
Check Arthor manual for more configuration options.&lt;br /&gt;
&lt;br /&gt;
==Background==&lt;br /&gt;
Before working with Arthor, it is recommended that you familiarize yourself with the Arthor documentation. Some useful pages to look at include 3-5, 22-25 and 33-39. Of course, reading everything would be best!&lt;br /&gt;
&lt;br /&gt;
==Checking Disk Space Usage==&lt;br /&gt;
Before building Arthor indexes, it&#039;s always a good idea to check what percentage of the disk is in use. Be cautious about how much space you have left, and keep checking while building indexes to make sure you don&#039;t run out. To check, run the following command:&lt;br /&gt;
&lt;br /&gt;
   df -h /&amp;lt;directory with disk&amp;gt;&lt;br /&gt;
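&lt;br /&gt;
The same check can be done from inside a build script; here is a small sketch using the Python standard library (the path is a placeholder).&lt;br /&gt;
&lt;br /&gt;
   import shutil&lt;br /&gt;
   &lt;br /&gt;
   # Placeholder path: point this at the disk the indexes are written to&lt;br /&gt;
   total, used, free = shutil.disk_usage(&amp;quot;/local2&amp;quot;)&lt;br /&gt;
   print(&amp;quot;used: %.1f%%, free: %.2f TB&amp;quot; % (100 * used / total, free / 1e12))&lt;br /&gt;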
&lt;br /&gt;
==Building Large Databases==&lt;br /&gt;
At the moment, we are building databases of about 500M molecules by merging SMILES files. There are multiple ways to create such large databases; one is to merge files sharing the same H?? prefix and stop once the database exceeds 500M molecules (or whatever upper bound you want to use). Here is some Python code that performs this merging. Essentially, the program takes all of the .smi files within an input directory, sorts them lexicographically, and merges them together in order until the size exceeds 500M molecules.&lt;br /&gt;
   &lt;br /&gt;
Feel free to modify it if you think a better method exists.&lt;br /&gt;
   &lt;br /&gt;
   import subprocess&lt;br /&gt;
   &lt;br /&gt;
   from os import listdir&lt;br /&gt;
   from os.path import isfile, join&lt;br /&gt;
   &lt;br /&gt;
   mypath = &amp;quot;&amp;lt;Path to directory holding .smi files&amp;gt;&amp;quot;&lt;br /&gt;
   onlyfiles = sorted(f for f in listdir(mypath) if isfile(join(mypath, f)))&lt;br /&gt;
   &lt;br /&gt;
   cur_mols = 0&lt;br /&gt;
   lower_bound = 500000000   # close out a batch once it reaches this many molecules&lt;br /&gt;
   upper_bound = 600000000   # never let a single merged file grow past this many molecules&lt;br /&gt;
   files_to_merge = []&lt;br /&gt;
   &lt;br /&gt;
   def merge_files(f_t_m):&lt;br /&gt;
      # Guard: an empty batch has nothing to merge&lt;br /&gt;
      if not f_t_m:&lt;br /&gt;
         return&lt;br /&gt;
      # Name the merged file after its first and last members, e.g. Haa_Hab.smi&lt;br /&gt;
      first = f_t_m[0].split(&amp;quot;.&amp;quot;)[0]&lt;br /&gt;
      last = f_t_m[-1].split(&amp;quot;.&amp;quot;)[0]&lt;br /&gt;
      file_name_merge = first + &amp;quot;_&amp;quot; + last + &amp;quot;.smi&amp;quot;&lt;br /&gt;
      print(&amp;quot;File being created: &amp;quot; + file_name_merge)&lt;br /&gt;
   &lt;br /&gt;
      # Append each member to the merged file, in sorted order&lt;br /&gt;
      for file in f_t_m:&lt;br /&gt;
         process = subprocess.Popen(&amp;quot;cat &amp;quot; + join(mypath, file) + &amp;quot; &amp;gt;&amp;gt; &amp;quot; + file_name_merge, shell=True)&lt;br /&gt;
         process.wait()&lt;br /&gt;
   &lt;br /&gt;
   for file in onlyfiles:&lt;br /&gt;
      if file.split(&amp;quot;.&amp;quot;)[-1] == &amp;quot;smi&amp;quot;:&lt;br /&gt;
         print(&amp;quot;Working with &amp;quot; + file)&lt;br /&gt;
         # A .smi file holds one molecule per line, so counting lines counts molecules&lt;br /&gt;
         with open(join(mypath, file)) as fh:&lt;br /&gt;
            mol = sum(1 for line in fh)&lt;br /&gt;
         print(file, mol, cur_mols)&lt;br /&gt;
   &lt;br /&gt;
         if cur_mols + mol &amp;gt; lower_bound:&lt;br /&gt;
            if cur_mols + mol &amp;lt; upper_bound:&lt;br /&gt;
               # This file lands the batch between the bounds: merge it in&lt;br /&gt;
               files_to_merge.append(file)&lt;br /&gt;
               merge_files(files_to_merge)&lt;br /&gt;
            else:&lt;br /&gt;
               # This file would overshoot the bound: merge the batch, then merge it alone&lt;br /&gt;
               merge_files(files_to_merge)&lt;br /&gt;
               merge_files([file])&lt;br /&gt;
            cur_mols = 0&lt;br /&gt;
            files_to_merge.clear()&lt;br /&gt;
         else:&lt;br /&gt;
            cur_mols += mol&lt;br /&gt;
            files_to_merge.append(file)&lt;br /&gt;
   &lt;br /&gt;
   # Flush any leftover files into a final (smaller) database&lt;br /&gt;
   if files_to_merge:&lt;br /&gt;
      merge_files(files_to_merge)&lt;br /&gt;
&lt;br /&gt;
==Building Arthor Indexes==&lt;br /&gt;
Once you&#039;ve merged the .smi files together, it&#039;s time to start building the databases themselves. To do this we use the command:&lt;br /&gt;
&lt;br /&gt;
   smi2atdb -j 0 -p &amp;lt;The .smi file&amp;gt; &amp;lt;The .atdb&amp;gt; &lt;br /&gt;
&lt;br /&gt;
The flag &amp;quot;-j 0&amp;quot; enables parallel generation, using all available processors to build the .atdb file. The &amp;quot;-p&amp;quot; flag stores the offset positions in the ATDB file. Since we&#039;re building indexes for the Web Application, you must use the &amp;quot;-p&amp;quot; flag when building indexes. Please note that the .atdb file should have the same base name as the .smi file; that way, the Web Application knows to use the two files together and can correctly display the required images. Refer to pages 33-34 in the Arthor documentation for more information.&lt;br /&gt;
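&lt;br /&gt;
For example, to index a merged slice named Haa_Hab.smi (a hypothetical file name in the style produced by the merge script above):&lt;br /&gt;
&lt;br /&gt;
   smi2atdb -j 0 -p Haa_Hab.smi Haa_Hab.atdb&lt;br /&gt;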
&lt;br /&gt;
If there are many large .smi files and you do not want to build each .atdb file manually, you can use this Python script, which takes all of the .smi files in the given directory and converts them to .atdb files. Make sure to set mypath to the directory containing the .smi files. You can change the variable &amp;quot;create_fp&amp;quot; to False if you don&#039;t want to create .atdb.fp files (refer to page 9 in the Arthor documentation).&lt;br /&gt;
&lt;br /&gt;
   import subprocess&lt;br /&gt;
   &lt;br /&gt;
   from os import listdir&lt;br /&gt;
   from os.path import isfile, join&lt;br /&gt;
   &lt;br /&gt;
   mypath = &amp;quot;&amp;lt;Path containing the .smi files&amp;gt;&amp;quot;&lt;br /&gt;
   onlyfiles = [f for f in listdir(mypath) if isfile(join(mypath, f))]&lt;br /&gt;
   &lt;br /&gt;
   # Set to False if you do not need .atdb.fp fingerprint files (manual, page 9)&lt;br /&gt;
   create_fp = True&lt;br /&gt;
   &lt;br /&gt;
   for file in onlyfiles:&lt;br /&gt;
      base = file.split(&amp;quot;.&amp;quot;)[0]&lt;br /&gt;
   &lt;br /&gt;
      if file.split(&amp;quot;.&amp;quot;)[-1] == &amp;quot;smi&amp;quot;:&lt;br /&gt;
         # Build the substructure index: -j 0 uses all processors, -p stores offsets&lt;br /&gt;
         process = subprocess.Popen(&amp;quot;/nfs/ex9/work/xyz/psql/arthor-3.3-centos7/bin/smi2atdb -j 0 -p {0} {1}.atdb&amp;quot;.format(join(mypath, file), base), shell=True)&lt;br /&gt;
         process.wait()&lt;br /&gt;
   &lt;br /&gt;
         print(&amp;quot;SUCCESS! {0}.atdb file was created!&amp;quot;.format(base))&lt;br /&gt;
   &lt;br /&gt;
         if create_fp:&lt;br /&gt;
            # Build the fingerprint file used for similarity searches&lt;br /&gt;
            process = subprocess.Popen(&amp;quot;/nfs/ex9/work/xyz/psql/arthor-3.3-centos7/bin/atdb2fp -j 0 {0}.atdb&amp;quot;.format(base), shell=True)&lt;br /&gt;
            process.wait()&lt;br /&gt;
   &lt;br /&gt;
            print(&amp;quot;SUCCESS! {0}.atdb.fp file was created!&amp;quot;.format(base))&lt;br /&gt;
&lt;br /&gt;
==Uploading Indexes to the Web Application==&lt;br /&gt;
One can upload indexes to the Web Application by changing the &amp;quot;DataDir&amp;quot; variable in the arthor.cfg file to the directory holding the .atdb files. This is already set up on n-1-136 and n-5-34.&lt;br /&gt;
   &lt;br /&gt;
==Virtual Memory==&lt;br /&gt;
In addition to modifying the arthor.cfg file, virtual memory can also be used to make queries faster. More information can be found on pages 10-16 of the Arthor documentation.&lt;br /&gt;
&lt;br /&gt;
==Setting up Round Table==&lt;br /&gt;
This is a new feature in Arthor 3.0 and is currently in beta (January 2020). See section 2.4 in the manual.&lt;br /&gt;
As explained in the manual, &amp;quot;Round Table allows you to serve and split chemical searches across multiple host machines.  The implementation provides a lightweight proxy that forwards requests to other Arthor host servers that do the actual search.  Communication is done using the existing Web APIs.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Since Arthor requires CentOS 7, as of January 2020 we have 6 servers that are capable of running Arthor with Round Table.&lt;br /&gt;
&lt;br /&gt;
===Setting up Host Server===&lt;br /&gt;
If we want to add machines to the Round Table, for example &#039;nun&#039; and &#039;samekh&#039;, we need to edit their arthor.cfg files so that when our local machine forwards queries, these secondary servers know to perform the searches they are given.&lt;br /&gt;
&lt;br /&gt;
   $ cat arthor.cfg&lt;br /&gt;
   MaxThreadsPerSearch=4 &lt;br /&gt;
   AutomaticIndex=false &lt;br /&gt;
   DataDir=&amp;lt;Directory where the SMILES files are located&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We then run the jar server, on any available port, on each of these host machines containing data.&lt;br /&gt;
&lt;br /&gt;
   java -jar /nfs/ex9/work/xyz/psql/arthor-3.3-centos7/java/arthor.jar --httpPort &amp;lt;port&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For our local machine, the arthor.cfg file will look different.&lt;br /&gt;
&lt;br /&gt;
   $ cat arthor.cfg&lt;br /&gt;
   [RoundTable] &lt;br /&gt;
   RemoteClient=http://skynet:&amp;lt;port number where jar server is running&amp;gt;/ &lt;br /&gt;
   RemoteClient=http://hal:&amp;lt;port number where jar server is running&amp;gt;/&lt;br /&gt;
&lt;br /&gt;
Please refer to Section 2 of the RoundTable documentation (pages 6-8) for further configuration details.&lt;br /&gt;
&lt;br /&gt;
Then run the following command on n-1-136:&lt;br /&gt;
 &lt;br /&gt;
   java -jar /nfs/ex9/work/xyz/psql/arthor-3.3-centos7/java/arthor.jar --httpPort &amp;lt;port&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Public Arthor===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! CentOS 7 Machine&lt;br /&gt;
! Port&lt;br /&gt;
! Total File Size&lt;br /&gt;
! Arthor Install Location&lt;br /&gt;
! Round Table Data Directory&lt;br /&gt;
! Active&lt;br /&gt;
|-&lt;br /&gt;
| samekh&lt;br /&gt;
| 10.20.0.41:8000&lt;br /&gt;
| 2.4TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/public_arthor/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| nun&lt;br /&gt;
| 10.20.0.40:8000&lt;br /&gt;
| 2.4TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/public_arthor/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| n-9-22&lt;br /&gt;
| 10.20.9.22:8000&lt;br /&gt;
| 2.4TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /export/db4/public_arthor/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Arthor Round Table Head===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! CentOS 7 Machine&lt;br /&gt;
! Port&lt;br /&gt;
! Database&lt;br /&gt;
! Arthor Install Location&lt;br /&gt;
! Round Table Data Directory&lt;br /&gt;
! Active&lt;br /&gt;
|-&lt;br /&gt;
| samekh&lt;br /&gt;
| 10.20.0.41:8080&lt;br /&gt;
| Enamine_REAL_Q2-2020-All-13B&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| nun&lt;br /&gt;
| 10.20.0.40:8080&lt;br /&gt;
| Enamine_REAL_Q2-2020-All-41B&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Arthor Round Table Nodes===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! CentOS 7 Machine&lt;br /&gt;
! Port&lt;br /&gt;
! Database&lt;br /&gt;
! Total File Size&lt;br /&gt;
! Arthor Install Location&lt;br /&gt;
! Round Table Data Directory&lt;br /&gt;
! Active&lt;br /&gt;
|-&lt;br /&gt;
| samekh&lt;br /&gt;
| 10.20.0.41:8008&lt;br /&gt;
| Enamine_REAL_Q2-2020-All-13B (26 slices)&lt;br /&gt;
| 4.5TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| nun&lt;br /&gt;
| 10.20.0.40:8008&lt;br /&gt;
| Enamine_REAL_Space_June_2020_S41B (aa-ae, 5 slices), Enamine_REAL_Space_June_2020_M41B (af-am, 8 slices), zinc22_2d (H04~H25, 22 slices)&lt;br /&gt;
| 5.6TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| n-1-17&lt;br /&gt;
| 10.20.1.17:8008&lt;br /&gt;
| Enamine_REAL_Space_June_2020_M41B (aa-ae, 5 slices), zinc22_2d (H25~H29, 4 slices)&lt;br /&gt;
| 3.7TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| n-5-32&lt;br /&gt;
| 10.20.5.32:8008&lt;br /&gt;
| Enamine_REAL_Space_June_2020_M41B (an~az, 13 slices), zinc22_2d (H30, 1 slice)&lt;br /&gt;
| 5.6TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| n-5-33&lt;br /&gt;
| 10.20.5.33:8008&lt;br /&gt;
| Enamine_REAL_Space_June_2020_M41B (ba-bl, 12 slices)&lt;br /&gt;
| 5.3TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Arthor Local 8081 (Datasets all local to samekh/nun)===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! CentOS 7 Machine&lt;br /&gt;
! Port&lt;br /&gt;
! Database&lt;br /&gt;
! Total File Size&lt;br /&gt;
! Arthor Install Location&lt;br /&gt;
! Round Table Data Directory&lt;br /&gt;
! Active&lt;br /&gt;
|-&lt;br /&gt;
| samekh&lt;br /&gt;
| 10.20.0.41:8081&lt;br /&gt;
| Enamine_REAL_Q2-2020-All-13B (26 slices)&lt;br /&gt;
| 4.5TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_local_8081/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| nun&lt;br /&gt;
| 10.20.0.40:8081&lt;br /&gt;
| Enamine_REAL_Space_June_2020_S41B (aa-ae, 5 slices), Enamine_REAL_Space_June_2020_M41B (aa-an, 14 slices)&lt;br /&gt;
| 4.3TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_local_8081/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Jgutie11</name></author>
	</entry>
	<entry>
		<id>http://wiki.docking.org/index.php?title=Arthor_Documentation_for_Future_Developer&amp;diff=13180</id>
		<title>Arthor Documentation for Future Developer</title>
		<link rel="alternate" type="text/html" href="http://wiki.docking.org/index.php?title=Arthor_Documentation_for_Future_Developer&amp;diff=13180"/>
		<updated>2021-01-07T01:02:45Z</updated>

		<summary type="html">&lt;p&gt;Jgutie11: /* Setting environment variables for an Arthor Server */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Written by Jennifer Young on December 16, 2019. Last edited January 05, 2021&lt;br /&gt;
&lt;br /&gt;
==Install and Set Up on TomCat (Method 1)==&lt;br /&gt;
Arthor ran on n-1-136, which runs CentOS Linux release 7.7.1908 (Core).  You can check the version of CentOS with the following command&lt;br /&gt;
     cat /etc/centos-release&lt;br /&gt;
&lt;br /&gt;
Check your current version of Java with the following command:&lt;br /&gt;
    java -version&lt;br /&gt;
&lt;br /&gt;
On n-1-136 we are running openjdk version &amp;quot;1.8.0_222&amp;quot;, OpenJDK Runtime Environment (build 1.8.0_222-b10), and OpenJDK 64-Bit Server VM (build 25.222-b10, mixed mode)&lt;br /&gt;
If Java is not installed, install it using yum&lt;br /&gt;
&lt;br /&gt;
==See this wiki page for more detailed information about installing Tomcat on our cluster==&lt;br /&gt;
http://wiki.docking.org/index.php/Tomcat_Installation&lt;br /&gt;
&lt;br /&gt;
==Open port for Arthor==&lt;br /&gt;
In order for Arthor to be usable in the browser, the port you wish to run it on must be opened.&lt;br /&gt;
https://www.thegeekdiary.com/how-to-open-a-ports-in-centos-rhel-7/&lt;br /&gt;
&lt;br /&gt;
===Step 1: Check Port Status===&lt;br /&gt;
Check that the port is not open and that Apache is not showing that port. &lt;br /&gt;
    netstat -na | grep &amp;lt;port number you are checking&amp;gt;&lt;br /&gt;
&lt;br /&gt;
    lsof -i -P |grep http&lt;br /&gt;
&lt;br /&gt;
===Step 2: Check Port Status in IP Tables===&lt;br /&gt;
    iptables-save | grep &amp;lt;port number you are checking&amp;gt;&lt;br /&gt;
&lt;br /&gt;
I skipped Step 3 from the guide, because there was a lot of information in the /etc/services file and I didn&#039;t want to edit it and break something.&lt;br /&gt;
&lt;br /&gt;
===Step 4: Open Firewall Ports===&lt;br /&gt;
I did not include the zone=public section because the stand-alone servers are usually used for private instances of Arthor and SmallWorld.&lt;br /&gt;
Run as root.&lt;br /&gt;
    firewall-cmd --add-port=&amp;lt;port number you are adding&amp;gt;/tcp --permanent&lt;br /&gt;
&lt;br /&gt;
You need to reload the firewall after a change is made.&lt;br /&gt;
    firewall-cmd --reload&lt;br /&gt;
&lt;br /&gt;
===Step 5: Check that port is working===&lt;br /&gt;
To check that the port is active, run.&lt;br /&gt;
    iptables -nL&lt;br /&gt;
&lt;br /&gt;
You should see something along the lines of: &lt;br /&gt;
    ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:&amp;lt;port number you&#039;re adding&amp;gt; ctstate NEW,UNTRACKED&lt;br /&gt;
&lt;br /&gt;
==How to run standalone Arthor instance==&lt;br /&gt;
&lt;br /&gt;
===Step 1: Use or start a bash shell===&lt;br /&gt;
You can check your default shell using&lt;br /&gt;
    echo $SHELL&lt;br /&gt;
&lt;br /&gt;
If your default shell is csh, use &lt;br /&gt;
    bash&lt;br /&gt;
to start a new bash shell in the current terminal window.  Note that echo $SHELL will show you your default shell regardless of the current shell.&lt;br /&gt;
&lt;br /&gt;
===Step 2: Set your environment variables===&lt;br /&gt;
    export ARTHOR_DIR=/opt/nextmove/arthor/arthor-3.3-centos7&lt;br /&gt;
    export PATH=$ARTHOR_DIR/bin/:$PATH&lt;br /&gt;
&lt;br /&gt;
Make sure the ARTHOR_DIR variable is set to the directory for the latest version of Arthor or whichever version you would like to test.&lt;br /&gt;
The PATH environment variable is needed if you wish to use the Arthor tools from the command line&lt;br /&gt;
&lt;br /&gt;
===Step 3: Run the arthor-server.jar===&lt;br /&gt;
    java -jar /opt/nextmove/arthor/arthor-3.3-centos7/java/arthor.jar --httpPort &amp;lt;your httpPort&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Setting environment variables for an Arthor Server==&lt;br /&gt;
Set the environment variables in the setenv.sh file.  Note: Be sure to edit the file in the directory corresponding to the latest version of TomCat.  As of December 2019, we are running 9.0.27 on n-1-136.&lt;br /&gt;
&lt;br /&gt;
   vim  /opt/tomcat/apache-tomcat-9.0.27/bin/setenv.sh&lt;br /&gt;
&lt;br /&gt;
Add the line below to the setenv.sh file above, or substitute the path to wherever you currently store the arthor.cfg file&lt;br /&gt;
   export ARTHOR_CONFIG=/usr/local/tomcat/arthor.cfg&lt;br /&gt;
&lt;br /&gt;
Here is an example of the arthor.cfg file:&lt;br /&gt;
   BinDir=/opt/nextmove/arthor/arthor-3.3-centos7/bin&lt;br /&gt;
   DataDir=/local2/arthor_local_8081/&lt;br /&gt;
   MaxConcurrentSearches=6&lt;br /&gt;
   MaxThreadsPerSearch=8&lt;br /&gt;
   AutomaticIndex=false&lt;br /&gt;
   AsyncHitCountMax=1000000&lt;br /&gt;
   Resolver=https://sw.docking.org/util/smi2mol?smi=%s&lt;br /&gt;
&lt;br /&gt;
=== Configuration Details ===&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;BinDir&#039;&#039;&#039;: is the location of the Arthor command line binaries.  These are used to generate the Arthor index files and to perform searches directly on n-1-136.  An example of this would be using atdbgrep for substructure search. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;DataDir&#039;&#039;&#039;: This is the directory where the Arthor data files live.  Location where the index files will be created and loaded from.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;MaxConcurrentSearches&#039;&#039;&#039;: Controls the maximum number of searches that can be run concurrently by setting the database pool size. When switching between a large number of databases it can be useful to have a larger pool size, the only trade off is keeping file pointers open.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;MaxThreadsPerSearch&#039;&#039;&#039;: The number of threads to use for both ATDB and ATFP searches&lt;br /&gt;
&lt;br /&gt;
*Set &#039;&#039;&#039;AutomaticIndex&#039;&#039;&#039; to false if you don&#039;t want new smiles files added to the data directory to be indexed automatically&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;AsyncHitCountMax&#039;&#039;&#039;: The upper-bound for the number of hits to retrieve in background searches.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Resolver&#039;&#039;&#039;: Using Smallworld API, allows input box to take in a SMILE format and automatically draw on the board.&lt;br /&gt;
&lt;br /&gt;
==Background==&lt;br /&gt;
Before working with Arthor, it is recommended that you familiarize yourself with the Arthor documentation. Some useful pages to look at include 3-5, 22-25 and 33-39. Of course, reading everything would be the best!&lt;br /&gt;
&lt;br /&gt;
==Checking Disk Space Usage==&lt;br /&gt;
Before building arthor indexes, it&#039;s always a good thing to check what percent of the memory is being used. Try to be cautious with how much memory you have left, and make sure to check while building indexes to make sure that you have enough space. To check, run the following command:&lt;br /&gt;
&lt;br /&gt;
   df -h /&amp;lt;directory with disc&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Building Large Databases==&lt;br /&gt;
At the moment, we are building databases of size 500M molecules by merging smile files. There are multiple methods of trying to create large databases, one being merging based off of the same H?? prefix and stopping once the database reaches &amp;gt; 500M molecules (or whatever upperbound you want to use). Here is some python code that simulates this merging process. Essentially the program takes all of the .smi files within an input directory, sorts them lexiographically, and begins merging these .smi files together in order until the size reaches &amp;gt; 500M molecules.&lt;br /&gt;
   &lt;br /&gt;
Feel free to modify it if you think a better method exists.&lt;br /&gt;
   &lt;br /&gt;
   import subprocess&lt;br /&gt;
   import sys&lt;br /&gt;
   import os                                                                                                                                                                           &lt;br /&gt;
   &lt;br /&gt;
   from os import listdir&lt;br /&gt;
   from os.path import isfile, join&lt;br /&gt;
   &lt;br /&gt;
   mypath = &amp;quot;&amp;lt;Path to directory holding .smi files&amp;gt;&amp;quot;&lt;br /&gt;
   onlyfiles = [f for f in listdir(mypath) if isfile(join(mypath, f))]&lt;br /&gt;
   onlyfiles.sort()&lt;br /&gt;
   &lt;br /&gt;
   create_fp = True&lt;br /&gt;
   cur_mols = 0&lt;br /&gt;
   lower_bound = 500000000&lt;br /&gt;
   upper_bound = 600000000&lt;br /&gt;
   files_to_merge = []&lt;br /&gt;
   &lt;br /&gt;
   def merge_files(f_t_m):&lt;br /&gt;
      arr = f_t_m[0].split(&amp;quot;.&amp;quot;)&lt;br /&gt;
      arr2 = f_t_m[len(f_t_m) - 1].split(&amp;quot;.&amp;quot;)&lt;br /&gt;
      file_name_merge = (arr[0] + &amp;quot;_&amp;quot; + arr2[0] + &amp;quot;.smi&amp;quot;)&lt;br /&gt;
      print (&amp;quot;File being created: &amp;quot; + file_name_merge)&lt;br /&gt;
   &lt;br /&gt;
      for file in f_t_m:&lt;br /&gt;
         tmp = file.split(&amp;quot;.&amp;quot;)&lt;br /&gt;
         process = subprocess.Popen(&amp;quot;cat &amp;quot; + join(mypath, file) + &amp;quot; &amp;gt;&amp;gt; &amp;quot; + file_name_merge, shell=True)&lt;br /&gt;
         process.wait()&lt;br /&gt;
   &lt;br /&gt;
   for file in onlyfiles:&lt;br /&gt;
      arr = file.split(&amp;quot;.&amp;quot;)&lt;br /&gt;
   &lt;br /&gt;
      if (arr[len(arr) - 1] == &amp;quot;smi&amp;quot;):&lt;br /&gt;
         print(&amp;quot;Working with &amp;quot; + file)&lt;br /&gt;
         mol = sum(1 for line in open(join(mypath, file)))&lt;br /&gt;
         print(file, mol, cur_mols)&lt;br /&gt;
   &lt;br /&gt;
         if (cur_mols + mol &amp;gt; lower_bound):&lt;br /&gt;
            if (cur_mols + mol &amp;lt; upper_bound):&lt;br /&gt;
               files_to_merge.append(file)&lt;br /&gt;
               merge_files(files_to_merge)&lt;br /&gt;
               cur_mols = 0&lt;br /&gt;
               files_to_merge.clear()&lt;br /&gt;
            else:&lt;br /&gt;
               merge_files(files_to_merge)&lt;br /&gt;
               files_to_merge.clear()&lt;br /&gt;
               files_to_merge.append(file)&lt;br /&gt;
               merge_files(files_to_merge)&lt;br /&gt;
               cur_mols = 0&lt;br /&gt;
               files_to_merge.clear()&lt;br /&gt;
         else:&lt;br /&gt;
            cur_mols += mol&lt;br /&gt;
            files_to_merge.append(file)&lt;br /&gt;
   &lt;br /&gt;
   if (len(files_to_merge) != 0):&lt;br /&gt;
      merge_files(files_to_merge)&lt;br /&gt;
&lt;br /&gt;
==Building Arthor Indexes==&lt;br /&gt;
Once you&#039;ve merged the .smi files together, it&#039;s time to start building the databases themselves. To do this we use the command &lt;br /&gt;
&lt;br /&gt;
   smi2atdb -j 0 -p &amp;lt;The .smi file&amp;gt; &amp;lt;The .atdb&amp;gt; &lt;br /&gt;
&lt;br /&gt;
The flag &amp;quot;-j 0&amp;quot; enables parallel generation and utilizes all available processors to generate the .atdb file. The &amp;quot;-p&amp;quot; flag stores the offset position in the ATDB file. Since we&#039;re building indexes for the Web Application, you must use the &amp;quot;-p&amp;quot; flag when building indexes. Please note that the name of the .smi file should also be the name of the .atdb file. That way, the Web Application knows to use these files together and correctly display the required images. Refer to pages 33-34 in the Arthor documentation for more information.&lt;br /&gt;
&lt;br /&gt;
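For example, assuming a merged file named H04_H09.smi (an illustrative name), the following pair of commands builds the substructure index and then the optional fingerprint index; note that the .atdb keeps the base name of its .smi:&lt;br /&gt;
&lt;br /&gt;
   smi2atdb -j 0 -p H04_H09.smi H04_H09.atdb&lt;br /&gt;
   atdb2fp -j 0 H04_H09.atdb&lt;br /&gt;
&lt;br /&gt;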
If there are many large .smi files and you do not want to build each .atdb file manually, you can use this Python script, which takes all of the .smi files in a directory and converts them to .atdb files. Make sure to set mypath to the directory containing the .smi files. You can change the variable &amp;quot;create_fp&amp;quot; to False if you don&#039;t want to create .atdb.fp files (refer to page 9 in the Arthor documentation).&lt;br /&gt;
&lt;br /&gt;
   import subprocess&lt;br /&gt;
   import sys&lt;br /&gt;
   import os&lt;br /&gt;
   &lt;br /&gt;
   from os import listdir&lt;br /&gt;
   from os.path import isfile, join&lt;br /&gt;
   &lt;br /&gt;
   mypath = &amp;quot;&amp;lt;Path containing the .smi files&amp;gt;&amp;quot;&lt;br /&gt;
   onlyfiles = [f for f in listdir(mypath) if isfile(join(mypath, f))]&lt;br /&gt;
   &lt;br /&gt;
   create_fp = True&lt;br /&gt;
   &lt;br /&gt;
   for file in onlyfiles:&lt;br /&gt;
      arr = file.split(&amp;quot;.&amp;quot;)&lt;br /&gt;
   &lt;br /&gt;
      if (arr[len(arr) - 1] == &amp;quot;smi&amp;quot;):&lt;br /&gt;
         # -j 0: use every available processor; -p: store offsets, required by the Web Application&lt;br /&gt;
         process = subprocess.Popen(&amp;quot;/nfs/ex9/work/xyz/psql/arthor-3.3-centos7/bin/smi2atdb -j 0 -p {0} {1}.atdb&amp;quot;.format(join(mypath, file), arr[0]), shell=True)&lt;br /&gt;
         process.wait()&lt;br /&gt;
   &lt;br /&gt;
         print(&amp;quot;SUCCESS! {0}.atdb file was created!&amp;quot;.format(arr[0]))&lt;br /&gt;
   &lt;br /&gt;
         # optionally build the fingerprint index (.atdb.fp) used for similarity search&lt;br /&gt;
         if (create_fp):&lt;br /&gt;
            process = subprocess.Popen(&amp;quot;/nfs/ex9/work/xyz/psql/arthor-3.3-centos7/bin/atdb2fp -j 0 {0}.atdb&amp;quot;.format(arr[0]), shell=True)&lt;br /&gt;
            process.wait()&lt;br /&gt;
      &lt;br /&gt;
            print(&amp;quot;SUCCESS! {0}.atdb.fp file was created!&amp;quot;.format(arr[0]))&lt;br /&gt;
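&lt;br /&gt;
Note that the script writes the .atdb (and .atdb.fp) files to the current working directory, since only the base name of each .smi file is used, so it is simplest to run it from the directory that will become the Arthor data directory.&lt;br /&gt;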
&lt;br /&gt;
==Uploading Indexes to the Web Application==&lt;br /&gt;
One can upload indexes to the Web Application by changing the &amp;quot;DATADIR&amp;quot; variable in the arthor.cfg file to the directory holding the .atdb files. This is already set up on n-1-136 and n-5-34.&lt;br /&gt;
   &lt;br /&gt;
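For example, a minimal arthor.cfg pointing the server at an index directory might contain the following (an illustrative sketch; the path is one of the data directories from the tables below):&lt;br /&gt;
&lt;br /&gt;
   DATADIR=/local2/arthor_database/&lt;br /&gt;
   AutomaticIndex=false&lt;br /&gt;
&lt;br /&gt;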
==Further Arthor Optimizations==&lt;br /&gt;
The following edits can be made to the arthor.cfg file to optimize substructure search queries. More information can be found on pages 6-8 of the Arthor documentation.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;NodeAffinity NUMA&#039;&#039;&#039;: an optimization flag that pins processing to the specific CPU sets closest to where the data is located in memory. There is a small start-up cost, so it is most useful for long-running services (see Non-Uniform Memory Access (NUMA)).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;AsyncHitCountAllowed=true|false&#039;&#039;&#039; After fetching a page of results from a substructure or formula search, the server will spin off a background process to count the total number of hits. This can be resource-intensive for large databases, may not be desirable for servers under heavy load, and may not even be needed. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;AsyncHitCountMax=#&#039;&#039;&#039; The upper bound for the number of hits to count in background searches. If very generic queries are issued (e.g. benzene or methane), hundreds of millions of hits may be counted. Setting this value to anything other than zero (e.g. 10,000) will stop the background search once it exceeds this limit. Note that some pathological queries may find very few hits but still end up looking at everything. &lt;br /&gt;
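&lt;br /&gt;
In arthor.cfg this is a single line, e.g.:&lt;br /&gt;
&lt;br /&gt;
   AsyncHitCountMax=10000&lt;br /&gt;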
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;MaxConcurrentSearches=#&#039;&#039;&#039; Controls the maximum number of searches that can be run concurrently by setting the database pool size. The searches may be on the same or different databases. If a search comes in and the pool is full, it has to wait for another search to finish, which increases the request time. &lt;br /&gt;
&lt;br /&gt;
Typically, if each search uses all the processing cores on a machine, then N concurrent searches will each run at roughly 1/Nth the speed. If the request time is substantially larger than the search time, the request had to wait for resources to become available. When switching between a large number of databases it can be useful to have a larger pool size; the only trade-off is keeping file pointers open.  Default: 6 &lt;br /&gt;
&lt;br /&gt;
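As a concrete illustration: with the default pool size of 6, a seventh simultaneous query waits in the queue until a slot frees, and if each running search saturates every core, two concurrent searches each proceed at about half speed.&lt;br /&gt;
&lt;br /&gt;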
&lt;br /&gt;
&#039;&#039;&#039;Binary Fingerprint Folding&#039;&#039;&#039; Arthor uses binary circular fingerprints (ECFP4/radius=2) for similarity. When creating an ATFP index you can specify how large to make your fingerprints. Circular fingerprints are sparser than path-based fingerprints (e.g. Daylight) and so can be folded smaller without much degradation in performance. Folding can significantly reduce the footprint of a database and improve search speeds: a 256-bit fingerprint takes up 1/4 of the space of a 1024-bit one and can therefore be traversed 4x faster. &lt;br /&gt;
&lt;br /&gt;
This matters most for very large databases with billions of compounds; in such instances a minor drop in precision is likely tolerable, since ultimately all that happens is that some hits may swap places in the hit list. &lt;br /&gt;
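&lt;br /&gt;
As a rough worked example (our arithmetic, not a figure from the Arthor documentation): at one billion compounds, 256-bit fingerprints occupy about 32 GB (32 bytes each), versus about 128 GB at 1024 bits.&lt;br /&gt;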
&lt;br /&gt;
==Virtual Memory==&lt;br /&gt;
In addition to modifying the arthor.cfg file, virtual memory can also be used to make queries faster. More information can be found on pages 10-16 of the Arthor documentation.&lt;br /&gt;
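&lt;br /&gt;
One generic trick, a standard Linux page-cache technique rather than anything specific to Arthor, is to pre-warm the cache so that the memory-mapped index files are served from RAM, e.g.:&lt;br /&gt;
&lt;br /&gt;
   cat /local2/arthor_database/*.atdb &amp;gt; /dev/null&lt;br /&gt;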
&lt;br /&gt;
==Setting up Round Table==&lt;br /&gt;
This is a new feature in Arthor 3.0 and is currently beta (January 2020). See section 2.4 in the manual.&lt;br /&gt;
As explained in the manual, &amp;quot;Round Table allows you to serve and split chemical searches across multiple host machines.  The implementation provides a lightweight proxy that forwards requests to other Arthor host servers that do the actual search.  Communication is done using the existing Web APIs.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Since Arthor requires CentOS 7, as of January 2020 we have 6 servers that are capable of running Arthor with Round Table.&lt;br /&gt;
&lt;br /&gt;
===Setting up Host Server===&lt;br /&gt;
If we want to add machines to the Round Table, for example &#039;nun&#039; and &#039;samekh&#039;, we need to edit their arthor.cfg files so that when our local machine forwards a query, these secondary servers know to perform the search they are given.&lt;br /&gt;
&lt;br /&gt;
   $ cat arthor.cfg&lt;br /&gt;
   MaxThreadsPerSearch=4 &lt;br /&gt;
   AutomaticIndex=false &lt;br /&gt;
   DATADIR=&amp;lt;Directory where the SMILES files are located&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We then run the jar server, on any available port, on each of the host machines that hold data. &lt;br /&gt;
&lt;br /&gt;
   java -jar /nfs/ex9/work/xyz/psql/arthor-3.3-centos7/java/arthor.jar --httpPort &amp;lt;port&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For our local machine, the arthor.cfg file will look different: each RemoteClient line names one host server (skynet and hal below are placeholder host names).&lt;br /&gt;
&lt;br /&gt;
   $ cat arthor.cfg&lt;br /&gt;
   [RoundTable] &lt;br /&gt;
   RemoteClient=http://skynet:&amp;lt;port number where jar server is running&amp;gt;/ &lt;br /&gt;
   RemoteClient=http://hal:&amp;lt;port number where jar server is running&amp;gt;/&lt;br /&gt;
&lt;br /&gt;
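For instance, a head that splits searches across the two node servers from the &amp;quot;Arthor Round Table Nodes&amp;quot; table below might use the following (host names and port taken from that table; an illustrative sketch):&lt;br /&gt;
&lt;br /&gt;
   [RoundTable]&lt;br /&gt;
   RemoteClient=http://samekh:8008/&lt;br /&gt;
   RemoteClient=http://nun:8008/&lt;br /&gt;
&lt;br /&gt;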
Please refer to Section 2 in the RoundTable Documentation file (pages 6-8) for more useful information on configuration.&lt;br /&gt;
&lt;br /&gt;
Then run the following command on n-1-136:&lt;br /&gt;
 &lt;br /&gt;
   java -jar /nfs/ex9/work/xyz/psql/arthor-3.3-centos7/java/arthor.jar --httpPort &amp;lt;port&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Public Arthor===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! CentOS 7 Machine&lt;br /&gt;
! Port&lt;br /&gt;
! Total Files Size&lt;br /&gt;
! Arthor Install Location&lt;br /&gt;
! Round Table Data Directory&lt;br /&gt;
! Active&lt;br /&gt;
|-&lt;br /&gt;
| samekh&lt;br /&gt;
| 10.20.0.41:8000&lt;br /&gt;
| 2.4TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/public_arthor/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| nun&lt;br /&gt;
| 10.20.0.40:8000&lt;br /&gt;
| 2.4TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/public_arthor/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| n-9-22&lt;br /&gt;
| 10.20.9.22:8000&lt;br /&gt;
| 2.4TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /export/db4/public_arthor/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Arthor Round Table Head===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! CentOS 7 Machine&lt;br /&gt;
! Port&lt;br /&gt;
! Database&lt;br /&gt;
! Arthor Install Location&lt;br /&gt;
! Round Table Data Directory&lt;br /&gt;
! Active&lt;br /&gt;
|-&lt;br /&gt;
| samekh&lt;br /&gt;
| 10.20.0.41:8080&lt;br /&gt;
| Enamine_REAL_Q2-2020-All-13B&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| nun&lt;br /&gt;
| 10.20.0.40:8080&lt;br /&gt;
| Enamine_REAL_Q2-2020-All-41B&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Arthor Round Table Nodes===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! CentOS 7 Machine&lt;br /&gt;
! Port&lt;br /&gt;
! Database&lt;br /&gt;
! Total Files Size&lt;br /&gt;
! Arthor Install Location&lt;br /&gt;
! Round Table Data Directory&lt;br /&gt;
! Active&lt;br /&gt;
|-&lt;br /&gt;
| samekh&lt;br /&gt;
| 10.20.0.41:8008&lt;br /&gt;
| Enamine_REAL_Q2-2020-All-13B (26 slices)&lt;br /&gt;
| 4.5TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| nun&lt;br /&gt;
| 10.20.0.40:8008&lt;br /&gt;
| Enamine_REAL_Space_June_2020_S41B (aa-ae, 5 slices), Enamine_REAL_Space_June_2020_M41B (af-am, 8 slices), zinc22_2d (H04~H25, 22 slices)&lt;br /&gt;
| 5.6TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| n-1-17&lt;br /&gt;
| 10.20.1.17:8008&lt;br /&gt;
| Enamine_REAL_Space_June_2020_M41B (aa-ae, 5 slices), zinc22_2d (H25~H29, 4 slices)&lt;br /&gt;
| 3.7TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| n-5-32&lt;br /&gt;
| 10.20.5.32:8008&lt;br /&gt;
| Enamine_REAL_Space_June_2020_M41B (an~az, 13 slices), zinc22_2d (H30, 1 slice)&lt;br /&gt;
| 5.6TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| n-5-33&lt;br /&gt;
| 10.20.5.33:8008&lt;br /&gt;
| Enamine_REAL_Space_June_2020_M41B (ba-bl, 12 slices)&lt;br /&gt;
| 5.3TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Arthor Local 8081 (Datasets all local to samekh/nun)===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! CentOS 7 Machine&lt;br /&gt;
! Port&lt;br /&gt;
! Database&lt;br /&gt;
! Total Files Size&lt;br /&gt;
! Arthor Install Location&lt;br /&gt;
! Round Table Data Directory&lt;br /&gt;
! Active&lt;br /&gt;
|-&lt;br /&gt;
| samekh&lt;br /&gt;
| 10.20.0.41:8081&lt;br /&gt;
| Enamine_REAL_Q2-2020-All-13B (26 slices)&lt;br /&gt;
| 4.5TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_local_8081/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| nun&lt;br /&gt;
| 10.20.0.40:8081&lt;br /&gt;
| Enamine_REAL_Space_June_2020_S41B (aa-ae, 5 slices), Enamine_REAL_Space_June_2020_M41B (aa-an, 14 slices)&lt;br /&gt;
| 4.3TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_local_8081/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Jgutie11</name></author>
	</entry>
	<entry>
		<id>http://wiki.docking.org/index.php?title=Arthor_Documentation_for_Future_Developer&amp;diff=13178</id>
		<title>Arthor Documentation for Future Developer</title>
		<link rel="alternate" type="text/html" href="http://wiki.docking.org/index.php?title=Arthor_Documentation_for_Future_Developer&amp;diff=13178"/>
		<updated>2021-01-06T23:55:05Z</updated>

		<summary type="html">&lt;p&gt;Jgutie11: /* How to run standalone Arthor instance */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Written by Jennifer Young on December 16, 2019. Last edited January 05, 2021&lt;br /&gt;
&lt;br /&gt;
==Install and Set Up on TomCat (Method 1)==&lt;br /&gt;
Arthor ran on n-1-136, which runs CentOS Linux release 7.7.1908 (Core).  You can check the version of CentOS with the following command&lt;br /&gt;
     cat /etc/centos-release&lt;br /&gt;
&lt;br /&gt;
Check your current version of Java with the following command:&lt;br /&gt;
    java -version&lt;br /&gt;
&lt;br /&gt;
On n-1-136 we are running openjdk version &amp;quot;1.8.0_222&amp;quot;, OpenJDK Runtime Environment (build 1.8.0_222-b10), and OpenJDK 64-Bit Server VM (build 25.222-b10, mixed mode)&lt;br /&gt;
If Java is not installed, install it using yum&lt;br /&gt;
&lt;br /&gt;
==See this wiki page for more detailed information about installing Tomcat on our cluster==&lt;br /&gt;
http://wiki.docking.org/index.php/Tomcat_Installation&lt;br /&gt;
&lt;br /&gt;
==Open port for Arthor==&lt;br /&gt;
In order for Arthor to be usable in the browser, the port you wish to run it on must be opened.&lt;br /&gt;
https://www.thegeekdiary.com/how-to-open-a-ports-in-centos-rhel-7/&lt;br /&gt;
&lt;br /&gt;
===Step 1: Check Port Status===&lt;br /&gt;
Check that the port is not open and that Apache is not showing that port. &lt;br /&gt;
    netstat -na | grep &amp;lt;port number you are checking&amp;gt;&lt;br /&gt;
&lt;br /&gt;
    lsof -i -P |grep http&lt;br /&gt;
&lt;br /&gt;
===Step 2: Check Port Status in IP Tables===&lt;br /&gt;
    iptables-save | grep &amp;lt;port number you are checking&amp;gt;&lt;br /&gt;
&lt;br /&gt;
I skipped Step 3 from the guide, because there was a lot of information in the /etc/services file and I didn&#039;t want to edit it and break something.&lt;br /&gt;
&lt;br /&gt;
===Step 4: Open Firewall Ports===&lt;br /&gt;
I did not include the zone=public section because the stand-alone servers are usually used for private instances of Arthor and SmallWorld.&lt;br /&gt;
Run as root.&lt;br /&gt;
    firewall-cmd --add-port=&amp;lt;port number you are adding&amp;gt;/tcp --permanent&lt;br /&gt;
&lt;br /&gt;
You need to reload the firewall after a change is made.&lt;br /&gt;
    firewall-cmd --reload&lt;br /&gt;
&lt;br /&gt;
===Step 5: Check that port is working===&lt;br /&gt;
To check that the port is active, run.&lt;br /&gt;
    iptables -nL&lt;br /&gt;
&lt;br /&gt;
You should see something along the lines of: &lt;br /&gt;
    ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:&amp;lt;port number you&#039;re adding&amp;gt; ctstate NEW,UNTRACKED&lt;br /&gt;
&lt;br /&gt;
==How to run standalone Arthor instance==&lt;br /&gt;
&lt;br /&gt;
===Step 1: Use or start a bash shell===&lt;br /&gt;
You can check your default shell using&lt;br /&gt;
    echo $SHELL&lt;br /&gt;
&lt;br /&gt;
If your default shell is csh, use &lt;br /&gt;
    bash&lt;br /&gt;
to start a new bash shell in the current terminal window.  Note that echo $SHELL will show you your default shell regardless of the current shell.&lt;br /&gt;
&lt;br /&gt;
===Step 2: Set your environment variables===&lt;br /&gt;
    export ARTHOR_DIR=/opt/nextmove/arthor/arthor-3.3-centos7&lt;br /&gt;
    export PATH=$ARTHOR_DIR/bin/:$PATH&lt;br /&gt;
&lt;br /&gt;
Make sure the ARTHOR_DIR variable is set to the directory for the latest version of Arthor or whichever version you would like to test.&lt;br /&gt;
The PATH environment variable is needed if you wish to use the Arthor tools from the command line&lt;br /&gt;
&lt;br /&gt;
===Step 3: Run the arthor-server.jar===&lt;br /&gt;
    java -jar /opt/nextmove/arthor/arthor-3.3-centos7/java/arthor.jar --httpPort &amp;lt;your httpPort&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Setting environment variables for TomCat Server==&lt;br /&gt;
Set the environment variables in the setenv.sh file.  Note: Be sure to edit the file in the directory corresponding to the latest version of TomCat.  As of December 2019, we are running 9.0.27 on n-1-136.&lt;br /&gt;
&lt;br /&gt;
   vim  /opt/tomcat/apache-tomcat-9.0.27/bin/setenv.sh&lt;br /&gt;
&lt;br /&gt;
Add the line below to the setenv.sh file above, or substitute the path to wherever you currently store the arthor.cfg file&lt;br /&gt;
   export ARTHOR_CONFIG=/usr/local/tomcat/arthor.cfg&lt;br /&gt;
&lt;br /&gt;
Here is an example of the arthor.cfg file:&lt;br /&gt;
   # Arthor generated config file&lt;br /&gt;
   BINDIR=/opt/nextmove/arthor/arthor-2.1.2-centos7/bin&lt;br /&gt;
   DATADIR=/usr/local/tomcat/arthor_data&lt;br /&gt;
   STAGEDIR=/usr/local/arthor_data/stage&lt;br /&gt;
   NTHREADS=64 . &lt;br /&gt;
   NODEAFFINITY=true&lt;br /&gt;
   SearchAsYouDraw=true&lt;br /&gt;
   AutomaticIndex=true&lt;br /&gt;
   DEPICTION=./depict/bot/svg?w=%w&amp;amp;h=%h&amp;amp;svgunits=px&amp;amp;smi=%s&amp;amp;zoom=0.8&amp;amp;sma=%m&amp;amp;smalim=1&lt;br /&gt;
   RESOLVER=&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Important parts of the arthor.cfg file&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;BINDIR&#039;&#039;&#039; is the location of the Arthor command line binaries.  These are used to generate the Arthor index files and to perform searches directly on n-1-136.  An example of this would be using atdbgrep for substructure search. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;DATADIR&#039;&#039;&#039; This is the directory where the Arthor data files live.  Location where the index files will be created and loaded from.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;STAGEDIR&#039;&#039;&#039; Location where the index files will be built before being moved into the DATADIR.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;NTHREADS&#039;&#039;&#039; The number of threads to use for both ATDB and ATFP searches&lt;br /&gt;
&lt;br /&gt;
Set &#039;&#039;&#039;AutomaticIndex&#039;&#039;&#039; to false if you don&#039;t want new smiles files added to the data directory to be indexed automatically&lt;br /&gt;
&lt;br /&gt;
==Background==&lt;br /&gt;
Before working with Arthor, it is recommended that you familiarize yourself with the Arthor documentation. Some useful pages to look at include 3-5, 22-25 and 33-39. Of course, reading everything would be the best!&lt;br /&gt;
&lt;br /&gt;
==Checking Disk Space Usage==&lt;br /&gt;
Before building arthor indexes, it&#039;s always a good thing to check what percent of the memory is being used. Try to be cautious with how much memory you have left, and make sure to check while building indexes to make sure that you have enough space. To check, run the following command:&lt;br /&gt;
&lt;br /&gt;
   df -h /&amp;lt;directory with disc&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Building Large Databases==&lt;br /&gt;
At the moment, we are building databases of size 500M molecules by merging smile files. There are multiple methods of trying to create large databases, one being merging based off of the same H?? prefix and stopping once the database reaches &amp;gt; 500M molecules (or whatever upperbound you want to use). Here is some python code that simulates this merging process. Essentially the program takes all of the .smi files within an input directory, sorts them lexiographically, and begins merging these .smi files together in order until the size reaches &amp;gt; 500M molecules.&lt;br /&gt;
   &lt;br /&gt;
Feel free to modify it if you think a better method exists.&lt;br /&gt;
   &lt;br /&gt;
   import subprocess&lt;br /&gt;
   import sys&lt;br /&gt;
   import os                                                                                                                                                                           &lt;br /&gt;
   &lt;br /&gt;
   from os import listdir&lt;br /&gt;
   from os.path import isfile, join&lt;br /&gt;
   &lt;br /&gt;
   mypath = &amp;quot;&amp;lt;Path to directory holding .smi files&amp;gt;&amp;quot;&lt;br /&gt;
   onlyfiles = [f for f in listdir(mypath) if isfile(join(mypath, f))]&lt;br /&gt;
   onlyfiles.sort()&lt;br /&gt;
   &lt;br /&gt;
   create_fp = True&lt;br /&gt;
   cur_mols = 0&lt;br /&gt;
   lower_bound = 500000000&lt;br /&gt;
   upper_bound = 600000000&lt;br /&gt;
   files_to_merge = []&lt;br /&gt;
   &lt;br /&gt;
   def merge_files(f_t_m):&lt;br /&gt;
      arr = f_t_m[0].split(&amp;quot;.&amp;quot;)&lt;br /&gt;
      arr2 = f_t_m[len(f_t_m) - 1].split(&amp;quot;.&amp;quot;)&lt;br /&gt;
      file_name_merge = (arr[0] + &amp;quot;_&amp;quot; + arr2[0] + &amp;quot;.smi&amp;quot;)&lt;br /&gt;
      print (&amp;quot;File being created: &amp;quot; + file_name_merge)&lt;br /&gt;
   &lt;br /&gt;
      for file in f_t_m:&lt;br /&gt;
         tmp = file.split(&amp;quot;.&amp;quot;)&lt;br /&gt;
         process = subprocess.Popen(&amp;quot;cat &amp;quot; + join(mypath, file) + &amp;quot; &amp;gt;&amp;gt; &amp;quot; + file_name_merge, shell=True)&lt;br /&gt;
         process.wait()&lt;br /&gt;
   &lt;br /&gt;
   for file in onlyfiles:&lt;br /&gt;
      arr = file.split(&amp;quot;.&amp;quot;)&lt;br /&gt;
   &lt;br /&gt;
      if (arr[len(arr) - 1] == &amp;quot;smi&amp;quot;):&lt;br /&gt;
         print(&amp;quot;Working with &amp;quot; + file)&lt;br /&gt;
         mol = sum(1 for line in open(join(mypath, file)))&lt;br /&gt;
         print(file, mol, cur_mols)&lt;br /&gt;
   &lt;br /&gt;
         if (cur_mols + mol &amp;gt; lower_bound):&lt;br /&gt;
            if (cur_mols + mol &amp;lt; upper_bound):&lt;br /&gt;
               files_to_merge.append(file)&lt;br /&gt;
               merge_files(files_to_merge)&lt;br /&gt;
               cur_mols = 0&lt;br /&gt;
               files_to_merge.clear()&lt;br /&gt;
            else:&lt;br /&gt;
               merge_files(files_to_merge)&lt;br /&gt;
               files_to_merge.clear()&lt;br /&gt;
               files_to_merge.append(file)&lt;br /&gt;
               merge_files(files_to_merge)&lt;br /&gt;
               cur_mols = 0&lt;br /&gt;
               files_to_merge.clear()&lt;br /&gt;
         else:&lt;br /&gt;
            cur_mols += mol&lt;br /&gt;
            files_to_merge.append(file)&lt;br /&gt;
   &lt;br /&gt;
   if (len(files_to_merge) != 0):&lt;br /&gt;
      merge_files(files_to_merge)&lt;br /&gt;
&lt;br /&gt;
==Building Arthor Indexes==&lt;br /&gt;
Once you&#039;ve merged the .smi files together, it&#039;s time to start building the databases themselves. To do this we use the command &lt;br /&gt;
&lt;br /&gt;
   smi2atdb -j 0 -p &amp;lt;The .smi file&amp;gt; &amp;lt;The .atdb&amp;gt; &lt;br /&gt;
&lt;br /&gt;
The flag &amp;quot;-j 0&amp;quot; enables parallel generation and utilizes all available processors to generate the .atdb file. The &amp;quot;-p&amp;quot; flag stores the offset position in the ATDB file. Since we&#039;re building indexes for the Web Application, you must use the &amp;quot;-p&amp;quot; flag when building indexes. Please note that the name of the .smi file should also be the name of the .atdb file. That way, the Web Application knows to use these files together and correctly display the required images. Refer to pages 33-34 in the Arthor documentation for more information.&lt;br /&gt;
&lt;br /&gt;
If there are too many large .smi files and you do not want to manually build each .atdb file, you can use this python script which takes all of the .smi files in the current directory and converts them to .atdb files. Make sure to modify mypath to the directory containing the .smi files. You can change the variable &amp;quot;create_fp&amp;quot; to false if you don&#039;t want to create .atdb.fp files (refer to page 9 in the Arthor documentation).&lt;br /&gt;
&lt;br /&gt;
   import subprocess&lt;br /&gt;
   import sys&lt;br /&gt;
   import os&lt;br /&gt;
   &lt;br /&gt;
   from os import listdir&lt;br /&gt;
   from os.path import isfile, join&lt;br /&gt;
   &lt;br /&gt;
   mypath = &amp;quot;&amp;lt;Path containing the .smi files&amp;quot;&lt;br /&gt;
   onlyfiles = [f for f in listdir(mypath) if isfile(join(mypath, f))]&lt;br /&gt;
   &lt;br /&gt;
   create_fp = True&lt;br /&gt;
   &lt;br /&gt;
   for file in onlyfiles:&lt;br /&gt;
      arr = file.split(&amp;quot;.&amp;quot;)&lt;br /&gt;
   &lt;br /&gt;
      if (arr[len(arr) - 1] == &amp;quot;smi&amp;quot;):&lt;br /&gt;
         process = subprocess.Popen(&amp;quot;/nfs/ex9/work/xyz/psql/arthor-3.3-centos7/bin/smi2atdb -j 0 -p {0} {1}.atdb&amp;quot;.format(join(mypath, file), arr[0]), shell=True)&lt;br /&gt;
         process.wait()&lt;br /&gt;
   &lt;br /&gt;
         print(&amp;quot;SUCCESS! {0}.atdb file was created!&amp;quot;.format(arr[0]))&lt;br /&gt;
   &lt;br /&gt;
         if (create_fp):&lt;br /&gt;
            process = subprocess.Popen(&amp;quot;/nfs/ex9/work/xyz/psql/arthor-3.3-centos7/bin/atdb2fp -j 0 {0}.atdb&amp;quot;.format(arr[0]), shell=True)&lt;br /&gt;
            process.wait()&lt;br /&gt;
      &lt;br /&gt;
            print(&amp;quot;SUCCESS! {0}.atdb.fp file was created!&amp;quot;.format(arr[0]))&lt;br /&gt;
&lt;br /&gt;
==Uploading Indexes to the Web Application==&lt;br /&gt;
One can upload indexes to the Web Application by changing the &amp;quot;DATADIR&amp;quot; variable in the arthor.cfg file to the directory holding the .atdb files. This is already set up on n-1-136 and n-5-34.&lt;br /&gt;
   &lt;br /&gt;
==Further Arthor Optimizations==&lt;br /&gt;
The following edits can be made to the arthor.cfg to optimize substructure search queries. More information can be found in pages 6-8 in the Arthor Documentation file.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;NodeAffinity NUMA&#039;&#039;&#039;: optimized flag, pin processing to specific CPU sets to where the data is located in memory. There is a small start-up cost and is most useful for long running services (see Non-Uniform Memory Access (NUMA))&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;AsyncHitCountAllowed=true|false&#039;&#039;&#039; After fetching a page from a substructure or formula search the server will spin off a background process to count the total number of hits. This can be resource intensive for large databases and may not be desirable for servers under heavy load and may not even be needed. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;AsyncHitCountMax=#&#039;&#039;&#039; The upper-bound for the number of hits to retrieve in background searches. If very generic queries are issued (e.g. benzene or methane) hundred’s of millions of hits may be counted. Setting this value to anything other than zero (e.g. 10,000) will stop the background search if it exceeds this limit. Note some pathological queries may find very few hits but still end up looking at everything. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;MaxConcurrentSearches=#&#039;&#039;&#039; Controls the maximum number of searches that can be run concurrently by setting the database pool size. The searches may be on the same or different databases. If a search comes in and the pool is full it will have to wait for another search to finish - this increases the request time. &lt;br /&gt;
&lt;br /&gt;
Typically if each search is using all the processing cores on a machine then additional searches will run at 1/Nth the speed. If the request time is substantially larger that the search time the request had to wait for resources to become available. When switching between a large number of databases it can be useful to have a larger pool size, the only trade off is keeping file pointers open.  Default: 6 &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Binary Fingerprint Folding&#039;&#039;&#039; Arthor uses binary circular fingerprints (ECFP4/radius=2) for similarity. When creating an ATFP index you can specify how large to make your fingerprints. Circular fingerprints are sparser than path based fingerprints (e.g. Daylight) and so can be folded smaller without too much degradation in performance. Folding can significantly reduce the footprint size of a database and improve search speeds. A 256-bit fingerprint takes up 1/4 of the space of 1024-bit and can therefore be traversed 4x faster. &lt;br /&gt;
&lt;br /&gt;
This is more important for very large databases with billions of compounds, in such instances a minor drop in precision is likely tolerable as ultimately all that happens is some hits may swap places in the hit list. &lt;br /&gt;
&lt;br /&gt;
==Virtual Memory==&lt;br /&gt;
In addition to modifying the arthor.cfg file, virtual memory can also be used to make queries faster. There can still be More information can be found in pages 10-16 in the Arthor Documentation.&lt;br /&gt;
&lt;br /&gt;
==Setting up Round Table==&lt;br /&gt;
This is a new feature in Arthor 3.0 and is currently beta (January 2020). See section 2.4 in the manual&lt;br /&gt;
As explained in the manual, &amp;quot;Round Table allows you to serve and split chemical searches across multiple host machines.  The implementation provides a lightweight proxy that forwards requests to other Arthor host servers that do the actual search.  Communication is done using the existing Web APIs.&lt;br /&gt;
&lt;br /&gt;
Since Arthor requires CentOS 7, as of January 2020 we have 6 servers that are capable of running Arthor with Round Table.&lt;br /&gt;
&lt;br /&gt;
===Setting up Host Server===&lt;br /&gt;
If we want to add machines to the Round Table, for example &#039;nun&#039; and &#039;samekh&#039;, we need to edit their arthor.cfg file so that when our Local Machine passes commands these secondary servers know to perform the search they are given.&lt;br /&gt;
&lt;br /&gt;
   $ cat arthor.cfg&lt;br /&gt;
   MaxThreadsPerSearch=4 &lt;br /&gt;
   AutomaticIndex=false &lt;br /&gt;
   DATADIR=&amp;lt;Directory where smiles are located&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We then run the jar server on each of these host machines containing data on any available port. &lt;br /&gt;
&lt;br /&gt;
   java -jar /nfs/ex9/work/xyz/psql/arthor-3.3-centos7/java/arthor.jar --httpPort &amp;lt;port&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For our local machine, the arthor.cfg file will look different.&lt;br /&gt;
&lt;br /&gt;
   $ cat arthor.cfg&lt;br /&gt;
   [RoundTable] &lt;br /&gt;
   RemoteClient=http://skynet:&amp;lt;port number where jar server is running&amp;gt;/ &lt;br /&gt;
   RemoteClient=http://hal:&amp;lt;port number where jar server is running&amp;gt;/&lt;br /&gt;
&lt;br /&gt;
Please refer to Section 2 in the RoundTable Documentation file (pages 6-8) for more useful information on configuration.&lt;br /&gt;
&lt;br /&gt;
Then run the following command on n-1-136:&lt;br /&gt;
 &lt;br /&gt;
   java -jar /nfs/ex9/work/xyz/psql/arthor-3.3-centos7/java/arthor.jar --httpPort &amp;lt;port&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Public Arthor===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! CentOS 7 Machine&lt;br /&gt;
! Port&lt;br /&gt;
! Total Files Size&lt;br /&gt;
! Arthor Install Location&lt;br /&gt;
! Round Table Data Directory&lt;br /&gt;
! Active&lt;br /&gt;
|-&lt;br /&gt;
| samekh&lt;br /&gt;
| 10.20.0.41:8000&lt;br /&gt;
| 2.4TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/public_arthor/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| nun&lt;br /&gt;
| 10.20.0.40:8000&lt;br /&gt;
| 2.4TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/public_arthor/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| n-9-22&lt;br /&gt;
| 10.20.9.22:8000&lt;br /&gt;
| 2.4TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /export/db4/public_arthor/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Arthor Round Table Head===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! CentOS 7 Machine&lt;br /&gt;
! Port&lt;br /&gt;
! Database&lt;br /&gt;
! Arthor Install Location&lt;br /&gt;
! Round Table Data Directory&lt;br /&gt;
! Active&lt;br /&gt;
|-&lt;br /&gt;
| samekh&lt;br /&gt;
| 10.20.0.41:8080&lt;br /&gt;
| Enamine_REAL_Q2-2020-All-13B&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| nun&lt;br /&gt;
| 10.20.0.40:8080&lt;br /&gt;
| Enamine_REAL_Q2-2020-All-41B&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Arthor Round Table Nodes===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! CentOS 7 Machine&lt;br /&gt;
! Port&lt;br /&gt;
! Database&lt;br /&gt;
! Total File Size&lt;br /&gt;
! Arthor Install Location&lt;br /&gt;
! Round Table Data Directory&lt;br /&gt;
! Active&lt;br /&gt;
|-&lt;br /&gt;
| samekh&lt;br /&gt;
| 10.20.0.41:8008&lt;br /&gt;
| Enamine_REAL_Q2-2020-All-13B (26 slices)&lt;br /&gt;
| 4.5TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| nun&lt;br /&gt;
| 10.20.0.40:8008&lt;br /&gt;
| Enamine_REAL_Space_June_2020_S41B (aa-ae, 5 slices), Enamine_REAL_Space_June_2020_M41B (af-am, 8 slices), zinc22_2d (H04~H25, 22 slices)&lt;br /&gt;
| 5.6TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| n-1-17&lt;br /&gt;
| 10.20.1.17:8008&lt;br /&gt;
| Enamine_REAL_Space_June_2020_M41B (aa-ae, 5 slices), zinc22_2d (H25~H29, 4 slices)&lt;br /&gt;
| 3.7TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| n-5-32&lt;br /&gt;
| 10.20.5.32:8008&lt;br /&gt;
| Enamine_REAL_Space_June_2020_M41B (an~az, 13 slices), zinc22_2d (H30, 1 slice)&lt;br /&gt;
| 5.6TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| n-5-33&lt;br /&gt;
| 10.20.5.33:8008&lt;br /&gt;
| Enamine_REAL_Space_June_2020_M41B (ba-bl, 12 slices)&lt;br /&gt;
| 5.3TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Arthor Local 8081 (Datasets all local to samekh/nun)===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! CentOS 7 Machine&lt;br /&gt;
! Port&lt;br /&gt;
! Database&lt;br /&gt;
! Total File Size&lt;br /&gt;
! Arthor Install Location&lt;br /&gt;
! Round Table Data Directory&lt;br /&gt;
! Active&lt;br /&gt;
|-&lt;br /&gt;
| samekh&lt;br /&gt;
| 10.20.0.41:8081&lt;br /&gt;
| Enamine_REAL_Q2-2020-All-13B (26 slices)&lt;br /&gt;
| 4.5TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_local_8081/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| nun&lt;br /&gt;
| 10.20.0.40:8081&lt;br /&gt;
| Enamine_REAL_Space_June_2020_S41B (aa-ae, 5 slices), Enamine_REAL_Space_June_2020_M41B (aa-an, 14 slices)&lt;br /&gt;
| 4.3TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_local_8081/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Jgutie11</name></author>
	</entry>
	<entry>
		<id>http://wiki.docking.org/index.php?title=Arthor_Documentation_for_Future_Developer&amp;diff=13177</id>
		<title>Arthor Documentation for Future Developer</title>
		<link rel="alternate" type="text/html" href="http://wiki.docking.org/index.php?title=Arthor_Documentation_for_Future_Developer&amp;diff=13177"/>
		<updated>2021-01-06T22:58:03Z</updated>

		<summary type="html">&lt;p&gt;Jgutie11: /* Arthor Round Table Nodes */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Written by Jennifer Young on December 16, 2019. Last edited January 05, 2021&lt;br /&gt;
&lt;br /&gt;
==Install and Set Up on TomCat (Method 1)==&lt;br /&gt;
Arthor runs on n-1-136, which runs CentOS Linux release 7.7.1908 (Core).  You can check the version of CentOS with the following command:&lt;br /&gt;
     cat /etc/centos-release&lt;br /&gt;
&lt;br /&gt;
Check your current version of Java with the following command:&lt;br /&gt;
    java -version&lt;br /&gt;
&lt;br /&gt;
On n-1-136 we are running openjdk version &amp;quot;1.8.0_222&amp;quot;, OpenJDK Runtime Environment (build 1.8.0_222-b10), and OpenJDK 64-Bit Server VM (build 25.222-b10, mixed mode).&lt;br /&gt;
If Java is not installed, install it using yum.&lt;br /&gt;
&lt;br /&gt;
==See this wiki page for more detailed information about installing Tomcat on our cluster==&lt;br /&gt;
http://wiki.docking.org/index.php/Tomcat_Installation&lt;br /&gt;
&lt;br /&gt;
==Open port for Arthor==&lt;br /&gt;
In order for Arthor to be usable in the browser, the port you wish to run it on must be opened.&lt;br /&gt;
https://www.thegeekdiary.com/how-to-open-a-ports-in-centos-rhel-7/&lt;br /&gt;
&lt;br /&gt;
===Step 1: Check Port Status===&lt;br /&gt;
Check that the port is not already open and that Apache is not listening on it. &lt;br /&gt;
    netstat -na | grep &amp;lt;port number you are checking&amp;gt;&lt;br /&gt;
&lt;br /&gt;
    lsof -i -P |grep http&lt;br /&gt;
&lt;br /&gt;
===Step 2: Check Port Status in IP Tables===&lt;br /&gt;
    iptables-save | grep &amp;lt;port number you are checking&amp;gt;&lt;br /&gt;
&lt;br /&gt;
I skipped Step 3 from the guide, because there was a lot of information in the /etc/services file and I didn&#039;t want to edit it and break something.&lt;br /&gt;
&lt;br /&gt;
===Step 4: Open Firewall Ports===&lt;br /&gt;
I did not include the zone=public section because the stand-alone servers are usually used for private instances of Arthor and SmallWorld.&lt;br /&gt;
Run as root.&lt;br /&gt;
    firewall-cmd --add-port=&amp;lt;port number you are adding&amp;gt;/tcp --permanent&lt;br /&gt;
&lt;br /&gt;
You need to reload the firewall after a change is made.&lt;br /&gt;
    firewall-cmd --reload&lt;br /&gt;
&lt;br /&gt;
===Step 5: Check that port is working===&lt;br /&gt;
To check that the port is active, run:&lt;br /&gt;
    iptables -nL&lt;br /&gt;
&lt;br /&gt;
You should see something along the lines of: &lt;br /&gt;
    ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:&amp;lt;port number you&#039;re adding&amp;gt; ctstate NEW,UNTRACKED&lt;br /&gt;
&lt;br /&gt;
==How to run standalone Arthor instance==&lt;br /&gt;
&lt;br /&gt;
===Step 1: Use or start a bash shell===&lt;br /&gt;
You can check your default shell using&lt;br /&gt;
    echo $SHELL&lt;br /&gt;
&lt;br /&gt;
If your default shell is csh, use &lt;br /&gt;
    bash&lt;br /&gt;
to start a new bash shell in the current terminal window.  Note that echo $SHELL will show you your default shell regardless of the current shell.&lt;br /&gt;
&lt;br /&gt;
===Step 2: Set your environment variables===&lt;br /&gt;
    export ARTHOR_DIR=/opt/nextmove/arthor/arthor-3.1-centos7&lt;br /&gt;
    export PATH=$ARTHOR_DIR/bin/:$PATH&lt;br /&gt;
&lt;br /&gt;
Make sure the ARTHOR_DIR variable is set to the directory for the latest version of Arthor or whichever version you would like to test.&lt;br /&gt;
The PATH environment variable is needed if you wish to use the Arthor tools from the command line.&lt;br /&gt;
&lt;br /&gt;
===Step 3: Run the arthor-server.jar===&lt;br /&gt;
    java -jar /opt/nextmove/arthor/arthor-3.0-rt-beta-linux/java/arthor-server.jar --httpPort &amp;lt;your httpPort&amp;gt;&lt;br /&gt;
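&lt;br /&gt;
If you want the standalone server to survive logging out, one option (a generic Linux convention, not something from the Arthor docs) is to detach it with nohup and capture its output in a log:&lt;br /&gt;
&lt;br /&gt;
    nohup java -jar /opt/nextmove/arthor/arthor-3.0-rt-beta-linux/java/arthor-server.jar --httpPort &amp;lt;your httpPort&amp;gt; &amp;gt; arthor.log 2&amp;gt;&amp;amp;1 &amp;amp;&lt;br /&gt;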
&lt;br /&gt;
==Setting environment variables for TomCat Server==&lt;br /&gt;
Set the environment variables in the setenv.sh file.  Note: Be sure to edit the file in the directory corresponding to the latest version of TomCat.  As of December 2019, we are running 9.0.27 on n-1-136.&lt;br /&gt;
&lt;br /&gt;
   vim  /opt/tomcat/apache-tomcat-9.0.27/bin/setenv.sh&lt;br /&gt;
&lt;br /&gt;
Add the line below to the setenv.sh file above, substituting the path to wherever you currently store the arthor.cfg file:&lt;br /&gt;
   export ARTHOR_CONFIG=/usr/local/tomcat/arthor.cfg&lt;br /&gt;
&lt;br /&gt;
Here is an example of the arthor.cfg file:&lt;br /&gt;
   # Arthor generated config file&lt;br /&gt;
   BINDIR=/opt/nextmove/arthor/arthor-2.1.2-centos7/bin&lt;br /&gt;
   DATADIR=/usr/local/tomcat/arthor_data&lt;br /&gt;
   STAGEDIR=/usr/local/arthor_data/stage&lt;br /&gt;
   NTHREADS=64&lt;br /&gt;
   NODEAFFINITY=true&lt;br /&gt;
   SearchAsYouDraw=true&lt;br /&gt;
   AutomaticIndex=true&lt;br /&gt;
   DEPICTION=./depict/bot/svg?w=%w&amp;amp;h=%h&amp;amp;svgunits=px&amp;amp;smi=%s&amp;amp;zoom=0.8&amp;amp;sma=%m&amp;amp;smalim=1&lt;br /&gt;
   RESOLVER=&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Important parts of the arthor.cfg file&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;BINDIR&#039;&#039;&#039; is the location of the Arthor command line binaries.  These are used to generate the Arthor index files and to perform searches directly on n-1-136.  An example of this would be using atdbgrep for substructure search. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;DATADIR&#039;&#039;&#039; The directory where the Arthor data files live; index files are created in, and loaded from, this location.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;STAGEDIR&#039;&#039;&#039; Location where the index files will be built before being moved into the DATADIR.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;NTHREADS&#039;&#039;&#039; The number of threads to use for both ATDB and ATFP searches.&lt;br /&gt;
&lt;br /&gt;
Set &#039;&#039;&#039;AutomaticIndex&#039;&#039;&#039; to false if you don&#039;t want new SMILES files added to the data directory to be indexed automatically.&lt;br /&gt;
&lt;br /&gt;
==Background==&lt;br /&gt;
Before working with Arthor, it is recommended that you familiarize yourself with the Arthor documentation. Some useful pages to look at include 3-5, 22-25 and 33-39. Of course, reading everything would be best!&lt;br /&gt;
&lt;br /&gt;
==Checking Disk Space Usage==&lt;br /&gt;
Before building Arthor indexes, it&#039;s always a good idea to check what percentage of the disk is in use. Be cautious about how much space you have left, and keep checking while the indexes build to make sure you do not run out. To check, run the following command:&lt;br /&gt;
&lt;br /&gt;
   df -h /&amp;lt;directory with disk&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Building Large Databases==&lt;br /&gt;
At the moment, we are building databases of about 500M molecules each by merging SMILES files. There are multiple ways to create large databases; one is to merge files that share the same H?? prefix, stopping once the database exceeds 500M molecules (or whatever upper bound you want to use). Here is some Python code that performs this merging. Essentially, the program takes all of the .smi files within an input directory, sorts them lexicographically, and merges them together in order until the size exceeds 500M molecules.&lt;br /&gt;
   &lt;br /&gt;
Feel free to modify it if you think a better method exists.&lt;br /&gt;
   &lt;br /&gt;
   import subprocess&lt;br /&gt;
   &lt;br /&gt;
   from os import listdir&lt;br /&gt;
   from os.path import isfile, join&lt;br /&gt;
   &lt;br /&gt;
   mypath = &amp;quot;&amp;lt;Path to directory holding .smi files&amp;gt;&amp;quot;&lt;br /&gt;
   # Process the .smi files in lexicographic order.&lt;br /&gt;
   onlyfiles = sorted(f for f in listdir(mypath) if isfile(join(mypath, f)))&lt;br /&gt;
   &lt;br /&gt;
   cur_mols = 0&lt;br /&gt;
   lower_bound = 500000000  # close out a database once it would pass ~500M molecules&lt;br /&gt;
   upper_bound = 600000000  # never let a single database exceed ~600M molecules&lt;br /&gt;
   files_to_merge = []&lt;br /&gt;
   &lt;br /&gt;
   def merge_files(f_t_m):&lt;br /&gt;
      # Name the merged file after the first and last slices it contains.&lt;br /&gt;
      first = f_t_m[0].split(&amp;quot;.&amp;quot;)[0]&lt;br /&gt;
      last = f_t_m[-1].split(&amp;quot;.&amp;quot;)[0]&lt;br /&gt;
      file_name_merge = first + &amp;quot;_&amp;quot; + last + &amp;quot;.smi&amp;quot;&lt;br /&gt;
      print(&amp;quot;File being created: &amp;quot; + file_name_merge)&lt;br /&gt;
   &lt;br /&gt;
      for file in f_t_m:&lt;br /&gt;
         # Append each slice to the merged file in the current directory.&lt;br /&gt;
         process = subprocess.Popen(&amp;quot;cat &amp;quot; + join(mypath, file) + &amp;quot; &amp;gt;&amp;gt; &amp;quot; + file_name_merge, shell=True)&lt;br /&gt;
         process.wait()&lt;br /&gt;
   &lt;br /&gt;
   for file in onlyfiles:&lt;br /&gt;
      if file.split(&amp;quot;.&amp;quot;)[-1] == &amp;quot;smi&amp;quot;:&lt;br /&gt;
         print(&amp;quot;Working with &amp;quot; + file)&lt;br /&gt;
         # Count the molecules: one SMILES per line.&lt;br /&gt;
         with open(join(mypath, file)) as fh:&lt;br /&gt;
            mol = sum(1 for line in fh)&lt;br /&gt;
         print(file, mol, cur_mols)&lt;br /&gt;
   &lt;br /&gt;
         if cur_mols + mol &amp;gt; lower_bound:&lt;br /&gt;
            if cur_mols + mol &amp;lt; upper_bound:&lt;br /&gt;
               # This file still fits under the upper bound: merge it in.&lt;br /&gt;
               files_to_merge.append(file)&lt;br /&gt;
               merge_files(files_to_merge)&lt;br /&gt;
            else:&lt;br /&gt;
               # This file would overshoot: flush the batch, then merge it alone.&lt;br /&gt;
               if files_to_merge:  # guard: the batch may be empty&lt;br /&gt;
                  merge_files(files_to_merge)&lt;br /&gt;
               merge_files([file])&lt;br /&gt;
            cur_mols = 0&lt;br /&gt;
            files_to_merge = []&lt;br /&gt;
         else:&lt;br /&gt;
            cur_mols += mol&lt;br /&gt;
            files_to_merge.append(file)&lt;br /&gt;
   &lt;br /&gt;
   # Flush any leftover files that never reached the lower bound.&lt;br /&gt;
   if files_to_merge:&lt;br /&gt;
      merge_files(files_to_merge)&lt;br /&gt;
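&lt;br /&gt;
Note that merge_files writes each merged .smi file into the current working directory (the output name carries no directory component), so run the script from wherever you want the merged files to land.&lt;br /&gt;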
&lt;br /&gt;
==Building Arthor Indexes==&lt;br /&gt;
Once you&#039;ve merged the .smi files together, it&#039;s time to start building the databases themselves. To do this we use the command &lt;br /&gt;
&lt;br /&gt;
   smi2atdb -j 0 -p &amp;lt;The .smi file&amp;gt; &amp;lt;The .atdb&amp;gt; &lt;br /&gt;
&lt;br /&gt;
The flag &amp;quot;-j 0&amp;quot; enables parallel generation and utilizes all available processors to generate the .atdb file. The &amp;quot;-p&amp;quot; flag stores the offset position in the ATDB file. Since we&#039;re building indexes for the Web Application, you must use the &amp;quot;-p&amp;quot; flag when building indexes. Please note that the base name of the .atdb file should match that of the .smi file; that way, the Web Application knows to use these files together and can correctly display the required images. Refer to pages 33-34 in the Arthor documentation for more information.&lt;br /&gt;
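&lt;br /&gt;
For example, assuming a hypothetical slice file named zinc22_H04.smi, the matching invocation would be:&lt;br /&gt;
&lt;br /&gt;
   smi2atdb -j 0 -p zinc22_H04.smi zinc22_H04.atdb&lt;br /&gt;
&lt;br /&gt;
The Web Application can then pair zinc22_H04.atdb with zinc22_H04.smi automatically.&lt;br /&gt;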
&lt;br /&gt;
If there are many large .smi files and you do not want to build each .atdb file manually, you can use the following Python script, which takes all of the .smi files in a given directory and converts them to .atdb files. Make sure to set mypath to the directory containing the .smi files. You can set the variable &amp;quot;create_fp&amp;quot; to False if you don&#039;t want to create .atdb.fp files (refer to page 9 in the Arthor documentation).&lt;br /&gt;
&lt;br /&gt;
   import subprocess&lt;br /&gt;
   &lt;br /&gt;
   from os import listdir&lt;br /&gt;
   from os.path import isfile, join&lt;br /&gt;
   &lt;br /&gt;
   mypath = &amp;quot;&amp;lt;Path containing the .smi files&amp;gt;&amp;quot;&lt;br /&gt;
   onlyfiles = [f for f in listdir(mypath) if isfile(join(mypath, f))]&lt;br /&gt;
   &lt;br /&gt;
   # Set to False to skip building .atdb.fp fingerprint files&lt;br /&gt;
   # (see page 9 in the Arthor documentation).&lt;br /&gt;
   create_fp = True&lt;br /&gt;
   &lt;br /&gt;
   for file in onlyfiles:&lt;br /&gt;
      base = file.split(&amp;quot;.&amp;quot;)[0]&lt;br /&gt;
   &lt;br /&gt;
      if file.split(&amp;quot;.&amp;quot;)[-1] == &amp;quot;smi&amp;quot;:&lt;br /&gt;
         # Build the substructure index; -p is required for the Web Application.&lt;br /&gt;
         process = subprocess.Popen(&amp;quot;/nfs/ex9/work/xyz/psql/arthor-3.3-centos7/bin/smi2atdb -j 0 -p {0} {1}.atdb&amp;quot;.format(join(mypath, file), base), shell=True)&lt;br /&gt;
         if process.wait() == 0:&lt;br /&gt;
            print(&amp;quot;SUCCESS! {0}.atdb file was created!&amp;quot;.format(base))&lt;br /&gt;
   &lt;br /&gt;
         if create_fp:&lt;br /&gt;
            # Build the fingerprint (similarity) index from the .atdb file.&lt;br /&gt;
            process = subprocess.Popen(&amp;quot;/nfs/ex9/work/xyz/psql/arthor-3.3-centos7/bin/atdb2fp -j 0 {0}.atdb&amp;quot;.format(base), shell=True)&lt;br /&gt;
            if process.wait() == 0:&lt;br /&gt;
               print(&amp;quot;SUCCESS! {0}.atdb.fp file was created!&amp;quot;.format(base))&lt;br /&gt;
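&lt;br /&gt;
Both indexes are written to the directory the script is run from, since only the base name is passed as the output path; run it from the intended data directory so the .atdb and .atdb.fp files end up where the Web Application will look for them.&lt;br /&gt;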
&lt;br /&gt;
==Uploading Indexes to the Web Application==&lt;br /&gt;
One can expose new indexes to the Web Application by changing the &amp;quot;DATADIR&amp;quot; variable in the arthor.cfg file to the directory holding the .atdb files. This is already set up on n-1-136 and n-5-34.&lt;br /&gt;
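&lt;br /&gt;
For example, to serve the indexes stored under /local2/arthor_database/ (one of the Round Table data directories listed in the tables below), the line would read:&lt;br /&gt;
&lt;br /&gt;
   DATADIR=/local2/arthor_database/&lt;br /&gt;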
   &lt;br /&gt;
==Further Arthor Optimizations==&lt;br /&gt;
The following edits can be made to arthor.cfg to optimize substructure search queries. More information can be found on pages 6-8 of the Arthor documentation.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;NodeAffinity&#039;&#039;&#039;: an optimization flag that pins processing to the specific CPU sets closest to where the data is located in memory (see Non-Uniform Memory Access, NUMA). There is a small start-up cost, so it is most useful for long-running services.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;AsyncHitCountAllowed=true|false&#039;&#039;&#039; After fetching a page from a substructure or formula search, the server will spin off a background process to count the total number of hits. This can be resource intensive for large databases, may not be desirable for servers under heavy load, and may not even be needed. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;AsyncHitCountMax=#&#039;&#039;&#039; The upper bound for the number of hits to retrieve in background searches. If very generic queries are issued (e.g. benzene or methane), hundreds of millions of hits may be counted. Setting this value to anything other than zero (e.g. 10,000) will stop the background search once it exceeds this limit. Note that some pathological queries may find very few hits but still end up looking at everything. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;MaxConcurrentSearches=#&#039;&#039;&#039; Controls the maximum number of searches that can be run concurrently by setting the database pool size. The searches may be on the same or different databases. If a search comes in and the pool is full it will have to wait for another search to finish - this increases the request time. &lt;br /&gt;
&lt;br /&gt;
Typically, if each search is using all the processing cores on a machine, then additional searches will run at 1/Nth the speed. If the request time is substantially larger than the search time, the request had to wait for resources to become available. When switching between a large number of databases it can be useful to have a larger pool size; the only trade-off is keeping file pointers open.  Default: 6 &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Binary Fingerprint Folding&#039;&#039;&#039; Arthor uses binary circular fingerprints (ECFP4/radius=2) for similarity. When creating an ATFP index you can specify how large to make your fingerprints. Circular fingerprints are sparser than path-based fingerprints (e.g. Daylight) and so can be folded smaller without too much degradation in performance. Folding can significantly reduce the footprint of a database and improve search speeds. A 256-bit fingerprint takes up 1/4 of the space of a 1024-bit one and can therefore be traversed 4x faster. &lt;br /&gt;
&lt;br /&gt;
This is more important for very large databases with billions of compounds; in such instances a minor drop in precision is likely tolerable, as ultimately all that happens is that some hits may swap places in the hit list. &lt;br /&gt;
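&lt;br /&gt;
Putting these options together, a hypothetical arthor.cfg fragment for a heavily used public instance might look like the following (the values are illustrative starting points, not tuned recommendations):&lt;br /&gt;
&lt;br /&gt;
   MaxConcurrentSearches=8&lt;br /&gt;
   AsyncHitCountAllowed=true&lt;br /&gt;
   AsyncHitCountMax=10000&lt;br /&gt;
   NODEAFFINITY=true&lt;br /&gt;
&lt;br /&gt;
Here the background hit count stays enabled but capped, and NUMA pinning is turned on since the service is long-running (NODEAFFINITY follows the spelling used in the example arthor.cfg above).&lt;br /&gt;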
&lt;br /&gt;
==Virtual Memory==&lt;br /&gt;
In addition to modifying the arthor.cfg file, virtual memory settings can also be used to make queries faster. More information can be found on pages 10-16 of the Arthor documentation.&lt;br /&gt;
&lt;br /&gt;
==Setting up Round Table==&lt;br /&gt;
This is a new feature in Arthor 3.0 and is currently in beta (January 2020). See section 2.4 in the manual.&lt;br /&gt;
As explained in the manual, &amp;quot;Round Table allows you to serve and split chemical searches across multiple host machines.  The implementation provides a lightweight proxy that forwards requests to other Arthor host servers that do the actual search.  Communication is done using the existing Web APIs.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Since Arthor requires CentOS 7, as of January 2020 we have six servers capable of running Arthor with Round Table.&lt;br /&gt;
&lt;br /&gt;
===Setting up Host Server===&lt;br /&gt;
If we want to add machines to the Round Table, for example &#039;nun&#039; and &#039;samekh&#039;, we need to edit each machine&#039;s arthor.cfg file so that these secondary servers know to perform the searches our local machine forwards to them.&lt;br /&gt;
&lt;br /&gt;
   $ cat arthor.cfg&lt;br /&gt;
   MaxThreadsPerSearch=4 &lt;br /&gt;
   AutomaticIndex=false &lt;br /&gt;
   DATADIR=&amp;lt;Directory where the SMILES files are located&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We then start the jar server on each of the data-holding host machines, on any available port.&lt;br /&gt;
&lt;br /&gt;
   java -jar /nfs/ex9/work/xyz/psql/arthor-3.3-centos7/java/arthor.jar --httpPort &amp;lt;port&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For our local machine, the arthor.cfg file will look different.&lt;br /&gt;
&lt;br /&gt;
   $ cat arthor.cfg&lt;br /&gt;
   [RoundTable] &lt;br /&gt;
   RemoteClient=http://skynet:&amp;lt;port number where jar server is running&amp;gt;/ &lt;br /&gt;
   RemoteClient=http://hal:&amp;lt;port number where jar server is running&amp;gt;/&lt;br /&gt;
&lt;br /&gt;
Please refer to Section 2 of the Round Table documentation (pages 6-8) for more information on configuration.&lt;br /&gt;
&lt;br /&gt;
Then run the following command on n-1-136:&lt;br /&gt;
 &lt;br /&gt;
   java -jar /nfs/ex9/work/xyz/psql/arthor-3.3-centos7/java/arthor.jar --httpPort &amp;lt;port&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Public Arthor===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! CentOS 7 Machine&lt;br /&gt;
! Port&lt;br /&gt;
! Total File Size&lt;br /&gt;
! Arthor Install Location&lt;br /&gt;
! Round Table Data Directory&lt;br /&gt;
! Active&lt;br /&gt;
|-&lt;br /&gt;
| samekh&lt;br /&gt;
| 10.20.0.41:8000&lt;br /&gt;
| 2.4TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/public_arthor/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| nun&lt;br /&gt;
| 10.20.0.40:8000&lt;br /&gt;
| 2.4TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/public_arthor/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| n-9-22&lt;br /&gt;
| 10.20.9.22:8000&lt;br /&gt;
| 2.4TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /export/db4/public_arthor/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Arthor Round Table Head===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! CentOS 7 Machine&lt;br /&gt;
! Port&lt;br /&gt;
! Database&lt;br /&gt;
! Arthor Install Location&lt;br /&gt;
! Round Table Data Directory&lt;br /&gt;
! Active&lt;br /&gt;
|-&lt;br /&gt;
| samekh&lt;br /&gt;
| 10.20.0.41:8080&lt;br /&gt;
| Enamine_REAL_Q2-2020-All-13B&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| nun&lt;br /&gt;
| 10.20.0.40:8080&lt;br /&gt;
| Enamine_REAL_Q2-2020-All-41B&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Arthor Round Table Nodes===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! CentOS 7 Machine&lt;br /&gt;
! Port&lt;br /&gt;
! Database&lt;br /&gt;
! Total File Size&lt;br /&gt;
! Arthor Install Location&lt;br /&gt;
! Round Table Data Directory&lt;br /&gt;
! Active&lt;br /&gt;
|-&lt;br /&gt;
| samekh&lt;br /&gt;
| 10.20.0.41:8008&lt;br /&gt;
| Enamine_REAL_Q2-2020-All-13B (26 slices)&lt;br /&gt;
| 4.5TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| nun&lt;br /&gt;
| 10.20.0.40:8008&lt;br /&gt;
| Enamine_REAL_Space_June_2020_S41B (aa-ae, 5 slices), Enamine_REAL_Space_June_2020_M41B (af-am, 8 slices), zinc22_2d (H04~H25, 22 slices)&lt;br /&gt;
| 5.6TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| n-1-17&lt;br /&gt;
| 10.20.1.17:8008&lt;br /&gt;
| Enamine_REAL_Space_June_2020_M41B (aa-ae, 5 slices), zinc22_2d (H25~H29, 4 slices)&lt;br /&gt;
| 3.7TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| n-5-32&lt;br /&gt;
| 10.20.5.32:8008&lt;br /&gt;
| Enamine_REAL_Space_June_2020_M41B (an~az, 13 slices), zinc22_2d (H30, 1 slice)&lt;br /&gt;
| 5.6TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| n-5-33&lt;br /&gt;
| 10.20.5.33:8008&lt;br /&gt;
| Enamine_REAL_Space_June_2020_M41B (ba-bl, 12 slices)&lt;br /&gt;
| 5.3TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Arthor Local 8081 (Datasets all local to samekh/nun)===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! CentOS 7 Machine&lt;br /&gt;
! Port&lt;br /&gt;
! Database&lt;br /&gt;
! Total File Size&lt;br /&gt;
! Arthor Install Location&lt;br /&gt;
! Round Table Data Directory&lt;br /&gt;
! Active&lt;br /&gt;
|-&lt;br /&gt;
| samekh&lt;br /&gt;
| 10.20.0.41:8081&lt;br /&gt;
| Enamine_REAL_Q2-2020-All-13B (26 slices)&lt;br /&gt;
| 4.5TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_local_8081/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| nun&lt;br /&gt;
| 10.20.0.40:8081&lt;br /&gt;
| Enamine_REAL_Space_June_2020_S41B (aa-ae, 5 slices), Enamine_REAL_Space_June_2020_M41B (aa-an, 14 slices)&lt;br /&gt;
| 4.3TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_local_8081/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Jgutie11</name></author>
	</entry>
	<entry>
		<id>http://wiki.docking.org/index.php?title=Arthor_Documentation_for_Future_Developer&amp;diff=13175</id>
		<title>Arthor Documentation for Future Developer</title>
		<link rel="alternate" type="text/html" href="http://wiki.docking.org/index.php?title=Arthor_Documentation_for_Future_Developer&amp;diff=13175"/>
		<updated>2021-01-06T21:06:38Z</updated>

		<summary type="html">&lt;p&gt;Jgutie11: /* Arthor Round Table Nodes */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Written by Jennifer Young on December 16, 2019. Last edited January 05, 2021&lt;br /&gt;
&lt;br /&gt;
==Install and Set Up on TomCat (Method 1)==&lt;br /&gt;
Arthor ran on n-1-136, which runs CentOS Linux release 7.7.1908 (Core).  You can check the version of CentOS with the following command&lt;br /&gt;
     cat /etc/centos-release&lt;br /&gt;
&lt;br /&gt;
Check your current version of Java with the following command:&lt;br /&gt;
    java -version&lt;br /&gt;
&lt;br /&gt;
On n-1-136 we are running openjdk version &amp;quot;1.8.0_222&amp;quot;, OpenJDK Runtime Environment (build 1.8.0_222-b10), and OpenJDK 64-Bit Server VM (build 25.222-b10, mixed mode)&lt;br /&gt;
If Java is not installed, install it using yum&lt;br /&gt;
&lt;br /&gt;
==See this wiki page for more detailed information about installing Tomcat on our cluster==&lt;br /&gt;
http://wiki.docking.org/index.php/Tomcat_Installation&lt;br /&gt;
&lt;br /&gt;
==Open port for Arthor==&lt;br /&gt;
In order for Arthor to be usable in the browser, the port you wish to run it on must be opened.&lt;br /&gt;
https://www.thegeekdiary.com/how-to-open-a-ports-in-centos-rhel-7/&lt;br /&gt;
&lt;br /&gt;
===Step 1: Check Port Status===&lt;br /&gt;
Check that the port is not open and that Apache is not showing that port. &lt;br /&gt;
    netstat -na | grep &amp;lt;port number you are checking&amp;gt;&lt;br /&gt;
&lt;br /&gt;
    lsof -i -P |grep http&lt;br /&gt;
&lt;br /&gt;
===Step 2: Check Port Status in IP Tables===&lt;br /&gt;
    iptables-save | grep &amp;lt;port number you are checking&amp;gt;&lt;br /&gt;
&lt;br /&gt;
I skipped Step 3 from the guide, because there was a lot of information in the /etc/services file and I didn&#039;t want to edit it and break something.&lt;br /&gt;
&lt;br /&gt;
===Step 4: Open Firewall Ports===&lt;br /&gt;
I did not include the zone=public section because the stand-alone servers are usually used for private instances of Arthor and SmallWorld.&lt;br /&gt;
Run as root.&lt;br /&gt;
    firewall-cmd --add-port=&amp;lt;port number you are adding&amp;gt;/tcp --permanent&lt;br /&gt;
&lt;br /&gt;
You need to reload the firewall after a change is made.&lt;br /&gt;
    firewall-cmd --reload&lt;br /&gt;
&lt;br /&gt;
===Step 5: Check that port is working===&lt;br /&gt;
To check that the port is active, run.&lt;br /&gt;
    iptables -nL&lt;br /&gt;
&lt;br /&gt;
You should see something along the lines of: &lt;br /&gt;
    ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:&amp;lt;port number you&#039;re adding&amp;gt; ctstate NEW,UNTRACKED&lt;br /&gt;
&lt;br /&gt;
==How to run standalone Arthor instance==&lt;br /&gt;
&lt;br /&gt;
===Step 1: Use or start a bash shell===&lt;br /&gt;
You can check your default shell using&lt;br /&gt;
    echo $SHELL&lt;br /&gt;
&lt;br /&gt;
If your default shell is csh, use &lt;br /&gt;
    bash&lt;br /&gt;
to start a new bash shell in the current terminal window.  Note that echo $SHELL will show you your default shell regardless of the current shell.&lt;br /&gt;
&lt;br /&gt;
===Step 2: Set your environment variables===&lt;br /&gt;
    export ARTHOR_DIR=/opt/nextmove/arthor/arthor-3.1-centos7&lt;br /&gt;
    export PATH=$ARTHOR_DIR/bin/:$PATH&lt;br /&gt;
&lt;br /&gt;
Make sure the ARTHOR_DIR variable is set to the directory for the latest version of Arthor or whichever version you would like to test.&lt;br /&gt;
The PATH environment variable is needed if you wish to use the Arthor tools from the command line&lt;br /&gt;
&lt;br /&gt;
===Step 3: Run the arthor-server.jar===&lt;br /&gt;
    java -jar /opt/nextmove/arthor/arthor-3.0-rt-beta-linux/java/arthor-server.jar --httpPort &amp;lt;your httpPort&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Setting environment variables for TomCat Server==&lt;br /&gt;
Set the environment variables in the setenv.sh file.  Note: Be sure to edit the file in the directory corresponding to the latest version of TomCat.  As of December 2019, we are running 9.0.27 on n-1-136.&lt;br /&gt;
&lt;br /&gt;
   vim  /opt/tomcat/apache-tomcat-9.0.27/bin/setenv.sh&lt;br /&gt;
&lt;br /&gt;
Add the line below to the setenv.sh file above, or substitute the path to wherever you currently store the arthor.cfg file&lt;br /&gt;
   export ARTHOR_CONFIG=/usr/local/tomcat/arthor.cfg&lt;br /&gt;
&lt;br /&gt;
Here is an example of the arthor.cfg file:&lt;br /&gt;
   # Arthor generated config file&lt;br /&gt;
   BINDIR=/opt/nextmove/arthor/arthor-2.1.2-centos7/bin&lt;br /&gt;
   DATADIR=/usr/local/tomcat/arthor_data&lt;br /&gt;
   STAGEDIR=/usr/local/arthor_data/stage&lt;br /&gt;
   NTHREADS=64 . &lt;br /&gt;
   NODEAFFINITY=true&lt;br /&gt;
   SearchAsYouDraw=true&lt;br /&gt;
   AutomaticIndex=true&lt;br /&gt;
   DEPICTION=./depict/bot/svg?w=%w&amp;amp;h=%h&amp;amp;svgunits=px&amp;amp;smi=%s&amp;amp;zoom=0.8&amp;amp;sma=%m&amp;amp;smalim=1&lt;br /&gt;
   RESOLVER=&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Important parts of the arthor.cfg file&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;BINDIR&#039;&#039;&#039; is the location of the Arthor command line binaries.  These are used to generate the Arthor index files and to perform searches directly on n-1-136.  An example of this would be using atdbgrep for substructure search. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;DATADIR&#039;&#039;&#039; This is the directory where the Arthor data files live.  Location where the index files will be created and loaded from.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;STAGEDIR&#039;&#039;&#039; Location where the index files will be built before being moved into the DATADIR.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;NTHREADS&#039;&#039;&#039; The number of threads to use for both ATDB and ATFP searches&lt;br /&gt;
&lt;br /&gt;
Set &#039;&#039;&#039;AutomaticIndex&#039;&#039;&#039; to false if you don&#039;t want new smiles files added to the data directory to be indexed automatically&lt;br /&gt;
&lt;br /&gt;
==Background==&lt;br /&gt;
Before working with Arthor, it is recommended that you familiarize yourself with the Arthor documentation. Some useful pages to look at include 3-5, 22-25 and 33-39. Of course, reading everything would be the best!&lt;br /&gt;
&lt;br /&gt;
==Checking Disk Space Usage==&lt;br /&gt;
Before building Arthor indexes, check how much disk space is in use on the filesystem you will write to. Index builds can consume a lot of space, so check before and while building to make sure you have enough. To check, run the following command:&lt;br /&gt;
&lt;br /&gt;
   df -h &amp;lt;directory to check&amp;gt;&lt;br /&gt;
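&lt;br /&gt;
The same check can be scripted from Python with only the standard library; /local2 below is just an example data disk from the tables later on:&lt;br /&gt;
&lt;br /&gt;
   # Report disk usage for a data directory via the standard library.&lt;br /&gt;
   import shutil&lt;br /&gt;
   &lt;br /&gt;
   total, used, free = shutil.disk_usage(&amp;quot;/local2&amp;quot;)&lt;br /&gt;
   print(&amp;quot;{:.1f}% used, {:.0f} GB free&amp;quot;.format(100 * used / total, free / 1e9))&lt;br /&gt;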
&lt;br /&gt;
==Building Large Databases==&lt;br /&gt;
At the moment, we are building databases of about 500M molecules by merging SMILES files. There are multiple ways to create large databases; one is to merge files that share the same H?? prefix, stopping once the database exceeds 500M molecules (or whatever upper bound you want to use). Here is some Python code that performs this merging: it takes all of the .smi files in an input directory, sorts them lexicographically, and merges them together in order until the running total exceeds 500M molecules.&lt;br /&gt;
   &lt;br /&gt;
Feel free to modify it if you think a better method exists.&lt;br /&gt;
   &lt;br /&gt;
   import subprocess&lt;br /&gt;
   &lt;br /&gt;
   from os import listdir&lt;br /&gt;
   from os.path import isfile, join&lt;br /&gt;
   &lt;br /&gt;
   # Directory that holds the .smi files to merge.&lt;br /&gt;
   mypath = &amp;quot;&amp;lt;Path to directory holding .smi files&amp;gt;&amp;quot;&lt;br /&gt;
   onlyfiles = [f for f in listdir(mypath) if isfile(join(mypath, f))]&lt;br /&gt;
   onlyfiles.sort()&lt;br /&gt;
   &lt;br /&gt;
   cur_mols = 0&lt;br /&gt;
   lower_bound = 500000000  # start a merge once this many molecules accumulate&lt;br /&gt;
   upper_bound = 600000000  # never let one merged file exceed this&lt;br /&gt;
   files_to_merge = []&lt;br /&gt;
   &lt;br /&gt;
   def merge_files(f_t_m):&lt;br /&gt;
      # Name the merged file after its first and last inputs, e.g. a_b.smi.&lt;br /&gt;
      if not f_t_m:&lt;br /&gt;
         return&lt;br /&gt;
      first = f_t_m[0].split(&amp;quot;.&amp;quot;)[0]&lt;br /&gt;
      last = f_t_m[-1].split(&amp;quot;.&amp;quot;)[0]&lt;br /&gt;
      file_name_merge = first + &amp;quot;_&amp;quot; + last + &amp;quot;.smi&amp;quot;&lt;br /&gt;
      print(&amp;quot;File being created: &amp;quot; + file_name_merge)&lt;br /&gt;
   &lt;br /&gt;
      for file in f_t_m:&lt;br /&gt;
         # Append each input file to the merged output.&lt;br /&gt;
         process = subprocess.Popen(&amp;quot;cat &amp;quot; + join(mypath, file) + &amp;quot; &amp;gt;&amp;gt; &amp;quot; + file_name_merge, shell=True)&lt;br /&gt;
         process.wait()&lt;br /&gt;
   &lt;br /&gt;
   for file in onlyfiles:&lt;br /&gt;
      if file.split(&amp;quot;.&amp;quot;)[-1] == &amp;quot;smi&amp;quot;:&lt;br /&gt;
         print(&amp;quot;Working with &amp;quot; + file)&lt;br /&gt;
         # Count molecules as lines in the SMILES file.&lt;br /&gt;
         with open(join(mypath, file)) as fh:&lt;br /&gt;
            mol = sum(1 for line in fh)&lt;br /&gt;
         print(file, mol, cur_mols)&lt;br /&gt;
   &lt;br /&gt;
         if cur_mols + mol &amp;gt; lower_bound:&lt;br /&gt;
            if cur_mols + mol &amp;lt; upper_bound:&lt;br /&gt;
               # This file lands the batch inside the target window.&lt;br /&gt;
               files_to_merge.append(file)&lt;br /&gt;
               merge_files(files_to_merge)&lt;br /&gt;
               cur_mols = 0&lt;br /&gt;
               files_to_merge.clear()&lt;br /&gt;
            else:&lt;br /&gt;
               # This file would overshoot: merge what we have, then&lt;br /&gt;
               # emit this file as its own database.&lt;br /&gt;
               merge_files(files_to_merge)&lt;br /&gt;
               files_to_merge.clear()&lt;br /&gt;
               merge_files([file])&lt;br /&gt;
               cur_mols = 0&lt;br /&gt;
         else:&lt;br /&gt;
            cur_mols += mol&lt;br /&gt;
            files_to_merge.append(file)&lt;br /&gt;
   &lt;br /&gt;
   # Merge any leftovers that never reached the lower bound.&lt;br /&gt;
   if files_to_merge:&lt;br /&gt;
      merge_files(files_to_merge)&lt;br /&gt;
&lt;br /&gt;
==Building Arthor Indexes==&lt;br /&gt;
Once you&#039;ve merged the .smi files together, it&#039;s time to build the databases themselves. To do this we use the command:&lt;br /&gt;
&lt;br /&gt;
   smi2atdb -j 0 -p &amp;lt;The .smi file&amp;gt; &amp;lt;The .atdb&amp;gt; &lt;br /&gt;
&lt;br /&gt;
The flag &amp;quot;-j 0&amp;quot; enables parallel generation, using all available processors to generate the .atdb file. The &amp;quot;-p&amp;quot; flag stores the offset position in the ATDB file; since we&#039;re building indexes for the Web Application, you must use the &amp;quot;-p&amp;quot; flag when building indexes. Please note that the .atdb file must have the same base name as the .smi file, so that the Web Application knows to use the two files together and can correctly display the required images. Refer to pages 33-34 in the Arthor documentation for more information.&lt;br /&gt;
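&lt;br /&gt;
For example, for a hypothetical slice H04.smi, the index and its optional fingerprint file would be built as:&lt;br /&gt;
&lt;br /&gt;
   smi2atdb -j 0 -p H04.smi H04.atdb&lt;br /&gt;
   atdb2fp -j 0 H04.atdb&lt;br /&gt;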
&lt;br /&gt;
If there are many large .smi files and you do not want to build each .atdb file by hand, you can use the Python script below, which takes all of the .smi files in a directory and converts them to .atdb files. Set mypath to the directory containing the .smi files, and set the variable &amp;quot;create_fp&amp;quot; to False if you don&#039;t want to create .atdb.fp files (refer to page 9 in the Arthor documentation).&lt;br /&gt;
&lt;br /&gt;
   import subprocess&lt;br /&gt;
   &lt;br /&gt;
   from os import listdir&lt;br /&gt;
   from os.path import isfile, join&lt;br /&gt;
   &lt;br /&gt;
   # Directory that holds the .smi files to index.&lt;br /&gt;
   mypath = &amp;quot;&amp;lt;Path containing the .smi files&amp;gt;&amp;quot;&lt;br /&gt;
   onlyfiles = [f for f in listdir(mypath) if isfile(join(mypath, f))]&lt;br /&gt;
   &lt;br /&gt;
   # Set to False if you do not want .atdb.fp similarity indexes as well.&lt;br /&gt;
   create_fp = True&lt;br /&gt;
   &lt;br /&gt;
   bindir = &amp;quot;/nfs/ex9/work/xyz/psql/arthor-3.3-centos7/bin&amp;quot;&lt;br /&gt;
   &lt;br /&gt;
   for file in onlyfiles:&lt;br /&gt;
      arr = file.split(&amp;quot;.&amp;quot;)&lt;br /&gt;
      if arr[-1] == &amp;quot;smi&amp;quot;:&lt;br /&gt;
         # Build the substructure index, named after the .smi file so&lt;br /&gt;
         # the Web Application pairs the two files together.&lt;br /&gt;
         process = subprocess.Popen(&amp;quot;{0}/smi2atdb -j 0 -p {1} {2}.atdb&amp;quot;.format(bindir, join(mypath, file), arr[0]), shell=True)&lt;br /&gt;
         if process.wait() == 0:&lt;br /&gt;
            print(&amp;quot;SUCCESS! {0}.atdb file was created!&amp;quot;.format(arr[0]))&lt;br /&gt;
   &lt;br /&gt;
         if create_fp:&lt;br /&gt;
            # Derive the fingerprint (.atdb.fp) index from the .atdb file.&lt;br /&gt;
            process = subprocess.Popen(&amp;quot;{0}/atdb2fp -j 0 {1}.atdb&amp;quot;.format(bindir, arr[0]), shell=True)&lt;br /&gt;
            if process.wait() == 0:&lt;br /&gt;
               print(&amp;quot;SUCCESS! {0}.atdb.fp file was created!&amp;quot;.format(arr[0]))&lt;br /&gt;
&lt;br /&gt;
==Uploading Indexes to the Web Application==&lt;br /&gt;
Indexes can be exposed to the Web Application by pointing the &amp;quot;DATADIR&amp;quot; variable in the arthor.cfg file at the directory holding the .atdb files. This is already set up on n-1-136 and n-5-34.&lt;br /&gt;
   &lt;br /&gt;
==Further Arthor Optimizations==&lt;br /&gt;
The following edits can be made to the arthor.cfg file to optimize substructure search queries. More information can be found on pages 6-8 of the Arthor Documentation file.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;NodeAffinity&#039;&#039;&#039; (NUMA): when enabled, pins processing to the CPU sets closest to where the data sits in memory. There is a small start-up cost, so it is most useful for long-running services (see Non-Uniform Memory Access (NUMA)).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;AsyncHitCountAllowed=true|false&#039;&#039;&#039; After fetching a page of results from a substructure or formula search, the server spins off a background process to count the total number of hits. This can be resource-intensive for large databases and may not be desirable, or even needed, on servers under heavy load.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;AsyncHitCountMax=#&#039;&#039;&#039; The upper bound on the number of hits to count in background searches. If very generic queries are issued (e.g. benzene or methane), hundreds of millions of hits may be counted. Setting this value to anything other than zero (e.g. 10,000) stops the background search once it exceeds the limit. Note that some pathological queries may find very few hits but still end up scanning everything.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;MaxConcurrentSearches=#&#039;&#039;&#039; Controls the maximum number of searches that can run concurrently by setting the database pool size. The searches may be on the same or different databases. If a search comes in while the pool is full, it must wait for another search to finish, which increases the request time.&lt;br /&gt;
&lt;br /&gt;
Typically, if each search uses all the processing cores on a machine, then additional searches will run at 1/Nth the speed. If the request time is substantially larger than the search time, the request had to wait for resources to become available. When switching between a large number of databases it can be useful to have a larger pool size; the only trade-off is keeping file pointers open.  Default: 6&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Binary Fingerprint Folding&#039;&#039;&#039; Arthor uses binary circular fingerprints (ECFP4/radius=2) for similarity. When creating an ATFP index you can specify how large to make your fingerprints. Circular fingerprints are sparser than path-based fingerprints (e.g. Daylight) and so can be folded smaller without much degradation in performance. Folding can significantly reduce the footprint of a database and improve search speeds: a 256-bit fingerprint takes up 1/4 of the space of a 1024-bit one and can therefore be traversed 4x faster.&lt;br /&gt;
&lt;br /&gt;
This matters most for very large databases with billions of compounds; in such instances a minor drop in precision is likely tolerable, since ultimately all that happens is some hits may swap places in the hit list.&lt;br /&gt;
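&lt;br /&gt;
To illustrate what folding does, here is a minimal sketch (not Arthor&#039;s implementation): each set bit of the wide fingerprint is OR-ed into position i % size of the narrower one, so sparse fingerprints lose little information:&lt;br /&gt;
&lt;br /&gt;
   # Fold a sparse fingerprint, given as the indexes of its set bits,&lt;br /&gt;
   # down to &#039;size&#039; bits by OR-ing each bit into position i % size.&lt;br /&gt;
   def fold(set_bits, size=256):&lt;br /&gt;
      folded = [0] * size&lt;br /&gt;
      for i in set_bits:&lt;br /&gt;
         folded[i % size] = 1&lt;br /&gt;
      return folded&lt;br /&gt;
   &lt;br /&gt;
   print(sum(fold([3, 270, 515, 1020])))  # prints 3: bits 3 and 515 collide&lt;br /&gt;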
&lt;br /&gt;
==Virtual Memory==&lt;br /&gt;
In addition to modifying the arthor.cfg file, virtual memory settings can also be used to make queries faster. More information can be found on pages 10-16 of the Arthor Documentation.&lt;br /&gt;
&lt;br /&gt;
==Setting up Round Table==&lt;br /&gt;
This is a new feature in Arthor 3.0 and is currently in beta (January 2020). See section 2.4 in the manual.&lt;br /&gt;
As explained in the manual, &amp;quot;Round Table allows you to serve and split chemical searches across multiple host machines.  The implementation provides a lightweight proxy that forwards requests to other Arthor host servers that do the actual search.  Communication is done using the existing Web APIs.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Since Arthor requires CentOS 7, as of January 2020 we have 6 servers that are capable of running Arthor with Round Table.&lt;br /&gt;
&lt;br /&gt;
===Setting up Host Server===&lt;br /&gt;
If we want to add machines to the Round Table, for example &#039;nun&#039; and &#039;samekh&#039;, we need to edit their arthor.cfg files so that, when the head machine forwards requests, these secondary servers know to perform the searches they are given.&lt;br /&gt;
&lt;br /&gt;
   $ cat arthor.cfg&lt;br /&gt;
   MaxThreadsPerSearch=4 &lt;br /&gt;
   AutomaticIndex=false &lt;br /&gt;
   DATADIR=&amp;lt;Directory where smiles are located&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We then run the jar server on each of the host machines that hold data, on any available port.&lt;br /&gt;
&lt;br /&gt;
   java -jar /nfs/ex9/work/xyz/psql/arthor-3.3-centos7/java/arthor.jar --httpPort &amp;lt;port&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For our local (head) machine, the arthor.cfg file looks different: instead of pointing at data, it lists the remote hosts.&lt;br /&gt;
&lt;br /&gt;
   $ cat arthor.cfg&lt;br /&gt;
   [RoundTable] &lt;br /&gt;
   RemoteClient=http://skynet:&amp;lt;port number where jar server is running&amp;gt;/ &lt;br /&gt;
   RemoteClient=http://hal:&amp;lt;port number where jar server is running&amp;gt;/&lt;br /&gt;
&lt;br /&gt;
Please refer to Section 2 in the RoundTable Documentation file (pages 6-8) for more useful information on configuration.&lt;br /&gt;
&lt;br /&gt;
Then run the following command on n-1-136:&lt;br /&gt;
 &lt;br /&gt;
   java -jar /nfs/ex9/work/xyz/psql/arthor-3.3-centos7/java/arthor.jar --httpPort &amp;lt;port&amp;gt;&lt;br /&gt;
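&lt;br /&gt;
Once the head and the hosts are up, a quick sanity check is to fetch the root URL of each RemoteClient from the head machine. A minimal sketch, using the placeholder hostnames from the example above and assuming the jar servers answer plain HTTP on their ports:&lt;br /&gt;
&lt;br /&gt;
   # Check that each Round Table host answers HTTP on its jar-server port.&lt;br /&gt;
   from urllib.request import urlopen&lt;br /&gt;
   &lt;br /&gt;
   hosts = [&amp;quot;http://skynet:8080/&amp;quot;, &amp;quot;http://hal:8080/&amp;quot;]&lt;br /&gt;
   for url in hosts:&lt;br /&gt;
      try:&lt;br /&gt;
         with urlopen(url, timeout=5) as resp:&lt;br /&gt;
            print(url, &amp;quot;-&amp;gt;&amp;quot;, resp.getcode())&lt;br /&gt;
      except OSError as exc:&lt;br /&gt;
         print(url, &amp;quot;-&amp;gt; unreachable:&amp;quot;, exc)&lt;br /&gt;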
&lt;br /&gt;
===Public Arthor===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! CentOS 7 Machine&lt;br /&gt;
! Port&lt;br /&gt;
! Total Files Size&lt;br /&gt;
! Arthor Install Location&lt;br /&gt;
! Round Table Data Directory&lt;br /&gt;
! Active&lt;br /&gt;
|-&lt;br /&gt;
| samekh&lt;br /&gt;
| 10.20.0.41:8000&lt;br /&gt;
| 2.4TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/public_arthor/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| nun&lt;br /&gt;
| 10.20.0.40:8000&lt;br /&gt;
| 2.4TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/public_arthor/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| n-9-22&lt;br /&gt;
| 10.20.9.22:8000&lt;br /&gt;
| 2.4TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /export/db4/public_arthor/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Arthor Round Table Head===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! CentOS 7 Machine&lt;br /&gt;
! Port&lt;br /&gt;
! Database&lt;br /&gt;
! Arthor Install Location&lt;br /&gt;
! Round Table Data Directory&lt;br /&gt;
! Active&lt;br /&gt;
|-&lt;br /&gt;
| samekh&lt;br /&gt;
| 10.20.0.41:8080&lt;br /&gt;
| Enamine_REAL_Q2-2020-All-13B&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| nun&lt;br /&gt;
| 10.20.0.40:8080&lt;br /&gt;
| Enamine_REAL_Q2-2020-All-41B&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Arthor Round Table Nodes===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! CentOS 7 Machine&lt;br /&gt;
! Port&lt;br /&gt;
! Database&lt;br /&gt;
! Total Files Size&lt;br /&gt;
! Arthor Install Location&lt;br /&gt;
! Round Table Data Directory&lt;br /&gt;
! Active&lt;br /&gt;
|-&lt;br /&gt;
| samekh&lt;br /&gt;
| 10.20.0.41:8008&lt;br /&gt;
| Enamine_REAL_Q2-2020-All-13B (26 slices)&lt;br /&gt;
| 4.5TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| nun&lt;br /&gt;
| 10.20.0.40:8008&lt;br /&gt;
| Enamine_REAL_Space_June_2020_S41B (aa-ae, 5 slices), Enamine_REAL_Space_June_2020_M41B (af-am, 8 slices), zinc22_2d (H04~H25, 22 slices)&lt;br /&gt;
| 5.6TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| n-1-17&lt;br /&gt;
| 10.20.1.17:8008&lt;br /&gt;
| Enamine_REAL_Space_June_2020_M41B (aa-ae, 5 slices), zinc22_2d (H25~H29, 4 slices)&lt;br /&gt;
| 3.7TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| n-5-32&lt;br /&gt;
| 10.20.5.32:8008&lt;br /&gt;
| Enamine_REAL_Space_June_2020_M41B (an~az, 13 slices)&lt;br /&gt;
| 5.6TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| n-5-33&lt;br /&gt;
| 10.20.5.33:8008&lt;br /&gt;
| Enamine_REAL_Space_June_2020_M41B (ba-bl, 12 slices)&lt;br /&gt;
| 5.3TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Arthor (local 8081)===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! CentOS 7 Machine&lt;br /&gt;
! Port&lt;br /&gt;
! Database&lt;br /&gt;
! Total Files Size&lt;br /&gt;
! Arthor Install Location&lt;br /&gt;
! Round Table Data Directory&lt;br /&gt;
! Active&lt;br /&gt;
|-&lt;br /&gt;
| samekh&lt;br /&gt;
| 10.20.0.41:8081&lt;br /&gt;
| Enamine_REAL_Q2-2020-All-13B (26 slices)&lt;br /&gt;
| 4.5TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_local_8081/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| nun&lt;br /&gt;
| 10.20.0.40:8081&lt;br /&gt;
| Enamine_REAL_Space_June_2020_S41B (aa-ae, 5 slices), Enamine_REAL_Space_June_2020_M41B (aa-ar)&lt;br /&gt;
| 738GB (soft linked) + 1.9TB = 2.6TB total&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_local_8081/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Jgutie11</name></author>
	</entry>
	<entry>
		<id>http://wiki.docking.org/index.php?title=Arthor_Documentation_for_Future_Developer&amp;diff=13173</id>
		<title>Arthor Documentation for Future Developer</title>
		<link rel="alternate" type="text/html" href="http://wiki.docking.org/index.php?title=Arthor_Documentation_for_Future_Developer&amp;diff=13173"/>
		<updated>2021-01-06T01:44:16Z</updated>

		<summary type="html">&lt;p&gt;Jgutie11: /* Arthor Round Table Nodes */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Written by Jennifer Young on December 16, 2019. Last edited January 05, 2021&lt;br /&gt;
&lt;br /&gt;
==Install and Set Up on TomCat (Method 1)==&lt;br /&gt;
Arthor ran on n-1-136, which runs CentOS Linux release 7.7.1908 (Core).  You can check the version of CentOS with the following command&lt;br /&gt;
     cat /etc/centos-release&lt;br /&gt;
&lt;br /&gt;
Check your current version of Java with the following command:&lt;br /&gt;
    java -version&lt;br /&gt;
&lt;br /&gt;
On n-1-136 we are running openjdk version &amp;quot;1.8.0_222&amp;quot;, OpenJDK Runtime Environment (build 1.8.0_222-b10), and OpenJDK 64-Bit Server VM (build 25.222-b10, mixed mode)&lt;br /&gt;
If Java is not installed, install it using yum&lt;br /&gt;
&lt;br /&gt;
==See this wiki page for more detailed information about installing Tomcat on our cluster==&lt;br /&gt;
http://wiki.docking.org/index.php/Tomcat_Installation&lt;br /&gt;
&lt;br /&gt;
==Open port for Arthor==&lt;br /&gt;
In order for Arthor to be usable in the browser, the port you wish to run it on must be opened.&lt;br /&gt;
https://www.thegeekdiary.com/how-to-open-a-ports-in-centos-rhel-7/&lt;br /&gt;
&lt;br /&gt;
===Step 1: Check Port Status===&lt;br /&gt;
Check that the port is not open and that Apache is not showing that port. &lt;br /&gt;
    netstat -na | grep &amp;lt;port number you are checking&amp;gt;&lt;br /&gt;
&lt;br /&gt;
    lsof -i -P |grep http&lt;br /&gt;
&lt;br /&gt;
===Step 2: Check Port Status in IP Tables===&lt;br /&gt;
    iptables-save | grep &amp;lt;port number you are checking&amp;gt;&lt;br /&gt;
&lt;br /&gt;
I skipped Step 3 from the guide, because there was a lot of information in the /etc/services file and I didn&#039;t want to edit it and break something.&lt;br /&gt;
&lt;br /&gt;
===Step 4: Open Firewall Ports===&lt;br /&gt;
I did not include the zone=public section because the stand-alone servers are usually used for private instances of Arthor and SmallWorld.&lt;br /&gt;
Run as root.&lt;br /&gt;
    firewall-cmd --add-port=&amp;lt;port number you are adding&amp;gt;/tcp --permanent&lt;br /&gt;
&lt;br /&gt;
You need to reload the firewall after a change is made.&lt;br /&gt;
    firewall-cmd --reload&lt;br /&gt;
&lt;br /&gt;
===Step 5: Check that port is working===&lt;br /&gt;
To check that the port is active, run.&lt;br /&gt;
    iptables -nL&lt;br /&gt;
&lt;br /&gt;
You should see something along the lines of: &lt;br /&gt;
    ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:&amp;lt;port number you&#039;re adding&amp;gt; ctstate NEW,UNTRACKED&lt;br /&gt;
&lt;br /&gt;
==How to run standalone Arthor instance==&lt;br /&gt;
&lt;br /&gt;
===Step 1: Use or start a bash shell===&lt;br /&gt;
You can check your default shell using&lt;br /&gt;
    echo $SHELL&lt;br /&gt;
&lt;br /&gt;
If your default shell is csh, use &lt;br /&gt;
    bash&lt;br /&gt;
to start a new bash shell in the current terminal window.  Note that echo $SHELL will show you your default shell regardless of the current shell.&lt;br /&gt;
&lt;br /&gt;
===Step 2: Set your environment variables===&lt;br /&gt;
    export ARTHOR_DIR=/opt/nextmove/arthor/arthor-3.1-centos7&lt;br /&gt;
    export PATH=$ARTHOR_DIR/bin/:$PATH&lt;br /&gt;
&lt;br /&gt;
Make sure the ARTHOR_DIR variable is set to the directory for the latest version of Arthor or whichever version you would like to test.&lt;br /&gt;
The PATH environment variable is needed if you wish to use the Arthor tools from the command line&lt;br /&gt;
&lt;br /&gt;
===Step 3: Run the arthor-server.jar===&lt;br /&gt;
    java -jar /opt/nextmove/arthor/arthor-3.0-rt-beta-linux/java/arthor-server.jar --httpPort &amp;lt;your httpPort&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Setting environment variables for TomCat Server==&lt;br /&gt;
Set the environment variables in the setenv.sh file.  Note: Be sure to edit the file in the directory corresponding to the latest version of TomCat.  As of December 2019, we are running 9.0.27 on n-1-136.&lt;br /&gt;
&lt;br /&gt;
   vim  /opt/tomcat/apache-tomcat-9.0.27/bin/setenv.sh&lt;br /&gt;
&lt;br /&gt;
Add the line below to the setenv.sh file above, or substitute the path to wherever you currently store the arthor.cfg file&lt;br /&gt;
   export ARTHOR_CONFIG=/usr/local/tomcat/arthor.cfg&lt;br /&gt;
&lt;br /&gt;
Here is an example of the arthor.cfg file:&lt;br /&gt;
   # Arthor generated config file&lt;br /&gt;
   BINDIR=/opt/nextmove/arthor/arthor-2.1.2-centos7/bin&lt;br /&gt;
   DATADIR=/usr/local/tomcat/arthor_data&lt;br /&gt;
   STAGEDIR=/usr/local/arthor_data/stage&lt;br /&gt;
   NTHREADS=64 . &lt;br /&gt;
   NODEAFFINITY=true&lt;br /&gt;
   SearchAsYouDraw=true&lt;br /&gt;
   AutomaticIndex=true&lt;br /&gt;
   DEPICTION=./depict/bot/svg?w=%w&amp;amp;h=%h&amp;amp;svgunits=px&amp;amp;smi=%s&amp;amp;zoom=0.8&amp;amp;sma=%m&amp;amp;smalim=1&lt;br /&gt;
   RESOLVER=&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Important parts of the arthor.cfg file&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;BINDIR&#039;&#039;&#039; is the location of the Arthor command line binaries.  These are used to generate the Arthor index files and to perform searches directly on n-1-136.  An example of this would be using atdbgrep for substructure search. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;DATADIR&#039;&#039;&#039; This is the directory where the Arthor data files live.  Location where the index files will be created and loaded from.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;STAGEDIR&#039;&#039;&#039; Location where the index files will be built before being moved into the DATADIR.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;NTHREADS&#039;&#039;&#039; The number of threads to use for both ATDB and ATFP searches&lt;br /&gt;
&lt;br /&gt;
Set &#039;&#039;&#039;AutomaticIndex&#039;&#039;&#039; to false if you don&#039;t want new smiles files added to the data directory to be indexed automatically&lt;br /&gt;
&lt;br /&gt;
==Background==&lt;br /&gt;
Before working with Arthor, it is recommended that you familiarize yourself with the Arthor documentation. Some useful pages to look at include 3-5, 22-25 and 33-39. Of course, reading everything would be the best!&lt;br /&gt;
&lt;br /&gt;
==Checking Disk Space Usage==&lt;br /&gt;
Before building arthor indexes, it&#039;s always a good thing to check what percent of the memory is being used. Try to be cautious with how much memory you have left, and make sure to check while building indexes to make sure that you have enough space. To check, run the following command:&lt;br /&gt;
&lt;br /&gt;
   df -h /&amp;lt;directory with disc&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Building Large Databases==&lt;br /&gt;
At the moment, we are building databases of size 500M molecules by merging smile files. There are multiple methods of trying to create large databases, one being merging based off of the same H?? prefix and stopping once the database reaches &amp;gt; 500M molecules (or whatever upperbound you want to use). Here is some python code that simulates this merging process. Essentially the program takes all of the .smi files within an input directory, sorts them lexiographically, and begins merging these .smi files together in order until the size reaches &amp;gt; 500M molecules.&lt;br /&gt;
   &lt;br /&gt;
Feel free to modify it if you think a better method exists.&lt;br /&gt;
   &lt;br /&gt;
   import subprocess&lt;br /&gt;
   import sys&lt;br /&gt;
   import os                                                                                                                                                                           &lt;br /&gt;
   &lt;br /&gt;
   from os import listdir&lt;br /&gt;
   from os.path import isfile, join&lt;br /&gt;
   &lt;br /&gt;
   mypath = &amp;quot;&amp;lt;Path to directory holding .smi files&amp;gt;&amp;quot;&lt;br /&gt;
   onlyfiles = [f for f in listdir(mypath) if isfile(join(mypath, f))]&lt;br /&gt;
   onlyfiles.sort()&lt;br /&gt;
   &lt;br /&gt;
   create_fp = True&lt;br /&gt;
   cur_mols = 0&lt;br /&gt;
   lower_bound = 500000000&lt;br /&gt;
   upper_bound = 600000000&lt;br /&gt;
   files_to_merge = []&lt;br /&gt;
   &lt;br /&gt;
   def merge_files(f_t_m):&lt;br /&gt;
      arr = f_t_m[0].split(&amp;quot;.&amp;quot;)&lt;br /&gt;
      arr2 = f_t_m[len(f_t_m) - 1].split(&amp;quot;.&amp;quot;)&lt;br /&gt;
      file_name_merge = (arr[0] + &amp;quot;_&amp;quot; + arr2[0] + &amp;quot;.smi&amp;quot;)&lt;br /&gt;
      print (&amp;quot;File being created: &amp;quot; + file_name_merge)&lt;br /&gt;
   &lt;br /&gt;
      for file in f_t_m:&lt;br /&gt;
         tmp = file.split(&amp;quot;.&amp;quot;)&lt;br /&gt;
         process = subprocess.Popen(&amp;quot;cat &amp;quot; + join(mypath, file) + &amp;quot; &amp;gt;&amp;gt; &amp;quot; + file_name_merge, shell=True)&lt;br /&gt;
         process.wait()&lt;br /&gt;
   &lt;br /&gt;
   for file in onlyfiles:&lt;br /&gt;
      arr = file.split(&amp;quot;.&amp;quot;)&lt;br /&gt;
   &lt;br /&gt;
      if (arr[len(arr) - 1] == &amp;quot;smi&amp;quot;):&lt;br /&gt;
         print(&amp;quot;Working with &amp;quot; + file)&lt;br /&gt;
         mol = sum(1 for line in open(join(mypath, file)))&lt;br /&gt;
         print(file, mol, cur_mols)&lt;br /&gt;
   &lt;br /&gt;
         if (cur_mols + mol &amp;gt; lower_bound):&lt;br /&gt;
            if (cur_mols + mol &amp;lt; upper_bound):&lt;br /&gt;
               files_to_merge.append(file)&lt;br /&gt;
               merge_files(files_to_merge)&lt;br /&gt;
               cur_mols = 0&lt;br /&gt;
               files_to_merge.clear()&lt;br /&gt;
            else:&lt;br /&gt;
               merge_files(files_to_merge)&lt;br /&gt;
               files_to_merge.clear()&lt;br /&gt;
               files_to_merge.append(file)&lt;br /&gt;
               merge_files(files_to_merge)&lt;br /&gt;
               cur_mols = 0&lt;br /&gt;
               files_to_merge.clear()&lt;br /&gt;
         else:&lt;br /&gt;
            cur_mols += mol&lt;br /&gt;
            files_to_merge.append(file)&lt;br /&gt;
   &lt;br /&gt;
   if (len(files_to_merge) != 0):&lt;br /&gt;
      merge_files(files_to_merge)&lt;br /&gt;
&lt;br /&gt;
==Building Arthor Indexes==&lt;br /&gt;
Once you&#039;ve merged the .smi files together, it&#039;s time to start building the databases themselves. To do this we use the command &lt;br /&gt;
&lt;br /&gt;
   smi2atdb -j 0 -p &amp;lt;The .smi file&amp;gt; &amp;lt;The .atdb&amp;gt; &lt;br /&gt;
&lt;br /&gt;
The flag &amp;quot;-j 0&amp;quot; enables parallel generation and utilizes all available processors to generate the .atdb file. The &amp;quot;-p&amp;quot; flag stores the offset position in the ATDB file. Since we&#039;re building indexes for the Web Application, you must use the &amp;quot;-p&amp;quot; flag when building indexes. Please note that the name of the .smi file should also be the name of the .atdb file. That way, the Web Application knows to use these files together and correctly display the required images. Refer to pages 33-34 in the Arthor documentation for more information.&lt;br /&gt;
&lt;br /&gt;
If there are too many large .smi files and you do not want to manually build each .atdb file, you can use this python script which takes all of the .smi files in the current directory and converts them to .atdb files. Make sure to modify mypath to the directory containing the .smi files. You can change the variable &amp;quot;create_fp&amp;quot; to false if you don&#039;t want to create .atdb.fp files (refer to page 9 in the Arthor documentation).&lt;br /&gt;
&lt;br /&gt;
   import subprocess&lt;br /&gt;
   import sys&lt;br /&gt;
   import os&lt;br /&gt;
   &lt;br /&gt;
   from os import listdir&lt;br /&gt;
   from os.path import isfile, join&lt;br /&gt;
   &lt;br /&gt;
   mypath = &amp;quot;&amp;lt;Path containing the .smi files&amp;quot;&lt;br /&gt;
   onlyfiles = [f for f in listdir(mypath) if isfile(join(mypath, f))]&lt;br /&gt;
   &lt;br /&gt;
   create_fp = True&lt;br /&gt;
   &lt;br /&gt;
   for file in onlyfiles:&lt;br /&gt;
      arr = file.split(&amp;quot;.&amp;quot;)&lt;br /&gt;
   &lt;br /&gt;
      if (arr[len(arr) - 1] == &amp;quot;smi&amp;quot;):&lt;br /&gt;
         process = subprocess.Popen(&amp;quot;/nfs/ex9/work/xyz/psql/arthor-3.3-centos7/bin/smi2atdb -j 0 -p {0} {1}.atdb&amp;quot;.format(join(mypath, file), arr[0]), shell=True)&lt;br /&gt;
         process.wait()&lt;br /&gt;
   &lt;br /&gt;
         print(&amp;quot;SUCCESS! {0}.atdb file was created!&amp;quot;.format(arr[0]))&lt;br /&gt;
   &lt;br /&gt;
         if (create_fp):&lt;br /&gt;
            process = subprocess.Popen(&amp;quot;/nfs/ex9/work/xyz/psql/arthor-3.3-centos7/bin/atdb2fp -j 0 {0}.atdb&amp;quot;.format(arr[0]), shell=True)&lt;br /&gt;
            process.wait()&lt;br /&gt;
      &lt;br /&gt;
            print(&amp;quot;SUCCESS! {0}.atdb.fp file was created!&amp;quot;.format(arr[0]))&lt;br /&gt;
&lt;br /&gt;
==Uploading Indexes to the Web Application==&lt;br /&gt;
One can upload indexes to the Web Application by changing the &amp;quot;DATADIR&amp;quot; variable in the arthor.cfg file to the directory holding the .atdb files. This is already set up on n-1-136 and n-5-34.&lt;br /&gt;
   &lt;br /&gt;
==Further Arthor Optimizations==&lt;br /&gt;
The following edits can be made to the arthor.cfg to optimize substructure search queries. More information can be found in pages 6-8 in the Arthor Documentation file.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;NodeAffinity NUMA&#039;&#039;&#039;: optimized flag, pin processing to specific CPU sets to where the data is located in memory. There is a small start-up cost and is most useful for long running services (see Non-Uniform Memory Access (NUMA))&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;AsyncHitCountAllowed=true|false&#039;&#039;&#039; After fetching a page from a substructure or formula search the server will spin off a background process to count the total number of hits. This can be resource intensive for large databases and may not be desirable for servers under heavy load and may not even be needed. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;AsyncHitCountMax=#&#039;&#039;&#039; The upper-bound for the number of hits to retrieve in background searches. If very generic queries are issued (e.g. benzene or methane) hundred’s of millions of hits may be counted. Setting this value to anything other than zero (e.g. 10,000) will stop the background search if it exceeds this limit. Note some pathological queries may find very few hits but still end up looking at everything. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;MaxConcurrentSearches=#&#039;&#039;&#039; Controls the maximum number of searches that can be run concurrently by setting the database pool size. The searches may be on the same or different databases. If a search comes in and the pool is full it will have to wait for another search to finish - this increases the request time. &lt;br /&gt;
&lt;br /&gt;
Typically if each search is using all the processing cores on a machine then additional searches will run at 1/Nth the speed. If the request time is substantially larger that the search time the request had to wait for resources to become available. When switching between a large number of databases it can be useful to have a larger pool size, the only trade off is keeping file pointers open.  Default: 6 &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Binary Fingerprint Folding&#039;&#039;&#039; Arthor uses binary circular fingerprints (ECFP4/radius=2) for similarity. When creating an ATFP index you can specify how large to make your fingerprints. Circular fingerprints are sparser than path based fingerprints (e.g. Daylight) and so can be folded smaller without too much degradation in performance. Folding can significantly reduce the footprint size of a database and improve search speeds. A 256-bit fingerprint takes up 1/4 of the space of 1024-bit and can therefore be traversed 4x faster. &lt;br /&gt;
&lt;br /&gt;
This is more important for very large databases with billions of compounds, in such instances a minor drop in precision is likely tolerable as ultimately all that happens is some hits may swap places in the hit list. &lt;br /&gt;
&lt;br /&gt;
==Virtual Memory==&lt;br /&gt;
In addition to modifying the arthor.cfg file, virtual memory can also be used to make queries faster. There can still be More information can be found in pages 10-16 in the Arthor Documentation.&lt;br /&gt;
&lt;br /&gt;
==Setting up Round Table==&lt;br /&gt;
This is a new feature in Arthor 3.0 and is currently beta (January 2020). See section 2.4 in the manual&lt;br /&gt;
As explained in the manual, &amp;quot;Round Table allows you to serve and split chemical searches across multiple host machines.  The implementation provides a lightweight proxy that forwards requests to other Arthor host servers that do the actual search.  Communication is done using the existing Web APIs.&lt;br /&gt;
&lt;br /&gt;
Since Arthor requires CentOS 7, as of January 2020 we have 6 servers that are capable of running Arthor with Round Table.&lt;br /&gt;
&lt;br /&gt;
===Setting up Host Server===&lt;br /&gt;
If we want to add machines to the Round Table, for example &#039;nun&#039; and &#039;samekh&#039;, we need to edit their arthor.cfg files so that when our local machine forwards requests, these secondary servers know to perform the searches they are given.&lt;br /&gt;
&lt;br /&gt;
   $ cat arthor.cfg&lt;br /&gt;
   MaxThreadsPerSearch=4 &lt;br /&gt;
   AutomaticIndex=false &lt;br /&gt;
   DATADIR=&amp;lt;Directory where smiles are located&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We then run the jar server on each of these data-holding host machines, on any available port. &lt;br /&gt;
&lt;br /&gt;
   java -jar /nfs/ex9/work/xyz/psql/arthor-3.3-centos7/java/arthor.jar --httpPort &amp;lt;port&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For our local machine, the arthor.cfg file will look different.&lt;br /&gt;
&lt;br /&gt;
   $ cat arthor.cfg&lt;br /&gt;
   [RoundTable] &lt;br /&gt;
   RemoteClient=http://skynet:&amp;lt;port number where jar server is running&amp;gt;/ &lt;br /&gt;
   RemoteClient=http://hal:&amp;lt;port number where jar server is running&amp;gt;/&lt;br /&gt;
&lt;br /&gt;
Please refer to Section 2 in the RoundTable Documentation file (pages 6-8) for more useful information on configuration.&lt;br /&gt;
&lt;br /&gt;
Then run the following command on n-1-136:&lt;br /&gt;
 &lt;br /&gt;
   java -jar /nfs/ex9/work/xyz/psql/arthor-3.3-centos7/java/arthor.jar --httpPort &amp;lt;port&amp;gt;&lt;br /&gt;
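&lt;br /&gt;
Before starting the head, it can help to confirm that each RemoteClient host is actually reachable on its port. A small Python sketch (the hostnames and ports below are placeholders):&lt;br /&gt;
&lt;br /&gt;
    import socket&lt;br /&gt;
    &lt;br /&gt;
    # Placeholder (host, port) pairs matching the RemoteClient entries above.&lt;br /&gt;
    remotes = [(&amp;quot;skynet&amp;quot;, 8008), (&amp;quot;hal&amp;quot;, 8008)]&lt;br /&gt;
    &lt;br /&gt;
    for host, port in remotes:&lt;br /&gt;
        try:&lt;br /&gt;
            socket.create_connection((host, port), timeout=5).close()&lt;br /&gt;
            print(&amp;quot;OK: {0}:{1}&amp;quot;.format(host, port))&lt;br /&gt;
        except socket.error as exc:&lt;br /&gt;
            print(&amp;quot;UNREACHABLE: {0}:{1} ({2})&amp;quot;.format(host, port, exc))&lt;br /&gt;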
&lt;br /&gt;
===Public Arthor===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! CentOS 7 Machine&lt;br /&gt;
! Port&lt;br /&gt;
! Arthor Install Location&lt;br /&gt;
! Round Table Data Directory&lt;br /&gt;
! Active&lt;br /&gt;
|-&lt;br /&gt;
| samekh&lt;br /&gt;
| 10.20.0.41:8000&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/public_arthor/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| nun&lt;br /&gt;
| 10.20.0.40:8000&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/public_arthor/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| n-9-22&lt;br /&gt;
| 10.20.9.22:8000&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /export/db4/public_arthor/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Arthor Round Table Head===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! CentOS 7 Machine&lt;br /&gt;
! Port&lt;br /&gt;
! Database&lt;br /&gt;
! Arthor Install Location&lt;br /&gt;
! Round Table Data Directory&lt;br /&gt;
! Active&lt;br /&gt;
|-&lt;br /&gt;
| samekh&lt;br /&gt;
| 10.20.0.41:8080&lt;br /&gt;
| Enamine_REAL_Q2-2020-All-13B&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| nun&lt;br /&gt;
| 10.20.0.40:8080&lt;br /&gt;
| Enamine_REAL_Q2-2020-All-41B&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Arthor Round Table Nodes===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! CentOS 7 Machine&lt;br /&gt;
! Port&lt;br /&gt;
! Database&lt;br /&gt;
! Total Files Size&lt;br /&gt;
! Arthor Install Location&lt;br /&gt;
! Round Table Data Directory&lt;br /&gt;
! Active&lt;br /&gt;
|-&lt;br /&gt;
| samekh&lt;br /&gt;
| 10.20.0.41:8008&lt;br /&gt;
| Enamine_REAL_Q2-2020-M-13B (aa-ax, 24 slices), Enamine_REAL_Q2-2020-S-13B (aa-ab, 2 slices)&lt;br /&gt;
| 3.5TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| nun&lt;br /&gt;
| 10.20.0.40:8008&lt;br /&gt;
| Enamine_REAL_Space_June_2020_S41B (aa-ae, 5 slices)&lt;br /&gt;
| 738GB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| n-5-33&lt;br /&gt;
| 10.20.5.33:8008&lt;br /&gt;
| Enamine_REAL_Space_June_2020_M41B (ba-bl, 12 slices)&lt;br /&gt;
| 5.3TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Arthor (local 8081)===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! CentOS 7 Machine&lt;br /&gt;
! Port&lt;br /&gt;
! Database&lt;br /&gt;
! Total Files Size&lt;br /&gt;
! Arthor Install Location&lt;br /&gt;
! Round Table Data Directory&lt;br /&gt;
! Active&lt;br /&gt;
|-&lt;br /&gt;
| samekh&lt;br /&gt;
| 10.20.0.41:8081&lt;br /&gt;
| Enamine_REAL_Q2-2020-All-13B (26 slices)&lt;br /&gt;
| 2.5TB (soft linked) + 2.0TB = 4.5TB total&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_local_8081/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| nun&lt;br /&gt;
| 10.20.0.40:8081&lt;br /&gt;
| Enamine_REAL_Space_June_2020_S41B (aa-ae, 5 slices), Enamine_REAL_Space_June_2020_M41B (aa-ar)&lt;br /&gt;
| 738GB (soft linked) + 1.9TB = 2.6TB total&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_local_8081/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Jgutie11</name></author>
	</entry>
	<entry>
		<id>http://wiki.docking.org/index.php?title=Arthor_Documentation_for_Future_Developer&amp;diff=13172</id>
		<title>Arthor Documentation for Future Developer</title>
		<link rel="alternate" type="text/html" href="http://wiki.docking.org/index.php?title=Arthor_Documentation_for_Future_Developer&amp;diff=13172"/>
		<updated>2021-01-06T01:43:15Z</updated>

		<summary type="html">&lt;p&gt;Jgutie11: /* Arthor Round Table Nodes */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Written by Jennifer Young on December 16, 2019. Last edited January 05, 2021&lt;br /&gt;
&lt;br /&gt;
==Install and Set Up on TomCat (Method 1)==&lt;br /&gt;
Arthor ran on n-1-136, which runs CentOS Linux release 7.7.1908 (Core).  You can check the version of CentOS with the following command:&lt;br /&gt;
     cat /etc/centos-release&lt;br /&gt;
&lt;br /&gt;
Check your current version of Java with the following command:&lt;br /&gt;
    java -version&lt;br /&gt;
&lt;br /&gt;
On n-1-136 we are running openjdk version &amp;quot;1.8.0_222&amp;quot;, OpenJDK Runtime Environment (build 1.8.0_222-b10), and OpenJDK 64-Bit Server VM (build 25.222-b10, mixed mode)&lt;br /&gt;
If Java is not installed, install it using yum.&lt;br /&gt;
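For example, on CentOS 7 the OpenJDK 8 runtime is typically available as the package below (run as root; the exact package name is an assumption, so check your repositories):&lt;br /&gt;
    yum install java-1.8.0-openjdk&lt;br /&gt;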
&lt;br /&gt;
==See this wiki page for more detailed information about installing Tomcat on our cluster==&lt;br /&gt;
http://wiki.docking.org/index.php/Tomcat_Installation&lt;br /&gt;
&lt;br /&gt;
==Open port for Arthor==&lt;br /&gt;
In order for Arthor to be usable in the browser, the port you wish to run it on must be opened.&lt;br /&gt;
https://www.thegeekdiary.com/how-to-open-a-ports-in-centos-rhel-7/&lt;br /&gt;
&lt;br /&gt;
===Step 1: Check Port Status===&lt;br /&gt;
Check that the port is not already open and that Apache is not listening on that port. &lt;br /&gt;
    netstat -na | grep &amp;lt;port number you are checking&amp;gt;&lt;br /&gt;
&lt;br /&gt;
    lsof -i -P |grep http&lt;br /&gt;
&lt;br /&gt;
===Step 2: Check Port Status in IP Tables===&lt;br /&gt;
    iptables-save | grep &amp;lt;port number you are checking&amp;gt;&lt;br /&gt;
&lt;br /&gt;
I skipped Step 3 from the guide, because there was a lot of information in the /etc/services file and I didn&#039;t want to edit it and break something.&lt;br /&gt;
&lt;br /&gt;
===Step 4: Open Firewall Ports===&lt;br /&gt;
I did not include the zone=public section because the stand-alone servers are usually used for private instances of Arthor and SmallWorld.&lt;br /&gt;
Run as root.&lt;br /&gt;
    firewall-cmd --add-port=&amp;lt;port number you are adding&amp;gt;/tcp --permanent&lt;br /&gt;
&lt;br /&gt;
You need to reload the firewall after a change is made.&lt;br /&gt;
    firewall-cmd --reload&lt;br /&gt;
&lt;br /&gt;
===Step 5: Check that port is working===&lt;br /&gt;
To check that the port is active, run:&lt;br /&gt;
    iptables -nL&lt;br /&gt;
&lt;br /&gt;
You should see something along the lines of: &lt;br /&gt;
    ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:&amp;lt;port number you&#039;re adding&amp;gt; ctstate NEW,UNTRACKED&lt;br /&gt;
&lt;br /&gt;
==How to run standalone Arthor instance==&lt;br /&gt;
&lt;br /&gt;
===Step 1: Use or start a bash shell===&lt;br /&gt;
You can check your default shell using&lt;br /&gt;
    echo $SHELL&lt;br /&gt;
&lt;br /&gt;
If your default shell is csh, use &lt;br /&gt;
    bash&lt;br /&gt;
to start a new bash shell in the current terminal window.  Note that echo $SHELL will show you your default shell regardless of the current shell.&lt;br /&gt;
&lt;br /&gt;
===Step 2: Set your environment variables===&lt;br /&gt;
    export ARTHOR_DIR=/opt/nextmove/arthor/arthor-3.1-centos7&lt;br /&gt;
    export PATH=$ARTHOR_DIR/bin/:$PATH&lt;br /&gt;
&lt;br /&gt;
Make sure the ARTHOR_DIR variable is set to the directory for the latest version of Arthor or whichever version you would like to test.&lt;br /&gt;
The PATH environment variable is needed if you wish to use the Arthor tools from the command line.&lt;br /&gt;
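For example, you can verify that the tools are on your PATH with:&lt;br /&gt;
    which smi2atdb&lt;br /&gt;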
&lt;br /&gt;
===Step 3: Run the arthor-server.jar===&lt;br /&gt;
    java -jar /opt/nextmove/arthor/arthor-3.0-rt-beta-linux/java/arthor-server.jar --httpPort &amp;lt;your httpPort&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Setting environment variables for TomCat Server==&lt;br /&gt;
Set the environment variables in the setenv.sh file.  Note: Be sure to edit the file in the directory corresponding to the latest version of TomCat.  As of December 2019, we are running 9.0.27 on n-1-136.&lt;br /&gt;
&lt;br /&gt;
   vim  /opt/tomcat/apache-tomcat-9.0.27/bin/setenv.sh&lt;br /&gt;
&lt;br /&gt;
Add the line below to the setenv.sh file above, or substitute the path to wherever you currently store the arthor.cfg file.&lt;br /&gt;
   export ARTHOR_CONFIG=/usr/local/tomcat/arthor.cfg&lt;br /&gt;
&lt;br /&gt;
Here is an example of the arthor.cfg file:&lt;br /&gt;
   # Arthor generated config file&lt;br /&gt;
   BINDIR=/opt/nextmove/arthor/arthor-2.1.2-centos7/bin&lt;br /&gt;
   DATADIR=/usr/local/tomcat/arthor_data&lt;br /&gt;
   STAGEDIR=/usr/local/arthor_data/stage&lt;br /&gt;
   NTHREADS=64&lt;br /&gt;
   NODEAFFINITY=true&lt;br /&gt;
   SearchAsYouDraw=true&lt;br /&gt;
   AutomaticIndex=true&lt;br /&gt;
   DEPICTION=./depict/bot/svg?w=%w&amp;amp;h=%h&amp;amp;svgunits=px&amp;amp;smi=%s&amp;amp;zoom=0.8&amp;amp;sma=%m&amp;amp;smalim=1&lt;br /&gt;
   RESOLVER=&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Important parts of the arthor.cfg file&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;BINDIR&#039;&#039;&#039; is the location of the Arthor command line binaries.  These are used to generate the Arthor index files and to perform searches directly on n-1-136.  An example of this would be using atdbgrep for substructure search. &lt;br /&gt;
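&lt;br /&gt;
As a sketch, a command-line substructure search might look like the line below; the exact arguments are an assumption here, so check the Arthor documentation for the real atdbgrep syntax:&lt;br /&gt;
    atdbgrep &#039;c1ccccc1&#039; &amp;lt;database&amp;gt;.atdb&lt;br /&gt;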
&lt;br /&gt;
&#039;&#039;&#039;DATADIR&#039;&#039;&#039; The directory where the Arthor data files live; the index files are created in and loaded from this location.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;STAGEDIR&#039;&#039;&#039; Location where the index files will be built before being moved into the DATADIR.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;NTHREADS&#039;&#039;&#039; The number of threads to use for both ATDB and ATFP searches.&lt;br /&gt;
&lt;br /&gt;
Set &#039;&#039;&#039;AutomaticIndex&#039;&#039;&#039; to false if you do not want new SMILES files added to the data directory to be indexed automatically.&lt;br /&gt;
&lt;br /&gt;
==Background==&lt;br /&gt;
Before working with Arthor, it is recommended that you familiarize yourself with the Arthor documentation. Some useful pages to look at include 3-5, 22-25, and 33-39. Of course, reading everything would be best!&lt;br /&gt;
&lt;br /&gt;
==Checking Disk Space Usage==&lt;br /&gt;
Before building Arthor indexes, it is always a good idea to check how much disk space is in use. Be cautious about how much space you have left, and keep checking while indexes build to make sure you still have enough. To check, run the following command:&lt;br /&gt;
&lt;br /&gt;
   df -h /&amp;lt;directory with disk&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Building Large Databases==&lt;br /&gt;
At the moment, we are building databases of about 500M molecules by merging SMILES files. There are multiple ways to create large databases; one is to merge files that share the same H?? prefix, stopping once the database exceeds 500M molecules (or whatever upper bound you want to use). Here is some Python code that performs this merging. The program takes all of the .smi files within an input directory, sorts them lexicographically, and merges them together in order until the size exceeds 500M molecules.&lt;br /&gt;
   &lt;br /&gt;
Feel free to modify it if you think a better method exists.&lt;br /&gt;
   &lt;br /&gt;
   import shutil&lt;br /&gt;
   &lt;br /&gt;
   from os import listdir&lt;br /&gt;
   from os.path import isfile, join&lt;br /&gt;
   &lt;br /&gt;
   mypath = &amp;quot;&amp;lt;Path to directory holding .smi files&amp;gt;&amp;quot;&lt;br /&gt;
   &lt;br /&gt;
   # Merge until a batch holds at least ~500M molecules, but never let a&lt;br /&gt;
   # single merged file grow past ~600M molecules.&lt;br /&gt;
   lower_bound = 500000000&lt;br /&gt;
   upper_bound = 600000000&lt;br /&gt;
   &lt;br /&gt;
   smi_files = sorted(f for f in listdir(mypath)&lt;br /&gt;
                      if isfile(join(mypath, f)) and f.endswith(&amp;quot;.smi&amp;quot;))&lt;br /&gt;
   &lt;br /&gt;
   def count_mols(path):&lt;br /&gt;
       # Each line of a .smi file is one molecule.&lt;br /&gt;
       with open(path) as fh:&lt;br /&gt;
           return sum(1 for _ in fh)&lt;br /&gt;
   &lt;br /&gt;
   def merge_files(batch):&lt;br /&gt;
       # Name the merged file after its first and last members.&lt;br /&gt;
       first = batch[0].split(&amp;quot;.&amp;quot;)[0]&lt;br /&gt;
       last = batch[-1].split(&amp;quot;.&amp;quot;)[0]&lt;br /&gt;
       file_name_merge = first + &amp;quot;_&amp;quot; + last + &amp;quot;.smi&amp;quot;&lt;br /&gt;
       print(&amp;quot;File being created: &amp;quot; + file_name_merge)&lt;br /&gt;
       with open(file_name_merge, &amp;quot;wb&amp;quot;) as out:&lt;br /&gt;
           for name in batch:&lt;br /&gt;
               with open(join(mypath, name), &amp;quot;rb&amp;quot;) as src:&lt;br /&gt;
                   shutil.copyfileobj(src, out)&lt;br /&gt;
   &lt;br /&gt;
   cur_mols = 0&lt;br /&gt;
   files_to_merge = []&lt;br /&gt;
   for name in smi_files:&lt;br /&gt;
       mols = count_mols(join(mypath, name))&lt;br /&gt;
       print(name, mols, cur_mols)&lt;br /&gt;
       if files_to_merge and cur_mols + mols &amp;gt; upper_bound:&lt;br /&gt;
           # Adding this file would overshoot the cap: flush what we have.&lt;br /&gt;
           merge_files(files_to_merge)&lt;br /&gt;
           files_to_merge, cur_mols = [], 0&lt;br /&gt;
       files_to_merge.append(name)&lt;br /&gt;
       cur_mols += mols&lt;br /&gt;
       if cur_mols &amp;gt; lower_bound:&lt;br /&gt;
           merge_files(files_to_merge)&lt;br /&gt;
           files_to_merge, cur_mols = [], 0&lt;br /&gt;
   &lt;br /&gt;
   if files_to_merge:&lt;br /&gt;
       merge_files(files_to_merge)&lt;br /&gt;
&lt;br /&gt;
==Building Arthor Indexes==&lt;br /&gt;
Once you&#039;ve merged the .smi files together, it&#039;s time to start building the databases themselves. To do this we use the command &lt;br /&gt;
&lt;br /&gt;
   smi2atdb -j 0 -p &amp;lt;The .smi file&amp;gt; &amp;lt;The .atdb&amp;gt; &lt;br /&gt;
&lt;br /&gt;
The flag &amp;quot;-j 0&amp;quot; enables parallel generation and uses all available processors to generate the .atdb file. The &amp;quot;-p&amp;quot; flag stores the offset positions in the ATDB file. Since we&#039;re building indexes for the Web Application, you must use the &amp;quot;-p&amp;quot; flag when building indexes. Please note that the base name of the .smi file should match the base name of the .atdb file; that way, the Web Application knows to use these files together and correctly display the required images. Refer to pages 33-34 in the Arthor documentation for more information.&lt;br /&gt;
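&lt;br /&gt;
For example, with a hypothetical slice named Haa.smi, the matching index would be built as:&lt;br /&gt;
&lt;br /&gt;
    smi2atdb -j 0 -p Haa.smi Haa.atdb&lt;br /&gt;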
&lt;br /&gt;
If there are many large .smi files and you do not want to build each .atdb file manually, you can use this Python script, which takes all of the .smi files in the given directory and converts them to .atdb files. Make sure to set mypath to the directory containing the .smi files. You can change the variable &amp;quot;create_fp&amp;quot; to False if you do not want to create .atdb.fp files (refer to page 9 in the Arthor documentation).&lt;br /&gt;
&lt;br /&gt;
   import subprocess&lt;br /&gt;
   &lt;br /&gt;
   from os import listdir&lt;br /&gt;
   from os.path import isfile, join&lt;br /&gt;
   &lt;br /&gt;
   mypath = &amp;quot;&amp;lt;Path containing the .smi files&amp;gt;&amp;quot;&lt;br /&gt;
   arthor_bin = &amp;quot;/nfs/ex9/work/xyz/psql/arthor-3.3-centos7/bin&amp;quot;&lt;br /&gt;
   &lt;br /&gt;
   # Set create_fp to False if you do not want .atdb.fp files (see page 9).&lt;br /&gt;
   create_fp = True&lt;br /&gt;
   &lt;br /&gt;
   for name in sorted(listdir(mypath)):&lt;br /&gt;
       if not (isfile(join(mypath, name)) and name.endswith(&amp;quot;.smi&amp;quot;)):&lt;br /&gt;
           continue&lt;br /&gt;
       base = name.rsplit(&amp;quot;.&amp;quot;, 1)[0]&lt;br /&gt;
   &lt;br /&gt;
       # -j 0: use all processors; -p: store offsets (needed by the web app).&lt;br /&gt;
       subprocess.check_call([join(arthor_bin, &amp;quot;smi2atdb&amp;quot;), &amp;quot;-j&amp;quot;, &amp;quot;0&amp;quot;, &amp;quot;-p&amp;quot;,&lt;br /&gt;
                              join(mypath, name), base + &amp;quot;.atdb&amp;quot;])&lt;br /&gt;
       print(&amp;quot;SUCCESS! {0}.atdb file was created!&amp;quot;.format(base))&lt;br /&gt;
   &lt;br /&gt;
       if create_fp:&lt;br /&gt;
           subprocess.check_call([join(arthor_bin, &amp;quot;atdb2fp&amp;quot;),&lt;br /&gt;
                                  &amp;quot;-j&amp;quot;, &amp;quot;0&amp;quot;, base + &amp;quot;.atdb&amp;quot;])&lt;br /&gt;
           print(&amp;quot;SUCCESS! {0}.atdb.fp file was created!&amp;quot;.format(base))&lt;br /&gt;
&lt;br /&gt;
==Uploading Indexes to the Web Application==&lt;br /&gt;
One can upload indexes to the Web Application by changing the &amp;quot;DATADIR&amp;quot; variable in the arthor.cfg file to the directory holding the .atdb files. This is already set up on n-1-136 and n-5-34.&lt;br /&gt;
   &lt;br /&gt;
==Further Arthor Optimizations==&lt;br /&gt;
The following edits can be made to arthor.cfg to optimize substructure search queries. More information can be found on pages 6-8 of the Arthor Documentation file.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;NodeAffinity NUMA&#039;&#039;&#039;: optimization flag that pins processing to the specific CPU sets where the data is located in memory. There is a small start-up cost, so it is most useful for long-running services (see Non-Uniform Memory Access (NUMA)).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;AsyncHitCountAllowed=true|false&#039;&#039;&#039; After fetching a page of results from a substructure or formula search, the server will spin off a background process to count the total number of hits. This can be resource intensive for large databases, may not be desirable for servers under heavy load, and may not even be needed. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;AsyncHitCountMax=#&#039;&#039;&#039; The upper bound on the number of hits to count in background searches. If very generic queries are issued (e.g. benzene or methane), hundreds of millions of hits may be counted. Setting this value to anything other than zero (e.g. 10,000) stops the background count once it exceeds this limit. Note that some pathological queries may find very few hits but still end up examining everything. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;MaxConcurrentSearches=#&#039;&#039;&#039; Controls the maximum number of searches that can run concurrently by setting the database pool size. The searches may be on the same or different databases. If a search comes in and the pool is full, it has to wait for another search to finish, which increases the request time. &lt;br /&gt;
&lt;br /&gt;
Typically, if each search uses all the processing cores on a machine, then N concurrent searches each run at 1/Nth the speed. If the request time is substantially larger than the search time, the request had to wait for resources to become available. When switching between a large number of databases it can be useful to have a larger pool size; the only trade-off is keeping more file pointers open. Default: 6 &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Binary Fingerprint Folding&#039;&#039;&#039; Arthor uses binary circular fingerprints (ECFP4/radius=2) for similarity. When creating an ATFP index you can specify how large to make your fingerprints. Circular fingerprints are sparser than path-based fingerprints (e.g. Daylight) and so can be folded smaller without too much degradation in performance. Folding can significantly reduce the footprint of a database and improve search speeds. A 256-bit fingerprint takes up 1/4 of the space of a 1024-bit one and can therefore be traversed 4x faster. &lt;br /&gt;
&lt;br /&gt;
This matters most for very large databases with billions of compounds; in such instances a minor drop in precision is likely tolerable, since ultimately all that happens is that some hits may swap places in the hit list. &lt;br /&gt;
&lt;br /&gt;
==Virtual Memory==&lt;br /&gt;
In addition to modifying the arthor.cfg file, virtual memory settings can also be used to make queries faster. More information can be found on pages 10-16 of the Arthor Documentation.&lt;br /&gt;
&lt;br /&gt;
==Setting up Round Table==&lt;br /&gt;
This is a new feature in Arthor 3.0 and is currently in beta (January 2020). See section 2.4 in the manual.&lt;br /&gt;
As explained in the manual, &amp;quot;Round Table allows you to serve and split chemical searches across multiple host machines.  The implementation provides a lightweight proxy that forwards requests to other Arthor host servers that do the actual search.  Communication is done using the existing Web APIs.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Since Arthor requires CentOS 7, as of January 2020 we have 6 servers that are capable of running Arthor with Round Table.&lt;br /&gt;
&lt;br /&gt;
===Setting up Host Server===&lt;br /&gt;
If we want to add machines to the Round Table, for example &#039;nun&#039; and &#039;samekh&#039;, we need to edit their arthor.cfg files so that when our local machine forwards requests, these secondary servers know to perform the searches they are given.&lt;br /&gt;
&lt;br /&gt;
   $ cat arthor.cfg&lt;br /&gt;
   MaxThreadsPerSearch=4 &lt;br /&gt;
   AutomaticIndex=false &lt;br /&gt;
   DATADIR=&amp;lt;Directory where smiles are located&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We then run the jar server on each of these data-holding host machines, on any available port. &lt;br /&gt;
&lt;br /&gt;
   java -jar /nfs/ex9/work/xyz/psql/arthor-3.3-centos7/java/arthor.jar --httpPort &amp;lt;port&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For our local machine, the arthor.cfg file will look different.&lt;br /&gt;
&lt;br /&gt;
   $ cat arthor.cfg&lt;br /&gt;
   [RoundTable] &lt;br /&gt;
   RemoteClient=http://skynet:&amp;lt;port number where jar server is running&amp;gt;/ &lt;br /&gt;
   RemoteClient=http://hal:&amp;lt;port number where jar server is running&amp;gt;/&lt;br /&gt;
&lt;br /&gt;
Please refer to Section 2 in the RoundTable Documentation file (pages 6-8) for more useful information on configuration.&lt;br /&gt;
&lt;br /&gt;
Then run the following command on n-1-136:&lt;br /&gt;
 &lt;br /&gt;
   java -jar /nfs/ex9/work/xyz/psql/arthor-3.3-centos7/java/arthor.jar --httpPort &amp;lt;port&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Public Arthor===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! CentOS 7 Machine&lt;br /&gt;
! Port&lt;br /&gt;
! Arthor Install Location&lt;br /&gt;
! Round Table Data Directory&lt;br /&gt;
! Active&lt;br /&gt;
|-&lt;br /&gt;
| samekh&lt;br /&gt;
| 10.20.0.41:8000&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/public_arthor/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| nun&lt;br /&gt;
| 10.20.0.40:8000&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/public_arthor/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| n-9-22&lt;br /&gt;
| 10.20.9.22:8000&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /export/db4/public_arthor/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Arthor Round Table Head===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! CentOS 7 Machine&lt;br /&gt;
! Port&lt;br /&gt;
! Database&lt;br /&gt;
! Arthor Install Location&lt;br /&gt;
! Round Table Data Directory&lt;br /&gt;
! Active&lt;br /&gt;
|-&lt;br /&gt;
| samekh&lt;br /&gt;
| 10.20.0.41:8080&lt;br /&gt;
| Enamine_REAL_Q2-2020-All-13B&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| nun&lt;br /&gt;
| 10.20.0.40:8080&lt;br /&gt;
| Enamine_REAL_Q2-2020-All-41B&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Arthor Round Table Nodes===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! CentOS 7 Machine&lt;br /&gt;
! Port&lt;br /&gt;
! Database&lt;br /&gt;
! Total Files Size&lt;br /&gt;
! Arthor Install Location&lt;br /&gt;
! Round Table Data Directory&lt;br /&gt;
! Active&lt;br /&gt;
|-&lt;br /&gt;
| samekh&lt;br /&gt;
| 10.20.0.41:8008&lt;br /&gt;
| Enamine_REAL_Q2-2020-M-13B (aa-ax, 24 slices), Enamine_REAL_Q2-2020-S-13B (aa-ab, 2 slices)&lt;br /&gt;
| 3.5TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| nun&lt;br /&gt;
| 10.20.0.40:8008&lt;br /&gt;
| Enamine_REAL_Space_June_2020_S41B (aa-ae, 5 slices)&lt;br /&gt;
| 738GB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| n-1-17&lt;br /&gt;
| 10.20.1.17:8008&lt;br /&gt;
| Enamine_REAL_Space_June_2020_M41B (aa-am, 14 slices)&lt;br /&gt;
| 4.3TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| n-5-32&lt;br /&gt;
| 10.20.5.32:8008&lt;br /&gt;
| Enamine_REAL_Space_June_2020_M41B (an-az, 13 slices)&lt;br /&gt;
| 5.0TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| n-5-33&lt;br /&gt;
| 10.20.5.33:8008&lt;br /&gt;
| Enamine_REAL_Space_June_2020_M41B (ba-bl, 12 slices)&lt;br /&gt;
| 5.3TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| n-1-20&lt;br /&gt;
| 10.20.1.20:8008&lt;br /&gt;
| 12, 15, 16, 27, 27, 30, 36&lt;br /&gt;
| 897GB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| not active&lt;br /&gt;
|-&lt;br /&gt;
| n-5-35&lt;br /&gt;
| 10.20.5.35:8008&lt;br /&gt;
| 2, 3, 8, 10, 34, 44_results, 44_results2, all-zinc-xad, in-stock-40, on-demand&lt;br /&gt;
| 1.5TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| not active&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Arthor (local 8081)===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! CentOS 7 Machine&lt;br /&gt;
! Port&lt;br /&gt;
! Database&lt;br /&gt;
! Total Files Size&lt;br /&gt;
! Arthor Install Location&lt;br /&gt;
! Round Table Data Directory&lt;br /&gt;
! Active&lt;br /&gt;
|-&lt;br /&gt;
| samekh&lt;br /&gt;
| 10.20.0.41:8081&lt;br /&gt;
| Enamine_REAL_Q2-2020-All-13B (26 slices)&lt;br /&gt;
| 2.5TB (soft linked) + 2.0TB = 4.5TB total&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_local_8081/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| nun&lt;br /&gt;
| 10.20.0.40:8081&lt;br /&gt;
| Enamine_REAL_Space_June_2020_S41B (aa-ae, 5 slices), Enamine_REAL_Space_June_2020_M41B (aa-ar)&lt;br /&gt;
| 738GB (soft linked) + 1.9TB = 2.6TB total&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_local_8081/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Jgutie11</name></author>
	</entry>
	<entry>
		<id>http://wiki.docking.org/index.php?title=Arthor_Documentation_for_Future_Developer&amp;diff=13171</id>
		<title>Arthor Documentation for Future Developer</title>
		<link rel="alternate" type="text/html" href="http://wiki.docking.org/index.php?title=Arthor_Documentation_for_Future_Developer&amp;diff=13171"/>
		<updated>2021-01-06T01:26:42Z</updated>

		<summary type="html">&lt;p&gt;Jgutie11: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Written by Jennifer Young on December 16, 2019. Last edited January 05, 2021&lt;br /&gt;
&lt;br /&gt;
==Install and Set Up on TomCat (Method 1)==&lt;br /&gt;
Arthor ran on n-1-136, which runs CentOS Linux release 7.7.1908 (Core).  You can check the version of CentOS with the following command:&lt;br /&gt;
     cat /etc/centos-release&lt;br /&gt;
&lt;br /&gt;
Check your current version of Java with the following command:&lt;br /&gt;
    java -version&lt;br /&gt;
&lt;br /&gt;
On n-1-136 we are running openjdk version &amp;quot;1.8.0_222&amp;quot;, OpenJDK Runtime Environment (build 1.8.0_222-b10), and OpenJDK 64-Bit Server VM (build 25.222-b10, mixed mode)&lt;br /&gt;
If Java is not installed, install it using yum.&lt;br /&gt;
&lt;br /&gt;
==See this wiki page for more detailed information about installing Tomcat on our cluster==&lt;br /&gt;
http://wiki.docking.org/index.php/Tomcat_Installation&lt;br /&gt;
&lt;br /&gt;
==Open port for Arthor==&lt;br /&gt;
In order for Arthor to be usable in the browser, the port you wish to run it on must be opened.&lt;br /&gt;
https://www.thegeekdiary.com/how-to-open-a-ports-in-centos-rhel-7/&lt;br /&gt;
&lt;br /&gt;
===Step 1: Check Port Status===&lt;br /&gt;
Check that the port is not already open and that Apache is not listening on that port. &lt;br /&gt;
    netstat -na | grep &amp;lt;port number you are checking&amp;gt;&lt;br /&gt;
&lt;br /&gt;
    lsof -i -P |grep http&lt;br /&gt;
&lt;br /&gt;
===Step 2: Check Port Status in IP Tables===&lt;br /&gt;
    iptables-save | grep &amp;lt;port number you are checking&amp;gt;&lt;br /&gt;
&lt;br /&gt;
I skipped Step 3 from the guide, because there was a lot of information in the /etc/services file and I didn&#039;t want to edit it and break something.&lt;br /&gt;
&lt;br /&gt;
===Step 4: Open Firewall Ports===&lt;br /&gt;
I did not include the zone=public section because the stand-alone servers are usually used for private instances of Arthor and SmallWorld.&lt;br /&gt;
Run as root.&lt;br /&gt;
    firewall-cmd --add-port=&amp;lt;port number you are adding&amp;gt;/tcp --permanent&lt;br /&gt;
&lt;br /&gt;
You need to reload the firewall after a change is made.&lt;br /&gt;
    firewall-cmd --reload&lt;br /&gt;
&lt;br /&gt;
===Step 5: Check that port is working===&lt;br /&gt;
To check that the port is active, run:&lt;br /&gt;
    iptables -nL&lt;br /&gt;
&lt;br /&gt;
You should see something along the lines of: &lt;br /&gt;
    ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:&amp;lt;port number you&#039;re adding&amp;gt; ctstate NEW,UNTRACKED&lt;br /&gt;
&lt;br /&gt;
==How to run standalone Arthor instance==&lt;br /&gt;
&lt;br /&gt;
===Step 1: Use or start a bash shell===&lt;br /&gt;
You can check your default shell using&lt;br /&gt;
    echo $SHELL&lt;br /&gt;
&lt;br /&gt;
If your default shell is csh, use &lt;br /&gt;
    bash&lt;br /&gt;
to start a new bash shell in the current terminal window.  Note that echo $SHELL will show you your default shell regardless of the current shell.&lt;br /&gt;
&lt;br /&gt;
===Step 2: Set your environment variables===&lt;br /&gt;
    export ARTHOR_DIR=/opt/nextmove/arthor/arthor-3.1-centos7&lt;br /&gt;
    export PATH=$ARTHOR_DIR/bin/:$PATH&lt;br /&gt;
&lt;br /&gt;
Make sure the ARTHOR_DIR variable is set to the directory for the latest version of Arthor or whichever version you would like to test.&lt;br /&gt;
The PATH environment variable is needed if you wish to use the Arthor tools from the command line.&lt;br /&gt;
&lt;br /&gt;
===Step 3: Run the arthor-server.jar===&lt;br /&gt;
    java -jar /opt/nextmove/arthor/arthor-3.0-rt-beta-linux/java/arthor-server.jar --httpPort &amp;lt;your httpPort&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Setting environment variables for TomCat Server==&lt;br /&gt;
Set the environment variables in the setenv.sh file.  Note: Be sure to edit the file in the directory corresponding to the latest version of TomCat.  As of December 2019, we are running 9.0.27 on n-1-136.&lt;br /&gt;
&lt;br /&gt;
   vim  /opt/tomcat/apache-tomcat-9.0.27/bin/setenv.sh&lt;br /&gt;
&lt;br /&gt;
Add the line below to the setenv.sh file above, or substitute the path to wherever you currently store the arthor.cfg file.&lt;br /&gt;
   export ARTHOR_CONFIG=/usr/local/tomcat/arthor.cfg&lt;br /&gt;
&lt;br /&gt;
Here is an example of the arthor.cfg file:&lt;br /&gt;
   # Arthor generated config file&lt;br /&gt;
   BINDIR=/opt/nextmove/arthor/arthor-2.1.2-centos7/bin&lt;br /&gt;
   DATADIR=/usr/local/tomcat/arthor_data&lt;br /&gt;
   STAGEDIR=/usr/local/arthor_data/stage&lt;br /&gt;
   NTHREADS=64&lt;br /&gt;
   NODEAFFINITY=true&lt;br /&gt;
   SearchAsYouDraw=true&lt;br /&gt;
   AutomaticIndex=true&lt;br /&gt;
   DEPICTION=./depict/bot/svg?w=%w&amp;amp;h=%h&amp;amp;svgunits=px&amp;amp;smi=%s&amp;amp;zoom=0.8&amp;amp;sma=%m&amp;amp;smalim=1&lt;br /&gt;
   RESOLVER=&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Important parts of the arthor.cfg file&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;BINDIR&#039;&#039;&#039; is the location of the Arthor command line binaries.  These are used to generate the Arthor index files and to perform searches directly on n-1-136.  An example of this would be using atdbgrep for substructure search. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;DATADIR&#039;&#039;&#039; The directory where the Arthor data files live; the index files are created in and loaded from this location.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;STAGEDIR&#039;&#039;&#039; Location where the index files will be built before being moved into the DATADIR.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;NTHREADS&#039;&#039;&#039; The number of threads to use for both ATDB and ATFP searches.&lt;br /&gt;
&lt;br /&gt;
Set &#039;&#039;&#039;AutomaticIndex&#039;&#039;&#039; to false if you do not want new SMILES files added to the data directory to be indexed automatically.&lt;br /&gt;
&lt;br /&gt;
==Background==&lt;br /&gt;
Before working with Arthor, it is recommended that you familiarize yourself with the Arthor documentation. Some useful pages to look at include 3-5, 22-25, and 33-39. Of course, reading everything would be best!&lt;br /&gt;
&lt;br /&gt;
==Checking Disk Space Usage==&lt;br /&gt;
Before building Arthor indexes, it is always a good idea to check how much disk space is in use. Be cautious about how much space you have left, and keep checking while indexes build to make sure you still have enough. To check, run the following command:&lt;br /&gt;
&lt;br /&gt;
   df -h /&amp;lt;directory with disk&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Building Large Databases==&lt;br /&gt;
At the moment, we are building databases of about 500M molecules by merging SMILES files. There are multiple ways to create large databases; one is to merge files that share the same H?? prefix, stopping once the database exceeds 500M molecules (or whatever upper bound you want to use). Here is some Python code that performs this merging. The program takes all of the .smi files within an input directory, sorts them lexicographically, and merges them together in order until the size exceeds 500M molecules.&lt;br /&gt;
   &lt;br /&gt;
Feel free to modify it if you think a better method exists.&lt;br /&gt;
   &lt;br /&gt;
   import shutil&lt;br /&gt;
   &lt;br /&gt;
   from os import listdir&lt;br /&gt;
   from os.path import isfile, join&lt;br /&gt;
   &lt;br /&gt;
   mypath = &amp;quot;&amp;lt;Path to directory holding .smi files&amp;gt;&amp;quot;&lt;br /&gt;
   &lt;br /&gt;
   # Merge until a batch holds at least ~500M molecules, but never let a&lt;br /&gt;
   # single merged file grow past ~600M molecules.&lt;br /&gt;
   lower_bound = 500000000&lt;br /&gt;
   upper_bound = 600000000&lt;br /&gt;
   &lt;br /&gt;
   smi_files = sorted(f for f in listdir(mypath)&lt;br /&gt;
                      if isfile(join(mypath, f)) and f.endswith(&amp;quot;.smi&amp;quot;))&lt;br /&gt;
   &lt;br /&gt;
   def count_mols(path):&lt;br /&gt;
       # Each line of a .smi file is one molecule.&lt;br /&gt;
       with open(path) as fh:&lt;br /&gt;
           return sum(1 for _ in fh)&lt;br /&gt;
   &lt;br /&gt;
   def merge_files(batch):&lt;br /&gt;
       # Name the merged file after its first and last members.&lt;br /&gt;
       first = batch[0].split(&amp;quot;.&amp;quot;)[0]&lt;br /&gt;
       last = batch[-1].split(&amp;quot;.&amp;quot;)[0]&lt;br /&gt;
       file_name_merge = first + &amp;quot;_&amp;quot; + last + &amp;quot;.smi&amp;quot;&lt;br /&gt;
       print(&amp;quot;File being created: &amp;quot; + file_name_merge)&lt;br /&gt;
       with open(file_name_merge, &amp;quot;wb&amp;quot;) as out:&lt;br /&gt;
           for name in batch:&lt;br /&gt;
               with open(join(mypath, name), &amp;quot;rb&amp;quot;) as src:&lt;br /&gt;
                   shutil.copyfileobj(src, out)&lt;br /&gt;
   &lt;br /&gt;
   cur_mols = 0&lt;br /&gt;
   files_to_merge = []&lt;br /&gt;
   for name in smi_files:&lt;br /&gt;
       mols = count_mols(join(mypath, name))&lt;br /&gt;
       print(name, mols, cur_mols)&lt;br /&gt;
       if files_to_merge and cur_mols + mols &amp;gt; upper_bound:&lt;br /&gt;
           # Adding this file would overshoot the cap: flush what we have.&lt;br /&gt;
           merge_files(files_to_merge)&lt;br /&gt;
           files_to_merge, cur_mols = [], 0&lt;br /&gt;
       files_to_merge.append(name)&lt;br /&gt;
       cur_mols += mols&lt;br /&gt;
       if cur_mols &amp;gt; lower_bound:&lt;br /&gt;
           merge_files(files_to_merge)&lt;br /&gt;
           files_to_merge, cur_mols = [], 0&lt;br /&gt;
   &lt;br /&gt;
   if files_to_merge:&lt;br /&gt;
       merge_files(files_to_merge)&lt;br /&gt;
&lt;br /&gt;
==Building Arthor Indexes==&lt;br /&gt;
Once you&#039;ve merged the .smi files together, it&#039;s time to start building the databases themselves. To do this we use the command &lt;br /&gt;
&lt;br /&gt;
   smi2atdb -j 0 -p &amp;lt;The .smi file&amp;gt; &amp;lt;The .atdb&amp;gt; &lt;br /&gt;
&lt;br /&gt;
The flag &amp;quot;-j 0&amp;quot; enables parallel generation and uses all available processors to generate the .atdb file. The &amp;quot;-p&amp;quot; flag stores the offset positions in the ATDB file. Since we&#039;re building indexes for the Web Application, you must use the &amp;quot;-p&amp;quot; flag when building indexes. Please note that the base name of the .smi file should match the base name of the .atdb file; that way, the Web Application knows to use these files together and correctly display the required images. Refer to pages 33-34 in the Arthor documentation for more information.&lt;br /&gt;
&lt;br /&gt;
If there are many large .smi files and you do not want to build each .atdb file manually, you can use this Python script, which takes all of the .smi files in the given directory and converts them to .atdb files. Make sure to set mypath to the directory containing the .smi files. You can change the variable &amp;quot;create_fp&amp;quot; to False if you do not want to create .atdb.fp files (refer to page 9 in the Arthor documentation).&lt;br /&gt;
&lt;br /&gt;
   import subprocess&lt;br /&gt;
   &lt;br /&gt;
   from os import listdir&lt;br /&gt;
   from os.path import isfile, join&lt;br /&gt;
   &lt;br /&gt;
   mypath = &amp;quot;&amp;lt;Path containing the .smi files&amp;gt;&amp;quot;&lt;br /&gt;
   arthor_bin = &amp;quot;/nfs/ex9/work/xyz/psql/arthor-3.3-centos7/bin&amp;quot;&lt;br /&gt;
   &lt;br /&gt;
   # Set create_fp to False if you do not want .atdb.fp files (see page 9).&lt;br /&gt;
   create_fp = True&lt;br /&gt;
   &lt;br /&gt;
   for name in sorted(listdir(mypath)):&lt;br /&gt;
       if not (isfile(join(mypath, name)) and name.endswith(&amp;quot;.smi&amp;quot;)):&lt;br /&gt;
           continue&lt;br /&gt;
       base = name.rsplit(&amp;quot;.&amp;quot;, 1)[0]&lt;br /&gt;
   &lt;br /&gt;
       # -j 0: use all processors; -p: store offsets (needed by the web app).&lt;br /&gt;
       subprocess.check_call([join(arthor_bin, &amp;quot;smi2atdb&amp;quot;), &amp;quot;-j&amp;quot;, &amp;quot;0&amp;quot;, &amp;quot;-p&amp;quot;,&lt;br /&gt;
                              join(mypath, name), base + &amp;quot;.atdb&amp;quot;])&lt;br /&gt;
       print(&amp;quot;SUCCESS! {0}.atdb file was created!&amp;quot;.format(base))&lt;br /&gt;
   &lt;br /&gt;
       if create_fp:&lt;br /&gt;
           subprocess.check_call([join(arthor_bin, &amp;quot;atdb2fp&amp;quot;),&lt;br /&gt;
                                  &amp;quot;-j&amp;quot;, &amp;quot;0&amp;quot;, base + &amp;quot;.atdb&amp;quot;])&lt;br /&gt;
           print(&amp;quot;SUCCESS! {0}.atdb.fp file was created!&amp;quot;.format(base))&lt;br /&gt;
&lt;br /&gt;
==Uploading Indexes to the Web Application==&lt;br /&gt;
One can upload indexes to the Web Application by changing the &amp;quot;DATADIR&amp;quot; variable in the arthor.cfg file to the directory holding the .atdb files. This is already set up on n-1-136 and n-5-34.&lt;br /&gt;
   &lt;br /&gt;
==Further Arthor Optimizations==&lt;br /&gt;
The following edits can be made to arthor.cfg to optimize substructure search queries. More information can be found on pages 6-8 of the Arthor Documentation file.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;NodeAffinity NUMA&#039;&#039;&#039;: optimization flag that pins processing to the specific CPU sets where the data is located in memory. There is a small start-up cost, so it is most useful for long-running services (see Non-Uniform Memory Access (NUMA)).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;AsyncHitCountAllowed=true|false&#039;&#039;&#039; After fetching a page of results from a substructure or formula search, the server will spin off a background process to count the total number of hits. This can be resource intensive for large databases, may not be desirable for servers under heavy load, and may not even be needed. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;AsyncHitCountMax=#&#039;&#039;&#039; The upper bound on the number of hits to count in background searches. If very generic queries are issued (e.g. benzene or methane), hundreds of millions of hits may be counted. Setting this value to anything other than zero (e.g. 10,000) stops the background count once it exceeds this limit. Note that some pathological queries may find very few hits but still end up examining everything. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;MaxConcurrentSearches=#&#039;&#039;&#039; Controls the maximum number of searches that can run concurrently by setting the database pool size. The searches may be on the same or different databases. If a search comes in and the pool is full, it has to wait for another search to finish, which increases the request time. &lt;br /&gt;
&lt;br /&gt;
Typically, if each search uses all the processing cores on a machine, then N concurrent searches each run at 1/Nth the speed. If the request time is substantially larger than the search time, the request had to wait for resources to become available. When switching between a large number of databases it can be useful to have a larger pool size; the only trade-off is keeping more file pointers open. Default: 6 &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Binary Fingerprint Folding&#039;&#039;&#039; Arthor uses binary circular fingerprints (ECFP4/radius=2) for similarity. When creating an ATFP index you can specify how large to make your fingerprints. Circular fingerprints are sparser than path-based fingerprints (e.g. Daylight) and so can be folded smaller without too much degradation in performance. Folding can significantly reduce the footprint of a database and improve search speeds. A 256-bit fingerprint takes up 1/4 of the space of a 1024-bit one and can therefore be traversed 4x faster. &lt;br /&gt;
&lt;br /&gt;
This matters most for very large databases with billions of compounds; in such instances a minor drop in precision is likely tolerable, since ultimately all that happens is that some hits may swap places in the hit list. &lt;br /&gt;
&lt;br /&gt;
==Virtual Memory==&lt;br /&gt;
In addition to modifying the arthor.cfg file, virtual memory settings can also be used to make queries faster. More information can be found on pages 10-16 of the Arthor Documentation.&lt;br /&gt;
&lt;br /&gt;
==Setting up Round Table==&lt;br /&gt;
This is a new feature in Arthor 3.0 and is currently in beta (January 2020). See section 2.4 in the manual.&lt;br /&gt;
As explained in the manual, &amp;quot;Round Table allows you to serve and split chemical searches across multiple host machines.  The implementation provides a lightweight proxy that forwards requests to other Arthor host servers that do the actual search.  Communication is done using the existing Web APIs.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Since Arthor requires CentOS 7, as of January 2020 we have 6 servers that are capable of running Arthor with Round Table.&lt;br /&gt;
&lt;br /&gt;
===Setting up Host Server===&lt;br /&gt;
If we want to add machines to the Round Table, for example &#039;nun&#039; and &#039;samekh&#039;, we need to edit their arthor.cfg files so that when our local machine forwards requests, these secondary servers know to perform the searches they are given.&lt;br /&gt;
&lt;br /&gt;
   $ cat arthor.cfg&lt;br /&gt;
   MaxThreadsPerSearch=4 &lt;br /&gt;
   AutomaticIndex=false &lt;br /&gt;
   DATADIR=&amp;lt;Directory where smiles are located&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We then run the jar server on each of these data-holding host machines, on any available port. &lt;br /&gt;
&lt;br /&gt;
   java -jar /nfs/ex9/work/xyz/psql/arthor-3.3-centos7/java/arthor.jar --httpPort &amp;lt;port&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For our local machine, the arthor.cfg file will look different.&lt;br /&gt;
&lt;br /&gt;
   $ cat arthor.cfg&lt;br /&gt;
   [RoundTable] &lt;br /&gt;
   RemoteClient=http://skynet:&amp;lt;port number where jar server is running&amp;gt;/ &lt;br /&gt;
   RemoteClient=http://hal:&amp;lt;port number where jar server is running&amp;gt;/&lt;br /&gt;
&lt;br /&gt;
Please refer to Section 2 in the RoundTable Documentation file (pages 6-8) for more useful information on configuration.&lt;br /&gt;
&lt;br /&gt;
Then run the following command on n-1-136:&lt;br /&gt;
 &lt;br /&gt;
   java -jar /nfs/ex9/work/xyz/psql/arthor-3.3-centos7/java/arthor.jar --httpPort &amp;lt;port&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Public Arthor===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! CentOS 7 Machine&lt;br /&gt;
! Port&lt;br /&gt;
! Arthor Install Location&lt;br /&gt;
! Round Table Data Directory&lt;br /&gt;
! Active&lt;br /&gt;
|-&lt;br /&gt;
| samekh&lt;br /&gt;
| 10.20.0.41:8000&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/public_arthor/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| nun&lt;br /&gt;
| 10.20.0.40:8000&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/public_arthor/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| n-9-22&lt;br /&gt;
| 10.20.9.22:8000&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /export/db4/public_arthor/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Arthor Round Table Head===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! CentOS 7 Machine&lt;br /&gt;
! Port&lt;br /&gt;
! Database&lt;br /&gt;
! Arthor Install Location&lt;br /&gt;
! Round Table Data Directory&lt;br /&gt;
! Active&lt;br /&gt;
|-&lt;br /&gt;
| samekh&lt;br /&gt;
| 10.20.0.41:8080&lt;br /&gt;
| Enamine_REAL_Q2-2020-All-13B&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| nun&lt;br /&gt;
| 10.20.0.40:8080&lt;br /&gt;
| Enamine_REAL_Q2-2020-All-41B&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Arthor Round Table Nodes===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! CentOS 7 Machine&lt;br /&gt;
! Port&lt;br /&gt;
! Database&lt;br /&gt;
! Total Files Size&lt;br /&gt;
! Arthor Install Location&lt;br /&gt;
! Round Table Data Directory&lt;br /&gt;
! Active&lt;br /&gt;
|-&lt;br /&gt;
| samekh&lt;br /&gt;
| 10.20.0.41:8008&lt;br /&gt;
| Enamine_REAL_Q2-2020-M-13B (am-ax, 12 slices), Enamine_REAL_Q2-2020-S-13B (aa-ab, 2 slices)&lt;br /&gt;
| 2.5TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| nun&lt;br /&gt;
| 10.20.0.40:8008&lt;br /&gt;
| Enamine_REAL_Space_June_2020_S41B (aa-ae, 5 slices)&lt;br /&gt;
| 738GB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| n-1-16&lt;br /&gt;
| 10.20.1.16:8008&lt;br /&gt;
| Enamine_REAL_Q2-2020-M-13B (aa-al, 12 slices)&lt;br /&gt;
| 2.0TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| n-1-17&lt;br /&gt;
| 10.20.1.17:8008&lt;br /&gt;
| Enamine_REAL_Space_June_2020_M41B (aa-am, 14 slices)&lt;br /&gt;
| 4.3TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| n-5-32&lt;br /&gt;
| 10.20.5.32:8008&lt;br /&gt;
| Enamine_REAL_Space_June_2020_M41B (an-az, 13 slices)&lt;br /&gt;
| 5.0TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| n-5-33&lt;br /&gt;
| 10.20.5.33:8008&lt;br /&gt;
| Enamine_REAL_Space_June_2020_M41B (ba-bl, 12 slices)&lt;br /&gt;
| 5.3TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| qof&lt;br /&gt;
| 10.20.9.29:8008&lt;br /&gt;
| 18, 25, 37, 45, 5&lt;br /&gt;
| 173GB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /export/ex9/work/btingle/auto_atdb/&lt;br /&gt;
| not active&lt;br /&gt;
|-&lt;br /&gt;
| lamed&lt;br /&gt;
| 10.20.9.15:8008&lt;br /&gt;
| 17, 29, 40, 6&lt;br /&gt;
| 512GB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /export/ex6/work/btingle/auto_atdb&lt;br /&gt;
| not active&lt;br /&gt;
|- &lt;br /&gt;
| n-1-20&lt;br /&gt;
| 10.20.1.20:8008&lt;br /&gt;
| 12, 15, 16, 27, 27, 30, 36&lt;br /&gt;
| 897GB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| not active&lt;br /&gt;
|-&lt;br /&gt;
| n-5-34&lt;br /&gt;
| 10.20.5.34:8008&lt;br /&gt;
| 9, 19, 21, 28, 31, 42, all-zinc-xab, all-zinc-xafn, in-stock&lt;br /&gt;
| 875GB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| not active&lt;br /&gt;
|-&lt;br /&gt;
| n-5-35&lt;br /&gt;
| 10.20.5.35:8008&lt;br /&gt;
| 2, 3, 8, 10, 34, 44_results, 44_results2, all-zinc-xad, in-stock-40, on-demand&lt;br /&gt;
| 1.5TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| not active&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Arthor (local 8081)===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! CentOS 7 Machine&lt;br /&gt;
! Port&lt;br /&gt;
! Database&lt;br /&gt;
! Total Files Size&lt;br /&gt;
! Arthor Install Location&lt;br /&gt;
! Round Table Data Directory&lt;br /&gt;
! Active&lt;br /&gt;
|-&lt;br /&gt;
| samekh&lt;br /&gt;
| 10.20.0.41:8081&lt;br /&gt;
| Enamine_REAL_Q2-2020-All-13B (26 slices)&lt;br /&gt;
| 2.5TB (soft linked) + 2.0TB = 4.5TB total&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_local_8081/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| nun&lt;br /&gt;
| 10.20.0.40:8081&lt;br /&gt;
| Enamine_REAL_Space_June_2020_S41B (aa-ae, 5 slices), Enamine_REAL_Space_June_2020_M41B (aa-ar)&lt;br /&gt;
| 738GB (soft linked) + 1.9TB = 2.6TB total&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_local_8081/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Jgutie11</name></author>
	</entry>
	<entry>
		<id>http://wiki.docking.org/index.php?title=Arthor_Documentation_for_Future_Developer&amp;diff=13170</id>
		<title>Arthor Documentation for Future Developer</title>
		<link rel="alternate" type="text/html" href="http://wiki.docking.org/index.php?title=Arthor_Documentation_for_Future_Developer&amp;diff=13170"/>
		<updated>2021-01-06T01:18:28Z</updated>

		<summary type="html">&lt;p&gt;Jgutie11: /* Arthor Round Table Head */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Written by Jennifer Young on December 16, 2019. Last edited January 30, 2020&lt;br /&gt;
&lt;br /&gt;
==Install and Set Up on TomCat==&lt;br /&gt;
Arthor currently runs on n-1-136, which runs CentOS Linux release 7.7.1908 (Core).  You can check the version of CentOS with the following command:&lt;br /&gt;
     cat /etc/centos-release&lt;br /&gt;
&lt;br /&gt;
Check your current version of Java with the following command:&lt;br /&gt;
    java -version&lt;br /&gt;
&lt;br /&gt;
On n-1-136 we are running openjdk version &amp;quot;1.8.0_222&amp;quot;, OpenJDK Runtime Environment (build 1.8.0_222-b10), and OpenJDK 64-Bit Server VM (build 25.222-b10, mixed mode)&lt;br /&gt;
If Java is not installed, install it using yum.&lt;br /&gt;
&lt;br /&gt;
==See this wiki page for more detailed information about installing Tomcat on our cluster==&lt;br /&gt;
http://wiki.docking.org/index.php/Tomcat_Installation&lt;br /&gt;
&lt;br /&gt;
==Open port for Arthor==&lt;br /&gt;
In order for Arthor to be usable in the browser, the port you wish to run it on must be opened.&lt;br /&gt;
https://www.thegeekdiary.com/how-to-open-a-ports-in-centos-rhel-7/&lt;br /&gt;
&lt;br /&gt;
===Step 1: Check Port Status===&lt;br /&gt;
Check that the port is not already open and that Apache is not listening on that port. &lt;br /&gt;
    netstat -na | grep &amp;lt;port number you are checking&amp;gt;&lt;br /&gt;
&lt;br /&gt;
    lsof -i -P |grep http&lt;br /&gt;
&lt;br /&gt;
===Step 2: Check Port Status in IP Tables===&lt;br /&gt;
    iptables-save | grep &amp;lt;port number you are checking&amp;gt;&lt;br /&gt;
&lt;br /&gt;
I skipped Step 3 from the guide, because there was a lot of information in the /etc/services file and I didn&#039;t want to edit it and break something.&lt;br /&gt;
&lt;br /&gt;
===Step 4: Open Firewall Ports===&lt;br /&gt;
I did not include the zone=public section because the stand-alone servers are usually used for private instances of Arthor and SmallWorld.&lt;br /&gt;
Run as root.&lt;br /&gt;
    firewall-cmd --add-port=&amp;lt;port number you are adding&amp;gt;/tcp --permanent&lt;br /&gt;
&lt;br /&gt;
You need to reload the firewall after a change is made.&lt;br /&gt;
    firewall-cmd --reload&lt;br /&gt;
&lt;br /&gt;
===Step 5: Check that port is working===&lt;br /&gt;
To check that the port is active, run:&lt;br /&gt;
    iptables -nL&lt;br /&gt;
&lt;br /&gt;
You should see something along the lines of: &lt;br /&gt;
    ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:&amp;lt;port number you&#039;re adding&amp;gt; ctstate NEW,UNTRACKED&lt;br /&gt;
&lt;br /&gt;
==How to run standalone Arthor instance==&lt;br /&gt;
&lt;br /&gt;
===Step 1: Use or start a bash shell===&lt;br /&gt;
You can check your default shell using&lt;br /&gt;
    echo $SHELL&lt;br /&gt;
&lt;br /&gt;
If your default shell is csh, use &lt;br /&gt;
    bash&lt;br /&gt;
to start a new bash shell in the current terminal window.  Note that echo $SHELL will show you your default shell regardless of the current shell.&lt;br /&gt;
&lt;br /&gt;
===Step 2: Set your environment variables===&lt;br /&gt;
    export ARTHOR_DIR=/opt/nextmove/arthor/arthor-3.1-centos7&lt;br /&gt;
    export PATH=$ARTHOR_DIR/bin/:$PATH&lt;br /&gt;
&lt;br /&gt;
Make sure the ARTHOR_DIR variable is set to the directory for the latest version of Arthor or whichever version you would like to test.&lt;br /&gt;
The PATH environment variable is needed if you wish to use the Arthor tools from the command line.&lt;br /&gt;
&lt;br /&gt;
===Step 3: Run the arthor-server.jar===&lt;br /&gt;
    java -jar /opt/nextmove/arthor/arthor-3.0-rt-beta-linux/java/arthor-server.jar --httpPort &amp;lt;your httpPort&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Setting environment variables for the Tomcat Server==&lt;br /&gt;
Set the environment variables in the setenv.sh file.  Note: Be sure to edit the file in the directory corresponding to the latest version of Tomcat.  As of December 2019, we are running 9.0.27 on n-1-136.&lt;br /&gt;
&lt;br /&gt;
   vim  /opt/tomcat/apache-tomcat-9.0.27/bin/setenv.sh&lt;br /&gt;
&lt;br /&gt;
Add the line below to the setenv.sh file above, or substitute the path to wherever you currently store the arthor.cfg file&lt;br /&gt;
   export ARTHOR_CONFIG=/usr/local/tomcat/arthor.cfg&lt;br /&gt;
&lt;br /&gt;
Here is an example of the arthor.cfg file:&lt;br /&gt;
   # Arthor generated config file&lt;br /&gt;
   BINDIR=/opt/nextmove/arthor/arthor-2.1.2-centos7/bin&lt;br /&gt;
   DATADIR=/usr/local/tomcat/arthor_data&lt;br /&gt;
   STAGEDIR=/usr/local/arthor_data/stage&lt;br /&gt;
   NTHREADS=64&lt;br /&gt;
   NODEAFFINITY=true&lt;br /&gt;
   SearchAsYouDraw=true&lt;br /&gt;
   AutomaticIndex=true&lt;br /&gt;
   DEPICTION=./depict/bot/svg?w=%w&amp;amp;h=%h&amp;amp;svgunits=px&amp;amp;smi=%s&amp;amp;zoom=0.8&amp;amp;sma=%m&amp;amp;smalim=1&lt;br /&gt;
   RESOLVER=&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Important parts of the arthor.cfg file&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;BINDIR&#039;&#039;&#039; is the location of the Arthor command line binaries.  These are used to generate the Arthor index files and to perform searches directly on n-1-136.  An example of this would be using atdbgrep for substructure search. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;DATADIR&#039;&#039;&#039; The directory where the Arthor data files live; index files are created in and loaded from this location.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;STAGEDIR&#039;&#039;&#039; Location where the index files will be built before being moved into the DATADIR.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;NTHREADS&#039;&#039;&#039; The number of threads to use for both ATDB and ATFP searches.&lt;br /&gt;
&lt;br /&gt;
Set &#039;&#039;&#039;AutomaticIndex&#039;&#039;&#039; to false if you do not want new SMILES files added to the data directory to be indexed automatically.&lt;br /&gt;
&lt;br /&gt;
==Background==&lt;br /&gt;
Before working with Arthor, familiarize yourself with the Arthor documentation. Pages 3-5, 22-25, and 33-39 are particularly useful, though reading the whole document is best.&lt;br /&gt;
&lt;br /&gt;
==Checking Disk Usage==&lt;br /&gt;
Before building Arthor indexes, check how much disk space is already in use on the target filesystem. Index builds can consume a lot of space, so keep an eye on usage while they run to make sure you do not run out. To check, run the following command:&lt;br /&gt;
&lt;br /&gt;
   df -h /&amp;lt;directory to check&amp;gt;&lt;br /&gt;
&lt;br /&gt;
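If you would rather watch this from a script while indexes build, here is a minimal sketch using Python&#039;s standard library (the path is a placeholder; substitute the filesystem you are building on):&lt;br /&gt;
&lt;br /&gt;
   import shutil&lt;br /&gt;
   import time&lt;br /&gt;
   &lt;br /&gt;
   # Placeholder path: point this at the filesystem your indexes are built on&lt;br /&gt;
   path = &amp;quot;/local2/arthor_database&amp;quot;&lt;br /&gt;
   &lt;br /&gt;
   while True:&lt;br /&gt;
      total, used, free = shutil.disk_usage(path)&lt;br /&gt;
      print(&amp;quot;{0}: {1:.1f}% used, {2:.1f} GB free&amp;quot;.format(path, 100.0 * used / total, free / 1e9))&lt;br /&gt;
      time.sleep(60)  # re-check once a minute&lt;br /&gt;
&lt;br /&gt;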
==Building Large Databases==&lt;br /&gt;
At the moment, we are building databases of roughly 500M molecules each by merging SMILES (.smi) files. There are multiple ways to create large databases; one is to merge files that share the same H?? prefix, stopping once the database exceeds 500M molecules (or whatever upper bound you want to use). The Python code below implements this merging process: it takes all of the .smi files in an input directory, sorts them lexicographically, and merges them together in order until the running total exceeds 500M molecules.&lt;br /&gt;
   &lt;br /&gt;
Feel free to modify it if you think a better method exists.&lt;br /&gt;
   &lt;br /&gt;
   import subprocess&lt;br /&gt;
   &lt;br /&gt;
   from os import listdir&lt;br /&gt;
   from os.path import isfile, join&lt;br /&gt;
   &lt;br /&gt;
   mypath = &amp;quot;&amp;lt;Path to directory holding .smi files&amp;gt;&amp;quot;&lt;br /&gt;
   onlyfiles = sorted(f for f in listdir(mypath) if isfile(join(mypath, f)))&lt;br /&gt;
   &lt;br /&gt;
   cur_mols = 0&lt;br /&gt;
   lower_bound = 500000000&lt;br /&gt;
   upper_bound = 600000000&lt;br /&gt;
   files_to_merge = []&lt;br /&gt;
   &lt;br /&gt;
   def merge_files(f_t_m):&lt;br /&gt;
      # Name the merged file after its first and last slice, e.g. &amp;quot;aa_ae.smi&amp;quot;&lt;br /&gt;
      if not f_t_m:&lt;br /&gt;
         return&lt;br /&gt;
      first = f_t_m[0].split(&amp;quot;.&amp;quot;)[0]&lt;br /&gt;
      last = f_t_m[-1].split(&amp;quot;.&amp;quot;)[0]&lt;br /&gt;
      file_name_merge = first + &amp;quot;_&amp;quot; + last + &amp;quot;.smi&amp;quot;&lt;br /&gt;
      print(&amp;quot;File being created: &amp;quot; + file_name_merge)&lt;br /&gt;
   &lt;br /&gt;
      for file in f_t_m:&lt;br /&gt;
         # Append each slice to the merged file in order&lt;br /&gt;
         process = subprocess.Popen(&amp;quot;cat &amp;quot; + join(mypath, file) + &amp;quot; &amp;gt;&amp;gt; &amp;quot; + file_name_merge, shell=True)&lt;br /&gt;
         process.wait()&lt;br /&gt;
   &lt;br /&gt;
   for file in onlyfiles:&lt;br /&gt;
      if file.endswith(&amp;quot;.smi&amp;quot;):&lt;br /&gt;
         print(&amp;quot;Working with &amp;quot; + file)&lt;br /&gt;
         # One molecule per line in a .smi file&lt;br /&gt;
         with open(join(mypath, file)) as fh:&lt;br /&gt;
            mol = sum(1 for line in fh)&lt;br /&gt;
         print(file, mol, cur_mols)&lt;br /&gt;
   &lt;br /&gt;
         if cur_mols + mol &amp;gt; lower_bound:&lt;br /&gt;
            if cur_mols + mol &amp;lt; upper_bound:&lt;br /&gt;
               # This file lands the batch in the target range: merge it in&lt;br /&gt;
               files_to_merge.append(file)&lt;br /&gt;
               merge_files(files_to_merge)&lt;br /&gt;
            else:&lt;br /&gt;
               # This file would overshoot: merge what we have, then emit&lt;br /&gt;
               # the (very large) file as a database of its own&lt;br /&gt;
               merge_files(files_to_merge)&lt;br /&gt;
               merge_files([file])&lt;br /&gt;
            cur_mols = 0&lt;br /&gt;
            files_to_merge.clear()&lt;br /&gt;
         else:&lt;br /&gt;
            cur_mols += mol&lt;br /&gt;
            files_to_merge.append(file)&lt;br /&gt;
   &lt;br /&gt;
   # Merge whatever is left over at the end&lt;br /&gt;
   if files_to_merge:&lt;br /&gt;
      merge_files(files_to_merge)&lt;br /&gt;
&lt;br /&gt;
==Building Arthor Indexes==&lt;br /&gt;
Once you&#039;ve merged the .smi files together, it&#039;s time to build the databases themselves. To do this we use the command:&lt;br /&gt;
&lt;br /&gt;
   smi2atdb -j 0 -p &amp;lt;input .smi file&amp;gt; &amp;lt;output .atdb file&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The flag &amp;quot;-j 0&amp;quot; enables parallel generation, using all available processors to build the .atdb file. The &amp;quot;-p&amp;quot; flag stores each molecule&#039;s offset position in the ATDB file. Since we are building indexes for the Web Application, you must use the &amp;quot;-p&amp;quot; flag. Please note that the .atdb file must share its base name with the .smi file it was built from; that is how the Web Application knows to use the two files together and correctly display the required images. Refer to pages 33-34 in the Arthor documentation for more information.&lt;br /&gt;
&lt;br /&gt;
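For example, indexing a hypothetical merged slice named H26_H28.smi (the file names here are illustrative) would look like:&lt;br /&gt;
&lt;br /&gt;
   smi2atdb -j 0 -p H26_H28.smi H26_H28.atdb&lt;br /&gt;
&lt;br /&gt;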
If there are many large .smi files and you do not want to build each .atdb file manually, you can use the Python script below, which takes all of the .smi files in a directory and converts them to .atdb files. Make sure to set mypath to the directory containing the .smi files. You can change the variable &amp;quot;create_fp&amp;quot; to False if you do not want to create .atdb.fp files (refer to page 9 in the Arthor documentation).&lt;br /&gt;
&lt;br /&gt;
   import subprocess&lt;br /&gt;
   &lt;br /&gt;
   from os import listdir&lt;br /&gt;
   from os.path import isfile, join&lt;br /&gt;
   &lt;br /&gt;
   ARTHOR_BIN = &amp;quot;/nfs/ex9/work/xyz/psql/arthor-3.3-centos7/bin&amp;quot;&lt;br /&gt;
   &lt;br /&gt;
   mypath = &amp;quot;&amp;lt;Path containing the .smi files&amp;gt;&amp;quot;&lt;br /&gt;
   onlyfiles = [f for f in listdir(mypath) if isfile(join(mypath, f))]&lt;br /&gt;
   &lt;br /&gt;
   # Set to False if you do not want .atdb.fp files (see page 9 of the docs)&lt;br /&gt;
   create_fp = True&lt;br /&gt;
   &lt;br /&gt;
   for file in onlyfiles:&lt;br /&gt;
      if file.endswith(&amp;quot;.smi&amp;quot;):&lt;br /&gt;
         base = file.split(&amp;quot;.&amp;quot;)[0]&lt;br /&gt;
   &lt;br /&gt;
         # Build the substructure index (-j 0: all processors, -p: store offsets)&lt;br /&gt;
         process = subprocess.Popen(&amp;quot;{0}/smi2atdb -j 0 -p {1} {2}.atdb&amp;quot;.format(ARTHOR_BIN, join(mypath, file), base), shell=True)&lt;br /&gt;
         process.wait()&lt;br /&gt;
   &lt;br /&gt;
         if process.returncode == 0:&lt;br /&gt;
            print(&amp;quot;SUCCESS! {0}.atdb file was created!&amp;quot;.format(base))&lt;br /&gt;
   &lt;br /&gt;
         if create_fp:&lt;br /&gt;
            # Build the fingerprint index used for similarity search&lt;br /&gt;
            process = subprocess.Popen(&amp;quot;{0}/atdb2fp -j 0 {1}.atdb&amp;quot;.format(ARTHOR_BIN, base), shell=True)&lt;br /&gt;
            process.wait()&lt;br /&gt;
   &lt;br /&gt;
            if process.returncode == 0:&lt;br /&gt;
               print(&amp;quot;SUCCESS! {0}.atdb.fp file was created!&amp;quot;.format(base))&lt;br /&gt;
&lt;br /&gt;
==Uploading Indexes to the Web Application==&lt;br /&gt;
One can upload indexes to the Web Application by changing the &amp;quot;DATADIR&amp;quot; variable in the arthor.cfg file to the directory holding the .atdb files. This is already set up on n-1-136 and n-5-34.&lt;br /&gt;
   &lt;br /&gt;
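For example, to serve the indexes stored in the Round Table data directory used on most of the machines listed below (path taken from the tables on this page), the relevant line would be:&lt;br /&gt;
&lt;br /&gt;
   DATADIR=/local2/arthor_database&lt;br /&gt;
&lt;br /&gt;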
==Further Arthor Optimizations==&lt;br /&gt;
The following edits can be made to arthor.cfg to optimize substructure search queries. More information can be found on pages 6-8 of the Arthor Documentation file.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;NodeAffinity&#039;&#039;&#039; Pins processing to the CPU sets closest to where the data is located in memory (see Non-Uniform Memory Access, NUMA). There is a small start-up cost, so this is most useful for long-running services.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;AsyncHitCountAllowed=true|false&#039;&#039;&#039; After fetching a page from a substructure or formula search, the server will spin off a background process to count the total number of hits. This can be resource-intensive for large databases, may not be desirable for servers under heavy load, and may not even be needed.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;AsyncHitCountMax=#&#039;&#039;&#039; The upper bound for the number of hits to retrieve in background searches. If very generic queries are issued (e.g. benzene or methane), hundreds of millions of hits may be counted. Setting this value to anything other than zero (e.g. 10,000) will stop the background search once it exceeds this limit. Note that some pathological queries may find very few hits but still end up looking at everything.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;MaxConcurrentSearches=#&#039;&#039;&#039; Controls the maximum number of searches that can run concurrently by setting the database pool size. The searches may be on the same or different databases. If a search comes in and the pool is full, it will have to wait for another search to finish, which increases the request time.&lt;br /&gt;
&lt;br /&gt;
Typically, if each search is using all the processing cores on a machine, then additional searches will run at 1/Nth the speed. If the request time is substantially larger than the search time, the request had to wait for resources to become available. When switching between a large number of databases it can be useful to have a larger pool size; the only trade-off is keeping more file pointers open.  Default: 6&lt;br /&gt;
&lt;br /&gt;
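Putting these together, an arthor.cfg tuned along the lines described above might contain the following (the values are illustrative; 10000 echoes the example above and 6 is the stated default):&lt;br /&gt;
&lt;br /&gt;
   NODEAFFINITY=true&lt;br /&gt;
   AsyncHitCountAllowed=true&lt;br /&gt;
   AsyncHitCountMax=10000&lt;br /&gt;
   MaxConcurrentSearches=6&lt;br /&gt;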
&lt;br /&gt;
&#039;&#039;&#039;Binary Fingerprint Folding&#039;&#039;&#039; Arthor uses binary circular fingerprints (ECFP4/radius=2) for similarity. When creating an ATFP index you can specify how large to make your fingerprints. Circular fingerprints are sparser than path based fingerprints (e.g. Daylight) and so can be folded smaller without too much degradation in performance. Folding can significantly reduce the footprint size of a database and improve search speeds. A 256-bit fingerprint takes up 1/4 of the space of 1024-bit and can therefore be traversed 4x faster. &lt;br /&gt;
&lt;br /&gt;
This is more important for very large databases with billions of compounds; in such instances a minor drop in precision is likely tolerable, as ultimately all that happens is that some hits may swap places in the hit list.&lt;br /&gt;
&lt;br /&gt;
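As a back-of-the-envelope check of the space savings, the sketch below counts raw fingerprint bits only (it ignores any per-record overhead in the actual ATFP format, so the absolute numbers are approximate):&lt;br /&gt;
&lt;br /&gt;
   def atfp_gb(n_molecules, n_bits):&lt;br /&gt;
      # Raw fingerprint storage: n_bits per molecule, 8 bits per byte&lt;br /&gt;
      return n_molecules * n_bits / 8 / 1e9&lt;br /&gt;
   &lt;br /&gt;
   for bits in (1024, 512, 256):&lt;br /&gt;
      # 13 billion molecules, roughly the size of Enamine_REAL_Q2-2020-All-13B&lt;br /&gt;
      print(&amp;quot;{0}-bit: {1:,.0f} GB&amp;quot;.format(bits, atfp_gb(13e9, bits)))&lt;br /&gt;
&lt;br /&gt;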
==Virtual Memory==&lt;br /&gt;
In addition to modifying the arthor.cfg file, virtual memory settings can also be tuned to make queries faster. More information can be found on pages 10-16 of the Arthor Documentation.&lt;br /&gt;
&lt;br /&gt;
==Setting up Round Table==&lt;br /&gt;
This is a new feature in Arthor 3.0 and is currently in beta (January 2020). See section 2.4 in the manual.&lt;br /&gt;
As explained in the manual, &amp;quot;Round Table allows you to serve and split chemical searches across multiple host machines.  The implementation provides a lightweight proxy that forwards requests to other Arthor host servers that do the actual search.  Communication is done using the existing Web APIs.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Since Arthor requires CentOS 7, as of January 2020 we have 6 servers that are capable of running Arthor with Round Table.&lt;br /&gt;
&lt;br /&gt;
===Setting up Host Server===&lt;br /&gt;
If we want to add machines to the Round Table, for example &#039;nun&#039; and &#039;samekh&#039;, we need to edit their arthor.cfg files so that when our local machine forwards requests, these secondary servers know to perform the searches they are given.&lt;br /&gt;
&lt;br /&gt;
   $ cat arthor.cfg&lt;br /&gt;
   MaxThreadsPerSearch=4 &lt;br /&gt;
   AutomaticIndex=false &lt;br /&gt;
   DATADIR=&amp;lt;Directory where smiles are located&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We then run the jar server on each of the host machines containing data, on any available port.&lt;br /&gt;
&lt;br /&gt;
   java -jar /nfs/ex9/work/xyz/psql/arthor-3.3-centos7/java/arthor.jar --httpPort &amp;lt;port&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For our local machine, the arthor.cfg file will look different.&lt;br /&gt;
&lt;br /&gt;
   $ cat arthor.cfg&lt;br /&gt;
   [RoundTable] &lt;br /&gt;
   RemoteClient=http://skynet:&amp;lt;port number where jar server is running&amp;gt;/ &lt;br /&gt;
   RemoteClient=http://hal:&amp;lt;port number where jar server is running&amp;gt;/&lt;br /&gt;
&lt;br /&gt;
Please refer to Section 2 in the RoundTable Documentation file (pages 6-8) for more useful information on configuration.&lt;br /&gt;
&lt;br /&gt;
Then run the following command on n-1-136:&lt;br /&gt;
 &lt;br /&gt;
   java -jar /nfs/ex9/work/xyz/psql/arthor-3.3-centos7/java/arthor.jar --httpPort &amp;lt;port&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Arthor Round Table Head===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! CentOS 7 Machine&lt;br /&gt;
! Port&lt;br /&gt;
! Database&lt;br /&gt;
! Arthor Install Location&lt;br /&gt;
! Round Table Data Directory&lt;br /&gt;
! Active&lt;br /&gt;
|-&lt;br /&gt;
| samekh&lt;br /&gt;
| 10.20.0.41:8080&lt;br /&gt;
| Enamine_REAL_Q2-2020-All-13B&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| nun&lt;br /&gt;
| 10.20.0.40:8080&lt;br /&gt;
| Enamine_REAL_Q2-2020-All-41B&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Arthor Round Table Nodes===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! CentOS 7 Machine&lt;br /&gt;
! Port&lt;br /&gt;
! Database&lt;br /&gt;
! Total Files Size&lt;br /&gt;
! Arthor Install Location&lt;br /&gt;
! Round Table Data Directory&lt;br /&gt;
! Active&lt;br /&gt;
|-&lt;br /&gt;
| samekh&lt;br /&gt;
| 10.20.0.41:8008&lt;br /&gt;
| Enamine_REAL_Q2-2020-M-13B (am-ax, 12 slices), Enamine_REAL_Q2-2020-S-13B (aa-ab, 2 slices)&lt;br /&gt;
| 2.5TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| nun&lt;br /&gt;
| 10.20.0.40:8008&lt;br /&gt;
| Enamine_REAL_Space_June_2020_S41B (aa-ae, 5 slices)&lt;br /&gt;
| 738GB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| n-1-16&lt;br /&gt;
| 10.20.1.16:8008&lt;br /&gt;
| Enamine_REAL_Q2-2020-M-13B (aa-al, 12 slices)&lt;br /&gt;
| 2.0TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| n-1-17&lt;br /&gt;
| 10.20.1.17:8008&lt;br /&gt;
| Enamine_REAL_Space_June_2020_M41B (aa-am, 14 slices)&lt;br /&gt;
| 4.3TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| n-5-32&lt;br /&gt;
| 10.20.5.32:8008&lt;br /&gt;
| Enamine_REAL_Space_June_2020_M41B (an-az, 13 slices)&lt;br /&gt;
| 5.0TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| n-5-33&lt;br /&gt;
| 10.20.5.33:8008&lt;br /&gt;
| Enamine_REAL_Space_June_2020_M41B (ba-bl, 12 slices)&lt;br /&gt;
| 5.3TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| qof&lt;br /&gt;
| 10.20.9.29:8008&lt;br /&gt;
| 18, 25, 37, 45, 5&lt;br /&gt;
| 173GB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /export/ex9/work/btingle/auto_atdb/&lt;br /&gt;
| not active&lt;br /&gt;
|-&lt;br /&gt;
| lamed&lt;br /&gt;
| 10.20.9.15:8008&lt;br /&gt;
| 17, 29, 40, 6&lt;br /&gt;
| 512GB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /export/ex6/work/btingle/auto_atdb&lt;br /&gt;
| not active&lt;br /&gt;
|- &lt;br /&gt;
| n-1-20&lt;br /&gt;
| 10.20.1.20:8008&lt;br /&gt;
| 12, 15, 16, 27, 27, 30, 36&lt;br /&gt;
| 897GB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| not active&lt;br /&gt;
|-&lt;br /&gt;
| n-5-34&lt;br /&gt;
| 10.20.5.34:8008&lt;br /&gt;
| 9, 19, 21, 28, 31, 42, all-zinc-xab, all-zinc-xafn, in-stock&lt;br /&gt;
| 875GB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| not active&lt;br /&gt;
|-&lt;br /&gt;
| n-5-35&lt;br /&gt;
| 10.20.5.35:8008&lt;br /&gt;
| 2, 3, 8, 10, 34, 44_results, 44_results2, all-zinc-xad, in-stock-40, on-demand&lt;br /&gt;
| 1.5TB&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_database/&lt;br /&gt;
| not active&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Arthor (local 8081)===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! CentOS 7 Machine&lt;br /&gt;
! Port&lt;br /&gt;
! Database&lt;br /&gt;
! Total Files Size&lt;br /&gt;
! Arthor Install Location&lt;br /&gt;
! Round Table Data Directory&lt;br /&gt;
! Active&lt;br /&gt;
|-&lt;br /&gt;
| samekh&lt;br /&gt;
| 10.20.0.41:8081&lt;br /&gt;
| Enamine_REAL_Q2-2020-All-13B (26 slices)&lt;br /&gt;
| 2.5TB (soft linked) + 2.0TB = 4.5TB total&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_local_8081/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
| nun&lt;br /&gt;
| 10.20.0.40:8081&lt;br /&gt;
| Enamine_REAL_Space_June_2020_S41B (aa-ae, 5 slices), Enamine_REAL_Space_June_2020_M41B (aa-ar)&lt;br /&gt;
| 738GB (soft linked) + 1.9TB = 2.6TB total&lt;br /&gt;
| /opt/nextmove/arthor/arthor-3.3-centos7/&lt;br /&gt;
| /local2/arthor_local_8081/&lt;br /&gt;
| active&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Jgutie11</name></author>
	</entry>
	<entry>
		<id>http://wiki.docking.org/index.php?title=Manage_Lab_Websites&amp;diff=13169</id>
		<title>Manage Lab Websites</title>
		<link rel="alternate" type="text/html" href="http://wiki.docking.org/index.php?title=Manage_Lab_Websites&amp;diff=13169"/>
		<updated>2021-01-05T20:51:59Z</updated>

		<summary type="html">&lt;p&gt;Jgutie11: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== List of Websites ==&lt;br /&gt;
Last updated on 01/05/2021&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Website&lt;br /&gt;
!Machine:Port&lt;br /&gt;
!Run on&lt;br /&gt;
!Hosted in (httpd/conf.d)&lt;br /&gt;
!Working&lt;br /&gt;
|-&lt;br /&gt;
|Amis: https://amis.docking.org/&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|Files2 (169.230.75.3)&lt;br /&gt;
|Yes&lt;br /&gt;
|-&lt;br /&gt;
|Arthor: https://arthor.docking.org/&lt;br /&gt;
|nun, samekh, n-9-22; Port 8000&lt;br /&gt;
|Screen&lt;br /&gt;
|Files2 (169.230.75.3)&lt;br /&gt;
|Yes&lt;br /&gt;
|-&lt;br /&gt;
|BKSLab: http://www.bkslab.org/&lt;br /&gt;
|gimel:5002&lt;br /&gt;
|Supervisord&lt;br /&gt;
|Tau (169.230.26.43)&lt;br /&gt;
|Yes&lt;br /&gt;
|-&lt;br /&gt;
|Blaster: https://blaster.docking.org/&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|Tau (169.230.26.43, can not be moved to files2)&lt;br /&gt;
|Yes&lt;br /&gt;
|-&lt;br /&gt;
|Cartblanche: https://cartblanche.docking.org/&lt;br /&gt;
|gimel:5066-5067&lt;br /&gt;
|&lt;br /&gt;
|Files2 (169.230.75.3)&lt;br /&gt;
|Yes&lt;br /&gt;
|-&lt;br /&gt;
|Covalent: http://covalent.docking.org/&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|Tau (169.230.26.43, can not be moved to files2)&lt;br /&gt;
|Yes&lt;br /&gt;
|-&lt;br /&gt;
|DSF: https://dsf.docking.org/&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|Tau (169.230.26.43)&lt;br /&gt;
|No (wrong website)&lt;br /&gt;
|-&lt;br /&gt;
|Duc: https://duc.docking.org/&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|Tau (169.230.26.43)&lt;br /&gt;
|No&lt;br /&gt;
|-&lt;br /&gt;
|Dud: https://dud.docking.org/&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|Tau (169.230.26.43, can not be moved to files2)&lt;br /&gt;
|Yes&lt;br /&gt;
|-&lt;br /&gt;
|Dude18: http://dude18.docking.org/&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|Tau (169.230.26.43)&lt;br /&gt;
|No&lt;br /&gt;
|-&lt;br /&gt;
|Dude: http://dude.docking.org/&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|Tau (169.230.26.43, can not be moved to files2)&lt;br /&gt;
|Yes&lt;br /&gt;
|-&lt;br /&gt;
|Dudez: http://dudez.docking.org/&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|Files2 (169.230.75.3)&lt;br /&gt;
|Yes&lt;br /&gt;
|-&lt;br /&gt;
|Excipients: http://excipients.docking.org/&lt;br /&gt;
|gimel:8093&lt;br /&gt;
|&lt;br /&gt;
|Tau (169.230.26.43)&lt;br /&gt;
|Yes&lt;br /&gt;
|-&lt;br /&gt;
|Gitlab: https://gitlab.docking.org/&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|Files2 (169.230.75.3)&lt;br /&gt;
|No&lt;br /&gt;
|-&lt;br /&gt;
|HG: https://hg.docking.org/&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|Files2 (169.230.75.3)&lt;br /&gt;
|No&lt;br /&gt;
|-&lt;br /&gt;
|IrwinLab: http://irwinlab.compbio.ucsf.edu/&lt;br /&gt;
|gimel:5004&lt;br /&gt;
|Supervisord&lt;br /&gt;
|Tau (169.230.26.43)&lt;br /&gt;
|Yes&lt;br /&gt;
|-&lt;br /&gt;
|Khanh: http://khanh.docking.org/&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|Tau (169.230.26.43)&lt;br /&gt;
|Yes&lt;br /&gt;
|-&lt;br /&gt;
|Metabolite: http://metabolite.docking.org/&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|Tau (169.230.26.43, can not be moved to files2)&lt;br /&gt;
|No (wrong website)&lt;br /&gt;
|-&lt;br /&gt;
|Deepchemworkshop: http://deepchemworkshop.docking.org/&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|Tau (169.230.26.43)&lt;br /&gt;
|Yes&lt;br /&gt;
|-&lt;br /&gt;
|Prices: http://prices.docking.org/&lt;br /&gt;
|gimel:5022&lt;br /&gt;
|&lt;br /&gt;
|Tau (169.230.26.43)&lt;br /&gt;
|No&lt;br /&gt;
|-&lt;br /&gt;
|Psicquic: http://psicquic.docking.org/&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|Tau (169.230.26.43)&lt;br /&gt;
|Yes (possibly broken)&lt;br /&gt;
|-&lt;br /&gt;
|Reactor: http://reactor.docking.org/&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|Tau (169.230.26.43)&lt;br /&gt;
|Yes&lt;br /&gt;
|-&lt;br /&gt;
|Transporters: http://transporters.ucsf.bkslab.org/&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|Tau (169.230.26.43)&lt;br /&gt;
|Yes&lt;br /&gt;
|-&lt;br /&gt;
|Sea16: http://sea16.docking.org/&lt;br /&gt;
|gimel:8086&lt;br /&gt;
|&lt;br /&gt;
|Files2 (169.230.75.3)&lt;br /&gt;
|Yes&lt;br /&gt;
|-&lt;br /&gt;
|SEC: http://sec.docking.org/&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|Tau (169.230.26.43)&lt;br /&gt;
|Yes&lt;br /&gt;
|-&lt;br /&gt;
|Smallworld Public: http://sw.docking.org/&lt;br /&gt;
|abacus:5020&lt;br /&gt;
|screen&lt;br /&gt;
|Files2 (169.230.75.3)&lt;br /&gt;
|Yes&lt;br /&gt;
|-&lt;br /&gt;
|Smallworld Private: http://swp.docking.org/&lt;br /&gt;
|abacus:8080&lt;br /&gt;
|screen&lt;br /&gt;
|Files2 (169.230.75.3)&lt;br /&gt;
|Yes&lt;br /&gt;
|-&lt;br /&gt;
|Smallworld Super Private: http://swc.docking.org/&lt;br /&gt;
|abacus:5099&lt;br /&gt;
|screen&lt;br /&gt;
|Files2 (169.230.75.3)&lt;br /&gt;
|Yes&lt;br /&gt;
|-&lt;br /&gt;
|Stats: http://stats.docking.org/ &lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|Tau (169.230.26.43, can not be moved to files2)&lt;br /&gt;
|No&lt;br /&gt;
|-&lt;br /&gt;
|Symp: http://symp.docking.org/ &lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|Tau (169.230.26.43)&lt;br /&gt;
|No&lt;br /&gt;
|-&lt;br /&gt;
|TLDR: http://tldr.docking.org/&lt;br /&gt;
|gimel:5011 &lt;br /&gt;
|Supervisord&lt;br /&gt;
|Tau (169.230.26.43)&lt;br /&gt;
|Yes&lt;br /&gt;
|-&lt;br /&gt;
|Tool-Selector: http://tool-selector.ucsf.bkslab.org/&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|Tau (169.230.26.43)&lt;br /&gt;
|Yes&lt;br /&gt;
|-&lt;br /&gt;
|Transportal: http://transportal.docking.org/&lt;br /&gt;
|n-9-22:8123&lt;br /&gt;
|screen&lt;br /&gt;
|Tau (169.230.26.43)&lt;br /&gt;
|Yes&lt;br /&gt;
|-&lt;br /&gt;
|Upload: http://upload.docking.org/&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|Tau (169.230.26.43)&lt;br /&gt;
|Yes&lt;br /&gt;
|-&lt;br /&gt;
|Zinc12: http://zinc12.docking.org/&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|Tau (169.230.26.43, can not be moved to files2)&lt;br /&gt;
|Yes&lt;br /&gt;
|-&lt;br /&gt;
|Zinc15: http://zinc.docking.org/&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|Files2 (169.230.75.3)&lt;br /&gt;
|Yes&lt;br /&gt;
|-&lt;br /&gt;
|Zinc20: http://zinc20.docking.org/&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|Files2 (169.230.75.3)&lt;br /&gt;
|Yes&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Restart Instructions after UPS ==&lt;br /&gt;
=== For websites running on Screen ===&lt;br /&gt;
* Become khtang on gimel&lt;br /&gt;
 $ screen -ls&lt;br /&gt;
&lt;br /&gt;
 21546.SEA	(Detached)&lt;br /&gt;
 23528.transportal	(Detached)&lt;br /&gt;
 22877.conference	(Detached)&lt;br /&gt;
 16132.upload	(Detached)&lt;br /&gt;
 24786.oeb_lib_building	(Detached)&lt;br /&gt;
 4796.ShopAppMol	(Detached)&lt;br /&gt;
 19162.FASTROCS-server	(Detached)&lt;br /&gt;
&lt;br /&gt;
 $ screen -r &amp;lt;screen_id&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== SEA ====&lt;br /&gt;
Become www on gimel2&lt;br /&gt;
 $ cd /nfs/soft/www/apps/seadev/&lt;br /&gt;
 (run the following in a bash shell)&lt;br /&gt;
 $ source tools/anaconda2/bin/activate sea16&lt;br /&gt;
 $ cd work/seaware/seaware-academic&lt;br /&gt;
 $ make SEAserver-stop-devel&lt;br /&gt;
 $ make all&lt;br /&gt;
 $ make SEAserver-start-devel&lt;br /&gt;
&lt;br /&gt;
==== Transportal ====&lt;br /&gt;
 $ cd  /mnt/nfs/soft/www/apps/transportal/src/transportal&lt;br /&gt;
 $ source venv/bin/activate.csh&lt;br /&gt;
 $ python manage.py runserver 0.0.0.0:8123&lt;br /&gt;
&lt;br /&gt;
=== For websites running on Supervisord ===&lt;br /&gt;
* Become root on server the website runs on&lt;br /&gt;
 $ supervisorctl status&lt;br /&gt;
 $ supervisorctl restart &amp;lt;name&amp;gt;&lt;br /&gt;
   e.g.&lt;br /&gt;
 $ supervisorctl restart bks-lab&lt;br /&gt;
 $ supervisorctl restart tools18&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category : Khanh]]&lt;/div&gt;</summary>
		<author><name>Jgutie11</name></author>
	</entry>
</feed>