Sea16 restart

From DISI
[[Category:Curator]][[Category:SEA]]

Latest revision as of 17:34, 11 October 2019



# restart conda_sea16 server

  # on gimel:
  screen -r sea16
  cd /nfs/soft/www/apps/sea/conda_sea16/anaconda2/envs/sea16/src/seaware-academic/
  export PATH=/nfs/soft/www/apps/sea/conda_sea16/anaconda2/envs/sea16/bin:$PATH
  export PATH=/nfs/home/momeara/opt/bin:$PATH
  # in particular, PATH needs 'svgo' to optimize compound images before they are sent to the client

  source activate sea16 
  sh SEAserver/scripts/stop-sea-server.sh production
  sh SEAserver/scripts/run-sea-server.sh production
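A hypothetical post-restart check (not part of the original runbook): before detaching from the screen session, confirm that a sea-server process actually came up. The `sea-server` process-name pattern is an assumption based on the script names above.

```shell
# Assumption: the production server shows up in the process table as
# something matching "sea-server" (per the run/stop script names above).
if pgrep -f sea-server >/dev/null 2>&1; then
    STATUS="sea-server is running"
else
    STATUS="no sea-server process found"
fi
echo "$STATUS"
```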


# update conda_sea16

  ssh s_enkhee@gimel
  su - www
  cd /nfs/soft/www/apps/sea/conda_sea16/anaconda2/envs/sea16/src/seaware-academic/
  export PATH=/nfs/soft/www/apps/sea/conda_sea16/anaconda2/envs/sea16/bin:$PATH
  export PATH=/nfs/home/momeara/opt/bin:$PATH
  source activate sea16
  git pull
  git submodule update --init --recursive
  # kill all sea-server-related processes (e.g. from htop)
  make clean
  # delete the SEA libraries (fitcore, fpcore, libcore, seacore, seashell, seaserver) from the conda env's site-packages folder, e.g.:
  rm -rf /nfs/soft/www/apps/sea/conda_sea16/anaconda2/envs/sea16/lib/python2.7/site-packages/seaserver 
  make all
  make SEAserver-start-production  
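A minimal sketch (not from the original page) of checking that the stale SEA library directories listed above are really gone from site-packages before rebuilding. `SITE_PKGS` defaults to the conda env path used elsewhere on this page and can be overridden.

```shell
# Count leftover SEA package directories in site-packages; expect zero
# after the `rm -rf` cleanup step above.
SITE_PKGS="${SITE_PKGS:-/nfs/soft/www/apps/sea/conda_sea16/anaconda2/envs/sea16/lib/python2.7/site-packages}"
LEFTOVER=0
for pkg in fitcore fpcore libcore seacore seashell seaserver; do
    if [ -d "$SITE_PKGS/$pkg" ]; then
        echo "still present: $SITE_PKGS/$pkg"
        LEFTOVER=$((LEFTOVER + 1))
    fi
done
echo "$LEFTOVER stale package dir(s) remaining"
```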

# run the test:
     export SEA_APP_ROOT=$CONDA_PREFIX/var/seaserver
     export SEA_RUN_FOLDER=$SEA_APP_ROOT/run
     export SEA_DATA_FOLDER=$SEA_APP_ROOT/data
     python -m unittest test.test_illustrate

 
# Redis config

    The Redis warnings are not new, but they are worth taking care of (https://redis.io/topics/admin):

    add 'vm.overcommit_memory = 1' to /etc/sysctl.conf
 
    From the Redis admin docs: with this setting, the Linux kernel will always overcommit memory and never check whether enough memory is available. This increases the risk of out-of-memory situations, but improves memory-intensive workloads.

    run: echo never > /sys/kernel/mm/transparent_hugepage/enabled
    add 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' to /etc/rc.local so it persists across reboots
  
    From the same docs ("Latency induced by transparent huge pages"):

    Unfortunately, when the Linux kernel has transparent huge pages enabled, Redis incurs a big latency penalty after the fork call used to persist to disk. Huge pages cause the following issue:

    Fork is called, creating two processes with shared huge pages.
    In a busy instance, a few event loop runs will cause commands to touch a few thousand pages, triggering copy-on-write of almost the whole process memory.
    This results in high latency and high memory usage.

    Make sure to disable transparent huge pages using the following command:

    echo never > /sys/kernel/mm/transparent_hugepage/enabled
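Putting the two settings above together, here is a sketch of the host-side configuration (a config fragment requiring root; file locations per the Redis admin page linked above):

```shell
# /etc/sysctl.conf should contain:
#   vm.overcommit_memory = 1
# Apply it immediately without a reboot:
sysctl vm.overcommit_memory=1

# Disable transparent huge pages now; /etc/rc.local should repeat this
# line so the setting survives reboots:
echo never > /sys/kernel/mm/transparent_hugepage/enabled

# Verify both settings took effect:
sysctl vm.overcommit_memory
cat /sys/kernel/mm/transparent_hugepage/enabled   # active value shown in brackets
```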

  
  


# update sea16
ssh xyz@gimel
cd /nfs/soft/www/apps/sea/sea16/src/seaware-academic
source ../../env.csh
git pull
rm -rf /nfs/soft/www/apps/sea/sea16/lib/python2.7/site-packages/seaserver
cd /nfs/soft/www/apps/sea/sea16/src/seaware-academic/SEAserver
python setup.py install

# restart server

ssh www@n-1-110
cd /nfs/soft/www/apps/sea/sea16/src/seaware-academic
source ../../env.csh
cd SEAserver
sh scripts/run-sea-server.sh

# restart server

ssh <superuser>@n-1-110
sudo -i
screen -r
screen -dR Sea   # switch to the sea screen
sh scripts/run-sea-server.sh

# how to save the old queue data
 
cd /nfs/soft/www/apps/sea/sea16/var/seaserver/queue 
mv jobs jobs.save
mv tasks.sqlite tasks.sqlite.save
# then restart the SEA server on n-1-110

(Basically, the queue had accumulated too much history, and that was what was slowing the server down.)
(Do this on the first day of each month, and rename the old queue with a month suffix rather than ".save".)
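The monthly rotation suggested above can be sketched as follows; the month-stamped names (`jobs.YYYY-MM`, `tasks.sqlite.YYYY-MM`) are an assumption, as the page only says to "rename the old one to a month version". `QUEUE_DIR` defaults to the queue path from this page.

```shell
# Archive the current queue under a month stamp instead of a fixed
# ".save" suffix, then restart the SEA server on n-1-110.
QUEUE_DIR="${QUEUE_DIR:-/nfs/soft/www/apps/sea/sea16/var/seaserver/queue}"
STAMP=$(date +%Y-%m)    # e.g. 2019-10

if [ -d "$QUEUE_DIR/jobs" ]; then
    mv "$QUEUE_DIR/jobs" "$QUEUE_DIR/jobs.$STAMP"
fi
if [ -f "$QUEUE_DIR/tasks.sqlite" ]; then
    mv "$QUEUE_DIR/tasks.sqlite" "$QUEUE_DIR/tasks.sqlite.$STAMP"
fi
echo "queue archived with suffix .$STAMP; now restart the SEA server on n-1-110"
```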