Sea16 restart

# restart conda_sea16 server

  screen -r sea16    (on gimel)
  cd /nfs/soft/www/apps/sea/conda_sea16/anaconda2/envs/sea16/src/seaware-academic/
  export PATH=/nfs/soft/www/apps/sea/conda_sea16/anaconda2/envs/sea16/bin:$PATH
  export PATH=/nfs/home/momeara/opt/bin:$PATH
  source activate sea16 
  cd SEAserver
  sh scripts/stop-sea-server.sh
  sh scripts/run-sea-server.sh


# update conda_sea16

  ssh s_enkhee@gimel
  cd /nfs/soft/www/apps/sea/conda_sea16/anaconda2/envs/sea16/src/seaware-academic/
  export PATH=/nfs/soft/www/apps/sea/conda_sea16/anaconda2/envs/sea16/bin:$PATH
  export PATH=/nfs/home/momeara/opt/bin:$PATH
  source activate sea16
  git pull
  git submodule update --init --recursive
  make clean
  delete the SEA-related libraries (fitcore, fpcore, libcore, seacore, seashell, seaserver) from the site-packages folder under the conda env, e.g. (see the sketch after this block):
  rm -rf /nfs/soft/www/apps/sea/conda_sea16/anaconda2/envs/sea16/lib/python2.7/site-packages/seaserver 
  make all
  kill all sea-server related processes, e.g. from htop (see the sketch after this block)
  cd SEAserver
  sh scripts/run-sea-server.sh
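
A sketch of the two manual steps above (removing the SEA packages and cleaning up old processes); the package list comes from the note above, while the sea-server process name pattern is an assumption, so check it against what htop actually shows:

  SITE_PKGS=/nfs/soft/www/apps/sea/conda_sea16/anaconda2/envs/sea16/lib/python2.7/site-packages
  for pkg in fitcore fpcore libcore seacore seashell seaserver; do
      rm -rf "$SITE_PKGS/$pkg"    # remove each SEA package so "make all" reinstalls it cleanly
  done
  pgrep -fl sea-server            # list what is still running (as htop would show)
  pkill -f sea-server             # then kill the leftover sea-server processes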

# run the test

  export SEA_APP_ROOT=$CONDA_PREFIX/var/seaserver
  export SEA_RUN_FOLDER=$SEA_APP_ROOT/run
  export SEA_DATA_FOLDER=$SEA_APP_ROOT/data
  python -m unittest test.test_illustrate
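
These variables assume the sea16 conda environment is already active (so that $CONDA_PREFIX is defined by "source activate sea16"); a quick sanity check:

  echo $CONDA_PREFIX    # should print /nfs/soft/www/apps/sea/conda_sea16/anaconda2/envs/sea16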

 
# Redis config

    As for the Redis warnings, these are not new, but probably something worth taking care of (https://redis.io/topics/admin):

    add 'vm.overcommit_memory = 1' to /etc/sysctl.conf, then reboot or run 'sysctl vm.overcommit_memory=1' for it to take effect immediately
 
    From the Redis admin page: the Linux kernel will always overcommit memory, and never check if enough memory is available. This increases the risk of out-of-memory situations, but also improves memory-intensive workloads.

    run echo never > /sys/kernel/mm/transparent_hugepage/enabled
    add 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' to /etc/rc.local
  
    From the same page ("Latency induced by transparent huge pages"):

    Unfortunately, when a Linux kernel has transparent huge pages enabled, Redis incurs a big latency penalty after the fork call is used in order to persist on disk. Huge pages are the cause of the following issue:

    Fork is called, and two processes with shared huge pages are created.
    In a busy instance, a few event loop runs will cause commands to target a few thousand pages, triggering copy-on-write of almost the whole process memory.
    This results in big latency and big memory usage.

    Make sure to disable transparent huge pages using the following command:

    echo never > /sys/kernel/mm/transparent_hugepage/enabled
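
A quick way to verify both settings are in effect (standard Linux interfaces, nothing specific to this host):

    sysctl vm.overcommit_memory
    # expected: vm.overcommit_memory = 1
    cat /sys/kernel/mm/transparent_hugepage/enabled
    # the active value is shown in brackets and should read [never]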

  
  


# update sea16

  ssh xyz@gimel
  cd /nfs/soft/www/apps/sea/sea16/src/seaware-academic
  source ../../env.csh
  git pull
  rm -rf /nfs/soft/www/apps/sea/sea16/lib/python2.7/site-packages/seaserver
  cd /nfs/soft/www/apps/sea/sea16/src/seaware-academic/SEAserver
  python setup.py install

# restart server (as www)

  ssh www@n-1-110
  cd /nfs/soft/www/apps/sea/sea16/src/seaware-academic
  source ../../env.csh
  cd SEAserver
  sh scripts/run-sea-server.sh

# restart server (as superuser, in the sea screen session)

  ssh <superuser>@n-1-110
  sudo -i
  screen -r            (reattach; this lists the sessions if there is more than one)
  screen -dR Sea       (switch to the sea screen, detaching it if attached elsewhere)
  sh scripts/run-sea-server.sh

# how to save the old queue data
 
  cd /nfs/soft/www/apps/sea/sea16/var/seaserver/queue
  mv jobs jobs.save
  mv tasks.sqlite tasks.sqlite.save
  restart the sea server on n-1-110

(basically, the queue had accumulated too much history, and that was what was slowing the server down)
(do this on the first day of the month and rename the old files with a month suffix, as in the sketch below)
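
A minimal sketch of that monthly rotation; the month-suffix naming is an assumption (the note only says to rename the old files to a month version):

  cd /nfs/soft/www/apps/sea/sea16/var/seaserver/queue
  mv jobs jobs.`date +%Y-%m`                    # archive the old queue under a month suffix (naming is an assumption)
  mv tasks.sqlite tasks.sqlite.`date +%Y-%m`
  # then restart the sea server on n-1-110 as described above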