Search results

  • There is a separate queue gpu.q to manage jobs. To log in interactively to the gpu queue:
    4 KB (550 words) - 17:31, 24 May 2022
  • 6 Beeps - GPU Issue
    437 bytes (70 words) - 00:05, 9 September 2016
  • ...tically split the search into many parallel searches depending on how many GPUs there are. '''This needs to take place on a GPU-enabled computer'''
    2 KB (191 words) - 18:47, 16 April 2020
  • ...ror with puppet script?). If so, then you must install them manually (see GPU Issues in 'Troubleshooting installation issues' below). ===GPU Issues===
    10 KB (1,446 words) - 23:37, 19 November 2018
  • qargs: -q gpu.q -pe local %NPROC% -l gpu=1 qargs: -q gpu.q -pe local %NPROC% -l gpu=1
    14 KB (2,127 words) - 06:26, 17 March 2022
  • ==Step 4. Run minimization using the amber program PMEMD.cuda on a GPU machine == #$ -q gpu.q
    7 KB (1,147 words) - 01:34, 15 December 2018
  • === GPU ===
    8 KB (1,189 words) - 23:45, 22 March 2022
  • Model training requires a GPU and thus will be on gimel5 #SBATCH --partition=gimel5.gpu
    6 KB (988 words) - 19:12, 12 March 2021
  • ...host group. Then all nodes have the proper SELinux permissions to run GPU jobs!
    8 KB (1,165 words) - 00:45, 28 April 2022
  • # CUDA for GPU
    5 KB (793 words) - 16:26, 3 May 2017
  • #$ -q gpu.q # on-one-gpu is used in the Shoichet lab for managing our GPUs and may be removed.
    26 KB (4,023 words) - 00:21, 5 March 2019
  • * Replace the gimel-biggpu in the written-out simulation job with the correct gpu location, gimel5.heavygpu, then transfer (scp) the simulation job to gimel5,
    5 KB (754 words) - 07:16, 9 March 2021
  • Around Spring of 2019, I had a huge number of issues with Schrodinger GPU jobs failing for apparently little reason. The failures occurred on the co
    5 KB (898 words) - 18:08, 1 July 2019
  • Reminder: if the machine is a GPU node, add the nvidia packages in Foreman's puppet classes.
    10 KB (1,443 words) - 17:48, 24 May 2022
  • # CUDA for GPU # This script writes a submission script to run amber MD on a GPU cluster.
    49 KB (7,168 words) - 00:18, 9 November 2017
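
The first result above mentions logging in interactively to the gpu queue. On an SGE/UGE cluster this is typically done with qlogin; the one-liner below is a sketch, assuming gpu.q exists and GPUs are requested with the gpu resource seen in the qargs snippets:

qlogin -q gpu.q -l gpu=1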
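
Several results reference batch submission to gpu.q (the PMEMD.cuda minimization step and the qargs lines). The following is a minimal SGE submission-script sketch, assuming the "local" parallel environment and the gpu=1 resource from those snippets; the slot count and the Amber input/output file names are placeholders:

#!/bin/bash
#$ -S /bin/bash
#$ -cwd
#$ -q gpu.q            # GPU queue from the snippets above
#$ -pe local 4         # slot count stands in for %NPROC%
#$ -l gpu=1            # request one GPU

# placeholder Amber minimization run on the GPU
pmemd.cuda -O -i min.in -o min.out -p prmtop -c inpcrd -r min.rst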
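
The model-training result submits to the Slurm partition gimel5.gpu. Below is a minimal sbatch sketch under that assumption; the GPU request, job name, time limit, and training command are illustrative:

#!/bin/bash
#SBATCH --partition=gimel5.gpu   # partition from the snippet above
#SBATCH --gres=gpu:1             # assumed GPU request
#SBATCH --job-name=train_model
#SBATCH --time=24:00:00

# placeholder training command
python train.py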