<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://wiki.anunna.wur.nl/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Dawes001</id>
	<title>HPCwiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://wiki.anunna.wur.nl/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Dawes001"/>
	<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php/Special:Contributions/Dawes001"/>
	<updated>2026-04-17T17:38:22Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.43.1</generator>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=Main_Page&amp;diff=2166</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=Main_Page&amp;diff=2166"/>
		<updated>2022-06-02T13:44:51Z</updated>

		<summary type="html">&lt;p&gt;Dawes001: /* Events */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Anunna is a [http://en.wikipedia.org/wiki/High-performance_computing High Performance Computing] (HPC) infrastructure hosted by [http://www.wageningenur.nl/nl/activiteit/Opening-High-Performance-Computing-cluster-HPC.htm Wageningen University &amp;amp; Research Centre]. It is open for use by all WUR research groups, as well as other organizations, including companies, that have collaborative projects with WUR. &lt;br /&gt;
&lt;br /&gt;
= Using Anunna =&lt;br /&gt;
* [[Tariffs | Costs associated with resource usage]]&lt;br /&gt;
&lt;br /&gt;
== Gaining access to Anunna==&lt;br /&gt;
Access to the cluster and file transfer are traditionally done via [http://en.wikipedia.org/wiki/Secure_Shell SSH and SFTP].&lt;br /&gt;
* [[log_in_to_B4F_cluster | Logging into cluster using ssh]]&lt;br /&gt;
* [[file_transfer | File transfer options]]&lt;br /&gt;
* [[Services | Alternative access methods, and extra features and services on Anunna]]&lt;br /&gt;
* [[Filesystems | Data storage methods on Anunna]]&lt;br /&gt;
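&lt;br /&gt;
A minimal sketch of both access methods from a terminal; the hostname below is illustrative only - use the address given in the login guide above:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
# interactive shell on a login node (replace the hostname with the real one)&lt;br /&gt;
ssh yourid@login.example.wur.nl&lt;br /&gt;
&lt;br /&gt;
# copy a file to your cluster home directory over the same channel&lt;br /&gt;
scp mydata.csv yourid@login.example.wur.nl:~/&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;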
&lt;br /&gt;
== Access Policy ==&lt;br /&gt;
[[Access_Policy | Main Article: Access Policy]]&lt;br /&gt;
&lt;br /&gt;
Access needs to be granted actively: FB-IT creates an account on the cluster for you. Use of resources is limited by the scheduler; your priority to the system&#039;s resources is regulated by the queues (&#039;partitions&#039;) granted to you. Note that the use of Anunna is not free of charge. The list price of CPU time and storage, and possible discounts on that list price for your organisation, can be retrieved from Shared Research Facilities or FB-IT.&lt;br /&gt;
&lt;br /&gt;
= Events =&lt;br /&gt;
* Upcoming courses on 23rd and 30th June!&lt;br /&gt;
&lt;br /&gt;
* Linux Basic - 23rd June&lt;br /&gt;
&lt;br /&gt;
* HPC Basic - 30th June&lt;br /&gt;
&lt;br /&gt;
* [[Courses]] that have happened or are coming up&lt;br /&gt;
* [[Downtime]] that will affect all users&lt;br /&gt;
* [[Meetings]] that may affect the policies of Anunna&lt;br /&gt;
&lt;br /&gt;
= Other Software =&lt;br /&gt;
&lt;br /&gt;
== Cluster Management Software and Scheduler ==&lt;br /&gt;
Anunna uses Bright Cluster Manager software for overall cluster management, and Slurm as job scheduler.&lt;br /&gt;
* [[BCM_on_B4F_cluster | Monitor cluster status with BCM]]&lt;br /&gt;
* [[Using_Slurm | Submit jobs with Slurm]]&lt;br /&gt;
* [[node_usage_graph | See how busy the cluster is right now with &#039;node_usage_graph&#039;]]&lt;br /&gt;
* [[SLURM_Compare | Rosetta Stone of Workload Managers]]&lt;br /&gt;
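&lt;br /&gt;
As a minimal sketch of the day-to-day Slurm workflow (see [[Using_Slurm]] for the partitions and options that apply on Anunna):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
sbatch myjob.sh      # submit a batch script; prints the job id&lt;br /&gt;
squeue -u $USER      # check the state of your own jobs&lt;br /&gt;
scancel &amp;lt;jobid&amp;gt;      # cancel a job you no longer need&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;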
&lt;br /&gt;
== Installation of software by users ==&lt;br /&gt;
&lt;br /&gt;
* [[Domain_specific_software_on_B4Fcluster_installation_by_users | Installing domain specific software: installation by users]]&lt;br /&gt;
* [[Setting local variables]]&lt;br /&gt;
* [[Installing_R_packages_locally | Installing R packages locally]]&lt;br /&gt;
* [[Setting_up_Python_virtualenv | Setting up and using a virtual environment for Python3 ]]&lt;br /&gt;
* [[Virtual_environment_Python_3.4_or_higher | Setting up and using a virtual environment for Python3.4 or higher ]]&lt;br /&gt;
* [[Installing WRF and WPS]]&lt;br /&gt;
* [[Running scripts on a fixed timeschedule (cron)]]&lt;br /&gt;
&lt;br /&gt;
== Installed software ==&lt;br /&gt;
&lt;br /&gt;
* [[Globally_installed_software | Globally installed software]]&lt;br /&gt;
* [[ABGC_modules | ABGC specific modules]]&lt;br /&gt;
&lt;br /&gt;
= Useful Notes = &lt;br /&gt;
&lt;br /&gt;
== Being in control of Environment parameters ==&lt;br /&gt;
&lt;br /&gt;
* [[Using_environment_modules | Using environment modules]]&lt;br /&gt;
* [[Setting local variables]]&lt;br /&gt;
* [[Setting_TMPDIR | Set a custom temporary directory location]]&lt;br /&gt;
* [[Installing_R_packages_locally | Installing R packages locally]]&lt;br /&gt;
* [[Setting_up_Python_virtualenv | Setting up and using a virtual environment for Python3 ]]&lt;br /&gt;
&lt;br /&gt;
== Controlling costs ==&lt;br /&gt;
&lt;br /&gt;
* [[SACCT | using SACCT to see your costs]]&lt;br /&gt;
* [[get_my_bill | using the &amp;quot;get_my_bill&amp;quot; script to estimate costs]]&lt;br /&gt;
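&lt;br /&gt;
For a quick look at recent usage with plain sacct (a minimal sketch; the get_my_bill script remains the better estimate of actual costs):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
# list your jobs since a given date, with the fields that drive cost&lt;br /&gt;
sacct -u $USER --starttime=2022-01-01 --format=JobID,JobName,Elapsed,AllocCPUS,State&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;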
&lt;br /&gt;
== Management ==&lt;br /&gt;
The Product Owner of Anunna is Alexander van Ittersum (Wageningen UR, FB-IT, C&amp;amp;PS). [[User:dawes001 | Gwen Dawes (Wageningen UR, FB-IT, C&amp;amp;PS)]] and [[User:haars001 | Jan van Haarst (Wageningen UR, FB-IT, C&amp;amp;PS)]] are responsible for [[Maintenance_and_Management | Maintenance and Management]] of the cluster.&lt;br /&gt;
&lt;br /&gt;
* [[Roadmap | Ambitions regarding innovation, support and administration of Anunna ]]&lt;br /&gt;
&lt;br /&gt;
= Miscellaneous =&lt;br /&gt;
* [[Mailinglist | Electronic mail discussion lists]]&lt;br /&gt;
* [[History_of_the_Cluster | Historical information on the startup of Anunna]]&lt;br /&gt;
* [[Bioinformatics_tips_tricks_workflows | Bioinformatics tips, tricks, and workflows]]&lt;br /&gt;
* [[Parallel_R_code_on_SLURM | Running parallel R code on SLURM]]&lt;br /&gt;
* [[Convert_between_MediaWiki_and_other_formats | Convert between MediaWiki format and other formats]]&lt;br /&gt;
* [[Manual GitLab | GitLab: Create projects and add scripts]]&lt;br /&gt;
* [[Monitoring_executions | Monitoring job execution]]&lt;br /&gt;
* [[Shared_folders | Working with shared folders in the Lustre file system]]&lt;br /&gt;
&lt;br /&gt;
= See also =&lt;br /&gt;
* [[Maintenance_and_Management | Maintenance and Management]]&lt;br /&gt;
* [[BCData | BCData]]&lt;br /&gt;
* [[Mailinglist | Electronic mail discussion lists]]&lt;br /&gt;
* [[About_ABGC | About ABGC]]&lt;br /&gt;
* [[Computer_cluster | High Performance Computing @ABGC]]&lt;br /&gt;
* [[Lustre_PFS_layout | Lustre Parallel File System layout]]&lt;br /&gt;
&lt;br /&gt;
= External links =&lt;br /&gt;
{| width=&amp;quot;90%&amp;quot;&lt;br /&gt;
|- valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| width=&amp;quot;30%&amp;quot; |&lt;br /&gt;
* [https://www.wur.nl/en/Value-Creation-Cooperation/Facilities/Wageningen-Shared-Research-Facilities/Our-facilities/Show/High-Performance-Computing-Cluster-HPC-Anunna.htm SRF offers an HPC facility]&lt;br /&gt;
| width=&amp;quot;30%&amp;quot; |&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Scientific_Linux Scientific Linux]&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Help:Cheatsheet Help with editing Wiki pages]&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Dawes001</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=Spark&amp;diff=2089</id>
		<title>Spark</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=Spark&amp;diff=2089"/>
		<updated>2020-10-05T12:23:46Z</updated>

		<summary type="html">&lt;p&gt;Dawes001: /* SPARK in Jupyter */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Apache Spark is a means of distributing computation across multiple worker machines. It is the successor to Hadoop, and allows a wider variety of code to be executed on the clustered resources. The only requirement for Spark to operate is that the workers must be able to reach each other via TCP; it therefore allows compute to be executed on very simple resources, provided the code itself can be translated into the MapReduce paradigm.&lt;br /&gt;
&lt;br /&gt;
== SPARK on HPC ==&lt;br /&gt;
In order to create a personal SPARK cluster, you must first request resources on the HPC. Use this example submission script to initialise your cluster:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;#!/bin/bash&lt;br /&gt;
#SBATCH --time=&amp;lt;length&amp;gt;&lt;br /&gt;
#SBATCH --mem-per-cpu=4000&lt;br /&gt;
#SBATCH --nodes=&amp;lt;number of nodes&amp;gt;&lt;br /&gt;
#SBATCH --tasks-per-node=&amp;lt;number of workers per node&amp;gt;&lt;br /&gt;
#SBATCH --job-name=&amp;quot;my spark cluster&amp;quot;&lt;br /&gt;
#SBATCH --qos=QOS&lt;br /&gt;
&lt;br /&gt;
module load spark/3.0.1-2.7&lt;br /&gt;
module load python/3.8.5&lt;br /&gt;
&lt;br /&gt;
source $SPARK_HOME/wur/start-spark&lt;br /&gt;
&lt;br /&gt;
tail -f /dev/null&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
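&lt;br /&gt;
If you save the script above as, say, spark-cluster.sh (the filename is just an example), you can submit it and watch for the job to start:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;sbatch spark-cluster.sh&lt;br /&gt;
squeue -u $USER&amp;lt;/nowiki&amp;gt;&lt;br /&gt;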
&lt;br /&gt;
This will spawn a new cluster of your desired dimensions once resources are available. This spark module has been written to output its logs to your home directory, at:&lt;br /&gt;
&lt;br /&gt;
/home/WUR/yourid/.spark/&amp;lt;jobid&amp;gt;/&lt;br /&gt;
&lt;br /&gt;
In this folder you will find the raw logs of the master and all worker threads. By default the master will consume 1 GB of memory from the first process, and so a single 4 GB &#039;cluster&#039; will be provided with one 3 GB worker. You can adjust the CPU/memory use by adjusting the parameters in your batch script.&lt;br /&gt;
&lt;br /&gt;
Within this log folder you will find two unique files: master and master-console. master will always contain the URI of the current Spark cluster&#039;s master access point, and master-console the URL of its web console. &lt;br /&gt;
&lt;br /&gt;
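For example, to read both files from a shell (with &amp;lt;jobid&amp;gt; being the id of your running cluster job):&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;cat ~/.spark/&amp;lt;jobid&amp;gt;/master&lt;br /&gt;
cat ~/.spark/&amp;lt;jobid&amp;gt;/master-console&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;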
To access the web console, the easiest solution is to use the text-mode browser links:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;links http://myspark:8081&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This will render the page for you in the terminal. Ctrl-R reloads the page; press q to quit.&lt;br /&gt;
&lt;br /&gt;
There are several caveats to remember with this:&lt;br /&gt;
&lt;br /&gt;
* The cluster exists (and consumes resources) until you cancel it with scancel &amp;lt;jobid&amp;gt; - see the sketch below&lt;br /&gt;
* There is no security at all - any user of the HPC can reach both the master and its console at any time if they know the host and port.&lt;br /&gt;
&lt;br /&gt;
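A minimal clean-up sketch:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;squeue -u $USER   # find the jobid of your Spark cluster job&lt;br /&gt;
scancel &amp;lt;jobid&amp;gt;   # stop the cluster and release its resources&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;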
== Instant SPARK ==&lt;br /&gt;
&lt;br /&gt;
You can also spin up clusters solely to execute scripts. Simply replace the last line from the example above:&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;tail -f /dev/null&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
with&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;spark-submit myscript.py&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And after the script has executed, the cluster will automatically terminate.&lt;br /&gt;
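&lt;br /&gt;
Put together, the tail of such a batch script would look like this (a sketch combining the lines above; myscript.py stands for your own PySpark script):&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;module load spark/3.0.1-2.7&lt;br /&gt;
module load python/3.8.5&lt;br /&gt;
&lt;br /&gt;
# start the cluster, then run the script against it&lt;br /&gt;
source $SPARK_HOME/wur/start-spark&lt;br /&gt;
spark-submit myscript.py&amp;lt;/nowiki&amp;gt;&lt;br /&gt;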
&lt;br /&gt;
== SPARK in Jupyter ==&lt;br /&gt;
&lt;br /&gt;
There is a kernel available for using Spark from Jupyter. All this does is set up the correct paths to the Python version and the Spark binaries for you. To set up your SparkContext, the first cell of each notebook should be:&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;import os, pyspark&lt;br /&gt;
with open(os.environ[&#039;HOME&#039;] + &#039;/.spark/current/master&#039;) as f:&lt;br /&gt;
    conf = (pyspark.SparkConf()&lt;br /&gt;
            .setMaster(f.read().strip())&lt;br /&gt;
            .setAppName(&amp;quot;MyName&amp;quot;))&lt;br /&gt;
&lt;br /&gt;
sc = pyspark.SparkContext(conf=conf)&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Use the cluster master URI from the master file in your job output, as above. Subsequent cells will then have sc defined. Run this cell only once - attempting to reconnect will throw an error. That application will run until the kernel is terminated, and will prevent other applications from being executed - you may wish to manually terminate your kernel from the top bar in Jupyter to free resources.&lt;br /&gt;
&lt;br /&gt;
As a teacher, you might want to put that master file somewhere else, such as /lustre/shared, so that students can connect to your cluster.&lt;/div&gt;</summary>
		<author><name>Dawes001</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=Spark&amp;diff=2088</id>
		<title>Spark</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=Spark&amp;diff=2088"/>
		<updated>2020-10-05T12:21:30Z</updated>

		<summary type="html">&lt;p&gt;Dawes001: /* SPARK on HPC */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Apache Spark is a means of distributing computation across multiple worker machines. It is the successor to Hadoop, and allows a wider variety of code to be executed on the clustered resources. The only requirement for Spark to operate is that the workers must be able to reach each other via TCP; it therefore allows compute to be executed on very simple resources, provided the code itself can be translated into the MapReduce paradigm.&lt;br /&gt;
&lt;br /&gt;
== SPARK on HPC ==&lt;br /&gt;
In order to create a personal SPARK cluster, you must first request resources on the HPC. Use this example submission script to initialise your cluster:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;#!/bin/bash&lt;br /&gt;
#SBATCH --time=&amp;lt;length&amp;gt;&lt;br /&gt;
#SBATCH --mem-per-cpu=4000&lt;br /&gt;
#SBATCH --nodes=&amp;lt;number of nodes&amp;gt;&lt;br /&gt;
#SBATCH --tasks-per-node=&amp;lt;number of workers per node&amp;gt;&lt;br /&gt;
#SBATCH --job-name=&amp;quot;my spark cluster&amp;quot;&lt;br /&gt;
#SBATCH --qos=QOS&lt;br /&gt;
&lt;br /&gt;
module load spark/3.0.1-2.7&lt;br /&gt;
module load python/3.8.5&lt;br /&gt;
&lt;br /&gt;
source $SPARK_HOME/wur/start-spark&lt;br /&gt;
&lt;br /&gt;
tail -f /dev/null&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This will spawn a new cluster of your desired dimensions once resources are available. This spark module has been written to output its logs to your home directory, at:&lt;br /&gt;
&lt;br /&gt;
/home/WUR/yourid/.spark/&amp;lt;jobid&amp;gt;/&lt;br /&gt;
&lt;br /&gt;
In this folder you will find the raw logs of the master and all worker threads. By default the master will consume 1 GB of memory from the first process, and so a single 4 GB &#039;cluster&#039; will be provided with one 3 GB worker. You can adjust the CPU/memory use by adjusting the parameters in your batch script.&lt;br /&gt;
&lt;br /&gt;
Within this log folder you will find two unique files: master and master-console. master will always contain the URI of the current Spark cluster&#039;s master access point, and master-console the URL of its web console. &lt;br /&gt;
&lt;br /&gt;
To access the web console, the easiest solution is to use the text-mode browser links:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;links http://myspark:8081&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This will render the page for you in the terminal. Ctrl-R reloads the page; press q to quit.&lt;br /&gt;
&lt;br /&gt;
There are several caveats to remember with this:&lt;br /&gt;
&lt;br /&gt;
* The cluster exists (and consumes resources) until you cancel it with scancel &amp;lt;jobid&amp;gt;&lt;br /&gt;
* There is no security at all - any user of the HPC can reach both the master and its console at any time if they know the host and port.&lt;br /&gt;
&lt;br /&gt;
== Instant SPARK ==&lt;br /&gt;
&lt;br /&gt;
You can also spin up clusters solely to execute scripts. Simply replace the last line from the example above:&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;tail -f /dev/null&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
with&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;spark-submit myscript.py&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And after the script has executed, the cluster will automatically terminate.&lt;br /&gt;
&lt;br /&gt;
== SPARK in Jupyter ==&lt;br /&gt;
&lt;br /&gt;
There is a kernel available for using Spark from Jupyter. All this does (for now) is set up the correct paths to the Python version and the Spark binaries for you. To set up your SparkContext, the first cell of each notebook should be:&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;import pyspark&lt;br /&gt;
conf = (pyspark.SparkConf()&lt;br /&gt;
         .setMaster(&amp;quot;spark://mysparkcluster:7077&amp;quot;)&lt;br /&gt;
         .setAppName(&amp;quot;MyName&amp;quot;))&lt;br /&gt;
&lt;br /&gt;
sc = pyspark.SparkContext(conf=conf)&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Use the cluster master URI from the master file in your job output, as above. Subsequent cells will then have sc defined. Run this cell only once - attempting to reconnect will throw an error. That application will run until the kernel is terminated, and will prevent other applications from being executed - you may wish to manually terminate your kernel from the top bar in Jupyter to free resources.&lt;/div&gt;</summary>
		<author><name>Dawes001</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=Conda_for_teaching&amp;diff=2077</id>
		<title>Conda for teaching</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=Conda_for_teaching&amp;diff=2077"/>
		<updated>2020-05-27T09:03:03Z</updated>

		<summary type="html">&lt;p&gt;Dawes001: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;You are going to give a teaching course, and you need a specific code environment.&lt;br /&gt;
&lt;br /&gt;
== Setup ==&lt;br /&gt;
First - find a good location that everyone can read (and not write). I&#039;d suggest somewhere under &amp;lt;code&amp;gt;/cm/shared/apps/SHARED/&amp;lt;/code&amp;gt; as a starting point - this allows everyone to access this location. It&#039;s important not to put anything secret there - it&#039;s a public resource, so please bear that in mind.&lt;br /&gt;
&lt;br /&gt;
Next - create a folder for your environment:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
mkdir /cm/shared/apps/SHARED/my_conda_env&lt;br /&gt;
chmod +r /cm/shared/apps/SHARED/my_conda_env&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
You may want to manipulate the permissions for this folder if someone is going to set this up with you. Consider the commands in [[Shared folders]].&lt;br /&gt;
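&lt;br /&gt;
For example, to let co-maintainers in a shared group work on the folder with you (a sketch; my_course_group is a hypothetical group name):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
# hand the folder to a shared group and make it group-writable;&lt;br /&gt;
# the setgid bit keeps new files in that group&lt;br /&gt;
chgrp -R my_course_group /cm/shared/apps/SHARED/my_conda_env&lt;br /&gt;
chmod -R g+rwX /cm/shared/apps/SHARED/my_conda_env&lt;br /&gt;
chmod g+s /cm/shared/apps/SHARED/my_conda_env&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;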
&lt;br /&gt;
Then, install Anaconda into it:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
wget https://repo.anaconda.com/archive/Anaconda3-YEAR.MONTH-Linux-x86_64.sh&lt;br /&gt;
bash Anaconda3-YEAR.MONTH-Linux-x86_64.sh -s -b -p /cm/shared/apps/SHARED/my_conda_env&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now you have a working conda environment in this folder. You can manage it by running &amp;lt;code&amp;gt;/cm/shared/apps/SHARED/my_conda_env/bin/conda&amp;lt;/code&amp;gt;, but I would recommend creating a modulefile so that you can load it by default.&lt;br /&gt;
&lt;br /&gt;
Create the following example modulefile in a matching &amp;lt;code&amp;gt;/cm/shared/modulefiles/SHARED/my_conda_env&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#%Module -*- tcl -*-&lt;br /&gt;
##&lt;br /&gt;
## conda environment modulefile&lt;br /&gt;
##&lt;br /&gt;
&lt;br /&gt;
set                     loadedmodules           [split $::env(LOADEDMODULES) &amp;quot;:&amp;quot;]&lt;br /&gt;
set                     modulepath              [split $ModulesCurrentModulefile &amp;quot;/&amp;quot;]&lt;br /&gt;
set                     envpath                 [lrange $modulepath 4 end]&lt;br /&gt;
&lt;br /&gt;
set                     root                    /cm/shared/apps/[join $envpath &amp;quot;/&amp;quot;]&lt;br /&gt;
&lt;br /&gt;
proc ModulesHelp { } {&lt;br /&gt;
        global envpath&lt;br /&gt;
&lt;br /&gt;
        puts stderr &amp;quot;\tThis module provides the conda environment at $envpath&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
if { [module-info mode] != &amp;quot;whatis&amp;quot; } {&lt;br /&gt;
        puts stderr &amp;quot;[module-info mode] environment $envpath .&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
module-whatis   &amp;quot;Provides environment $envpath&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
prepend-path            PATH                    $root/bin&lt;br /&gt;
prepend-path            LD_LIBRARY_PATH         $root/lib&lt;br /&gt;
prepend-path            LIBRARY_PATH            $root/lib&lt;br /&gt;
prepend-path            CPATH                   $root/include&lt;br /&gt;
prepend-path            MANPATH                 $root/share/man&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This will allow you to &amp;lt;code&amp;gt;module load SHARED/my_conda_env&amp;lt;/code&amp;gt; and thus have &amp;lt;code&amp;gt;conda&amp;lt;/code&amp;gt; pathed to the currently active environment.&lt;br /&gt;
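&lt;br /&gt;
For example, a quick sanity check that the module resolves to the shared environment:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
module load SHARED/my_conda_env&lt;br /&gt;
which conda              # should print /cm/shared/apps/SHARED/my_conda_env/bin/conda&lt;br /&gt;
conda install -y numpy   # packages now land in the shared environment&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;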
&lt;br /&gt;
== Jupyter Kernel ==&lt;br /&gt;
&lt;br /&gt;
In order for students to be able to use this environment in jupyter, they will need a kernel definition.&lt;br /&gt;
&lt;br /&gt;
Kernel definitions are usually a separate folder containing, in particular, a file called &amp;lt;code&amp;gt;kernel.json&amp;lt;/code&amp;gt;, plus an icon representing the kernel, and other helper code.&lt;br /&gt;
&lt;br /&gt;
To set this up, create the following folder for them to access:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
mkdir -p /cm/shared/apps/SHARED/my_conda_env/kernel/my_conda_env&lt;br /&gt;
chmod +r /cm/shared/apps/SHARED/my_conda_env/kernel/my_conda_env&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
They will need to copy this folder into their home directory, specifically &amp;lt;code&amp;gt;$HOME/.local/share/jupyter/kernels/&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Inside this folder, create the following &amp;lt;code&amp;gt;kernel.json&amp;lt;/code&amp;gt; file. Note that the paths will need to match your environment&#039;s path if you&#039;re using a different location!&lt;br /&gt;
&lt;br /&gt;
=== Python Kernel ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
vim /cm/shared/apps/SHARED/my_conda_env/kernel/my_conda_env/kernel.json&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
 &amp;quot;env&amp;quot;: {&lt;br /&gt;
   &amp;quot;PATH&amp;quot;:&lt;br /&gt;
&lt;br /&gt;
&amp;quot;/cm/shared/apps/SHARED/my_conda_env/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin&amp;quot;&lt;br /&gt;
},&lt;br /&gt;
 &amp;quot;language&amp;quot;: &amp;quot;python&amp;quot;,&lt;br /&gt;
 &amp;quot;argv&amp;quot;: [&lt;br /&gt;
  &amp;quot;/cm/shared/apps/SHARED/my_conda_env/bin/python&amp;quot;,&lt;br /&gt;
  &amp;quot;-m&amp;quot;,&lt;br /&gt;
  &amp;quot;ipykernel&amp;quot;,&lt;br /&gt;
  &amp;quot;-f&amp;quot;,&lt;br /&gt;
  &amp;quot;{connection_file}&amp;quot;&lt;br /&gt;
 ],&lt;br /&gt;
 &amp;quot;display_name&amp;quot;: &amp;quot;my_conda_env&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For Python, that&#039;s it - ipykernel is installed automatically with the Anaconda installation.&lt;br /&gt;
&lt;br /&gt;
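Once a student has copied the kernel folder into place (see Last Steps below), they can verify that Jupyter picks it up:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
# should list my_conda_env alongside the default kernels&lt;br /&gt;
jupyter kernelspec list&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;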
=== R Kernel ===&lt;br /&gt;
&lt;br /&gt;
For an R kernel, you need to make sure that the IRkernel package is installed. This is the package that is used to communicate from Jupyter to your running R kernel.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
# Install the IRkernel package into the shared environment&lt;br /&gt;
# (the conda package is named r-irkernel)&lt;br /&gt;
/cm/shared/apps/SHARED/my_conda_env/bin/conda install -y r-irkernel&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You&#039;ll also need to create two files: &amp;lt;code&amp;gt;kernel.json&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;kernel.js&amp;lt;/code&amp;gt;. The &amp;lt;code&amp;gt;kernel.js&amp;lt;/code&amp;gt; file is a helper script that allows Jupyter to communicate with R effectively:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
vim /cm/shared/apps/SHARED/my_conda_env/kernel/my_conda_env/kernel.json&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
 &amp;quot;env&amp;quot;: {&lt;br /&gt;
   &amp;quot;LD_LIBRARY_PATH&amp;quot;:&lt;br /&gt;
      &amp;quot;/cm/shared/apps/SHARED/my_conda_env/lib:/cm/shared/apps/SHARED/my_conda_env/lib64&amp;quot;&lt;br /&gt;
 },&lt;br /&gt;
  &amp;quot;argv&amp;quot;: [&amp;quot;/cm/shared/apps/SHARED/my_conda_env/bin/R&amp;quot;, &amp;quot;--slave&amp;quot;, &amp;quot;-e&amp;quot;, &amp;quot;IRkernel::main()&amp;quot;, &amp;quot;--args&amp;quot;, &amp;quot;{connection_file}&amp;quot;],&lt;br /&gt;
  &amp;quot;display_name&amp;quot;: &amp;quot;MAE50806-AdvMolEcol/Sandbox_R&amp;quot;,&lt;br /&gt;
  &amp;quot;language&amp;quot;: &amp;quot;R&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
vim /cm/shared/apps/SHARED/my_conda_env/kernel/my_conda_env/kernel.js&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
const cmd_key = /Mac/.test(navigator.platform) ? &#039;Cmd&#039; : &#039;Ctrl&#039;&lt;br /&gt;
&lt;br /&gt;
const edit_actions = [&lt;br /&gt;
	{&lt;br /&gt;
		name: &#039;R Assign&#039;,&lt;br /&gt;
		shortcut: &#039;Alt--&#039;,&lt;br /&gt;
		icon: &#039;fa-long-arrow-left&#039;,&lt;br /&gt;
		help: &#039;R: Inserts the left-assign operator (&amp;lt;-)&#039;,&lt;br /&gt;
		handler(cm) {&lt;br /&gt;
			cm.replaceSelection(&#039; &amp;lt;- &#039;)&lt;br /&gt;
		},&lt;br /&gt;
	},&lt;br /&gt;
	{&lt;br /&gt;
		name: &#039;R Pipe&#039;,&lt;br /&gt;
		shortcut: `Shift-${cmd_key}-M`,&lt;br /&gt;
		icon: &#039;fa-angle-right&#039;,&lt;br /&gt;
		help: &#039;R: Inserts the magrittr pipe operator (%&amp;gt;%)&#039;,&lt;br /&gt;
		handler(cm) {&lt;br /&gt;
			cm.replaceSelection(&#039; %&amp;gt;% &#039;)&lt;br /&gt;
		},&lt;br /&gt;
	},&lt;br /&gt;
	{&lt;br /&gt;
		name: &#039;R Help&#039;,&lt;br /&gt;
		shortcut: &#039;F1&#039;,&lt;br /&gt;
		icon: &#039;fa-book&#039;,&lt;br /&gt;
		help: &#039;R: Shows the manpage for the item under the cursor&#039;,&lt;br /&gt;
		handler(cm, cell) {&lt;br /&gt;
			const {anchor, head} = cm.findWordAt(cm.getCursor())&lt;br /&gt;
			const word = cm.getRange(anchor, head)&lt;br /&gt;
			&lt;br /&gt;
			const callbacks = cell.get_callbacks()&lt;br /&gt;
			const options = {silent: false, store_history: false, stop_on_error: true}&lt;br /&gt;
			cell.last_msg_id = cell.notebook.kernel.execute(`help(\`${word}\`)`, callbacks, options)&lt;br /&gt;
		},&lt;br /&gt;
	},&lt;br /&gt;
]&lt;br /&gt;
&lt;br /&gt;
const prefix = &#039;irkernel&#039;&lt;br /&gt;
&lt;br /&gt;
function add_edit_shortcut(notebook, actions, keyboard_manager, edit_action) {&lt;br /&gt;
	const {name, shortcut, icon, help, handler} = edit_action&lt;br /&gt;
	&lt;br /&gt;
	const action = {&lt;br /&gt;
		icon, help,&lt;br /&gt;
		help_index : &#039;zz&#039;,&lt;br /&gt;
		handler: () =&amp;gt; {&lt;br /&gt;
			const cell = notebook.get_selected_cell()&lt;br /&gt;
			handler(cell.code_mirror, cell)&lt;br /&gt;
		},&lt;br /&gt;
	}&lt;br /&gt;
	&lt;br /&gt;
	const full_name = actions.register(action, name, prefix)&lt;br /&gt;
	&lt;br /&gt;
	Jupyter.keyboard_manager.edit_shortcuts.add_shortcut(shortcut, full_name)&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
function render_math(pager, html) {&lt;br /&gt;
	if (!html) return&lt;br /&gt;
	const $container = pager.pager_element.find(&#039;#pager-container&#039;)&lt;br /&gt;
	$container.find(&#039;p[style=&amp;quot;text-align: center;&amp;quot;]&#039;).map((i, e) =&amp;gt;&lt;br /&gt;
		e.outerHTML = `\\[${e.querySelector(&#039;i&#039;).innerHTML}\\]`)&lt;br /&gt;
	$container.find(&#039;i&#039;).map((i, e) =&amp;gt;&lt;br /&gt;
		e.outerHTML = `\\(${e.innerHTML}\\)`)&lt;br /&gt;
	MathJax.Hub.Queue([&#039;Typeset&#039;, MathJax.Hub, $container[0]])&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
define([&#039;base/js/namespace&#039;], ({&lt;br /&gt;
	notebook,&lt;br /&gt;
	actions,&lt;br /&gt;
	keyboard_manager,&lt;br /&gt;
	pager,&lt;br /&gt;
}) =&amp;gt; ({&lt;br /&gt;
	onload() {&lt;br /&gt;
		edit_actions.forEach(a =&amp;gt; add_edit_shortcut(notebook, actions, keyboard_manager, a))&lt;br /&gt;
		&lt;br /&gt;
		pager.events.on(&#039;open_with_text.Pager&#039;, (event, {data: {&#039;text/html&#039;: html}}) =&amp;gt;&lt;br /&gt;
			render_math(pager, html))&lt;br /&gt;
	},&lt;br /&gt;
}))&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Last Steps ==&lt;br /&gt;
&lt;br /&gt;
In order to help your students get their kernel definitions into &amp;lt;code&amp;gt;$HOME/.local/share/jupyter/kernels/&amp;lt;/code&amp;gt;, it&#039;s probably a good idea to write a small notebook that does this when executed, or else instruct them to run:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
cp -rv /cm/shared/apps/SHARED/my_conda_env/kernel/* $HOME/.local/share/jupyter/kernels/&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;/div&gt;</summary>
		<author><name>Dawes001</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=Conda_for_teaching&amp;diff=2076</id>
		<title>Conda for teaching</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=Conda_for_teaching&amp;diff=2076"/>
		<updated>2020-05-27T09:01:02Z</updated>

		<summary type="html">&lt;p&gt;Dawes001: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;You are going to give a teaching course, and you need a specific code environment.&lt;br /&gt;
&lt;br /&gt;
== Setup ==&lt;br /&gt;
First - find a good location that everyone can read (and not write). I&#039;d suggest somewhere under /cm/shared/apps/SHARED/ as a starting point, since everyone can access that location. It&#039;s important not to put anything secret there - it&#039;s a public resource, so please bear that in mind.&lt;br /&gt;
&lt;br /&gt;
Next - create a folder for your environment:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
mkdir /cm/shared/apps/SHARED/my_conda_env&lt;br /&gt;
chmod +r /cm/shared/apps/SHARED/my_conda_env&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
You may want to manipulate the permissions for this folder if someone is going to set this up with you. Consider the commands in [[Shared folders]].&lt;br /&gt;
&lt;br /&gt;
Then, install Anaconda into it:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
wget https://repo.anaconda.com/archive/Anaconda3-YEAR.MONTH-Linux-x86_64.sh&lt;br /&gt;
bash Anaconda3-YEAR.MONTH-Linux-x86_64.sh -s -b -p /cm/shared/apps/SHARED/my_conda_env&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now you have a working conda environment in this folder. You can manage it by running /cm/shared/apps/SHARED/my_conda_env/bin/conda, but I would recommend creating a modulefile so that you can load it by default.&lt;br /&gt;
&lt;br /&gt;
Create the following example modulefile in a matching /cm/shared/modulefiles/SHARED/my_conda_env :&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#%Module -*- tcl -*-&lt;br /&gt;
##&lt;br /&gt;
## conda environment modulefile&lt;br /&gt;
##&lt;br /&gt;
&lt;br /&gt;
set                     loadedmodules           [split $::env(LOADEDMODULES) &amp;quot;:&amp;quot;]&lt;br /&gt;
set                     modulepath              [split $ModulesCurrentModulefile &amp;quot;/&amp;quot;]&lt;br /&gt;
set                     envpath                 [lrange $modulepath 4 end]&lt;br /&gt;
&lt;br /&gt;
set                     root                    /cm/shared/apps/[join $envpath &amp;quot;/&amp;quot;]&lt;br /&gt;
&lt;br /&gt;
proc ModulesHelp { } {&lt;br /&gt;
        global envpath&lt;br /&gt;
&lt;br /&gt;
        puts stderr &amp;quot;\tThis module provides the conda environment at $envpath&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
if { [module-info mode] != &amp;quot;whatis&amp;quot; } {&lt;br /&gt;
        puts stderr &amp;quot;[module-info mode] environment $envpath .&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
module-whatis   &amp;quot;Provides environment $envpath&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
prepend-path            PATH                    $root/bin&lt;br /&gt;
prepend-path            LD_LIBRARY_PATH         $root/lib&lt;br /&gt;
prepend-path            LIBRARY_PATH            $root/lib&lt;br /&gt;
prepend-path            CPATH                   $root/include&lt;br /&gt;
prepend-path            MANPATH                 $root/share/man&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This will allow you to &amp;lt;code&amp;gt;module load SHARED/my_conda_env&amp;lt;/code&amp;gt; and thus have &amp;lt;code&amp;gt;conda&amp;lt;/code&amp;gt; pathed to the currently active environment.&lt;br /&gt;
&lt;br /&gt;
== Jupyter Kernel ==&lt;br /&gt;
&lt;br /&gt;
In order for students to be able to use this environment in jupyter, they will need a kernel definition.&lt;br /&gt;
&lt;br /&gt;
Kernel definitions are usually a separate folder containing, in particular, a file called &amp;lt;code&amp;gt;kernel.json&amp;lt;/code&amp;gt;, plus an icon representing the kernel, and other helper code.&lt;br /&gt;
&lt;br /&gt;
To set this up, create the following folder for them to access:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
mkdir -p /cm/shared/apps/SHARED/my_conda_env/kernel/my_conda_env&lt;br /&gt;
chmod +r /cm/shared/apps/SHARED/my_conda_env/kernel/my_conda_env&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
They will need to copy this folder into their home directory, specifically $HOME/.local/share/jupyter/kernels/&lt;br /&gt;
&lt;br /&gt;
Inside this folder, create the following kernel.json file. Note that the paths will need to match your environment&#039;s path if you&#039;re using a different location!&lt;br /&gt;
&lt;br /&gt;
=== Python Kernel ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
vim /cm/shared/apps/SHARED/my_conda_env/kernel/my_conda_env/kernel.json&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
 &amp;quot;env&amp;quot;: {&lt;br /&gt;
   &amp;quot;PATH&amp;quot;:&lt;br /&gt;
&lt;br /&gt;
&amp;quot;/cm/shared/apps/SHARED/my_conda_env/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin&amp;quot;&lt;br /&gt;
},&lt;br /&gt;
 &amp;quot;language&amp;quot;: &amp;quot;python&amp;quot;,&lt;br /&gt;
 &amp;quot;argv&amp;quot;: [&lt;br /&gt;
  &amp;quot;/cm/shared/apps/SHARED/my_conda_env/bin/python&amp;quot;,&lt;br /&gt;
  &amp;quot;-m&amp;quot;,&lt;br /&gt;
  &amp;quot;ipykernel&amp;quot;,&lt;br /&gt;
  &amp;quot;-f&amp;quot;,&lt;br /&gt;
  &amp;quot;{connection_file}&amp;quot;&lt;br /&gt;
 ],&lt;br /&gt;
 &amp;quot;display_name&amp;quot;: &amp;quot;my_conda_env&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For Python, that&#039;s it - ipykernel is installed automatically with the Anaconda installation.&lt;br /&gt;
&lt;br /&gt;
=== R Kernel ===&lt;br /&gt;
&lt;br /&gt;
For an R kernel, you need to make sure that the IRkernel package is installed. This is the package that is used to communicate from Jupyter to your running R kernel.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
# Install the IRkernel package into the shared environment&lt;br /&gt;
# (the conda package is named r-irkernel)&lt;br /&gt;
/cm/shared/apps/SHARED/my_conda_env/bin/conda install -y r-irkernel&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You&#039;ll also need to create two files: kernel.json and kernel.js. The kernel.js is a helper script that allows Jupyter to communicate with R effectively:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
vim /cm/shared/apps/SHARED/my_conda_env/kernel/my_conda_env/kernel.json&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
 &amp;quot;env&amp;quot;: {&lt;br /&gt;
   &amp;quot;LD_LIBRARY_PATH&amp;quot;:&lt;br /&gt;
      &amp;quot;/cm/shared/apps/SHARED/my_conda_env/lib:/cm/shared/apps/SHARED/my_conda_env/lib64&amp;quot;&lt;br /&gt;
 },&lt;br /&gt;
  &amp;quot;argv&amp;quot;: [&amp;quot;/cm/shared/apps/SHARED/my_conda_env/bin/R&amp;quot;, &amp;quot;--slave&amp;quot;, &amp;quot;-e&amp;quot;, &amp;quot;IRkernel::main()&amp;quot;, &amp;quot;--args&amp;quot;, &amp;quot;{connection_file}&amp;quot;],&lt;br /&gt;
  &amp;quot;display_name&amp;quot;: &amp;quot;MAE50806-AdvMolEcol/Sandbox_R&amp;quot;,&lt;br /&gt;
  &amp;quot;language&amp;quot;: &amp;quot;R&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
vim /cm/shared/apps/SHARED/my_conda_env/kernel/my_conda_env/kernel.js&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
const cmd_key = /Mac/.test(navigator.platform) ? &#039;Cmd&#039; : &#039;Ctrl&#039;&lt;br /&gt;
&lt;br /&gt;
const edit_actions = [&lt;br /&gt;
	{&lt;br /&gt;
		name: &#039;R Assign&#039;,&lt;br /&gt;
		shortcut: &#039;Alt--&#039;,&lt;br /&gt;
		icon: &#039;fa-long-arrow-left&#039;,&lt;br /&gt;
		help: &#039;R: Inserts the left-assign operator (&amp;lt;-)&#039;,&lt;br /&gt;
		handler(cm) {&lt;br /&gt;
			cm.replaceSelection(&#039; &amp;lt;- &#039;)&lt;br /&gt;
		},&lt;br /&gt;
	},&lt;br /&gt;
	{&lt;br /&gt;
		name: &#039;R Pipe&#039;,&lt;br /&gt;
		shortcut: `Shift-${cmd_key}-M`,&lt;br /&gt;
		icon: &#039;fa-angle-right&#039;,&lt;br /&gt;
		help: &#039;R: Inserts the magrittr pipe operator (%&amp;gt;%)&#039;,&lt;br /&gt;
		handler(cm) {&lt;br /&gt;
			cm.replaceSelection(&#039; %&amp;gt;% &#039;)&lt;br /&gt;
		},&lt;br /&gt;
	},&lt;br /&gt;
	{&lt;br /&gt;
		name: &#039;R Help&#039;,&lt;br /&gt;
		shortcut: &#039;F1&#039;,&lt;br /&gt;
		icon: &#039;fa-book&#039;,&lt;br /&gt;
		help: &#039;R: Shows the manpage for the item under the cursor&#039;,&lt;br /&gt;
		handler(cm, cell) {&lt;br /&gt;
			const {anchor, head} = cm.findWordAt(cm.getCursor())&lt;br /&gt;
			const word = cm.getRange(anchor, head)&lt;br /&gt;
			&lt;br /&gt;
			const callbacks = cell.get_callbacks()&lt;br /&gt;
			const options = {silent: false, store_history: false, stop_on_error: true}&lt;br /&gt;
			cell.last_msg_id = cell.notebook.kernel.execute(`help(\`${word}\`)`, callbacks, options)&lt;br /&gt;
		},&lt;br /&gt;
	},&lt;br /&gt;
]&lt;br /&gt;
&lt;br /&gt;
const prefix = &#039;irkernel&#039;&lt;br /&gt;
&lt;br /&gt;
function add_edit_shortcut(notebook, actions, keyboard_manager, edit_action) {&lt;br /&gt;
	const {name, shortcut, icon, help, handler} = edit_action&lt;br /&gt;
	&lt;br /&gt;
	const action = {&lt;br /&gt;
		icon, help,&lt;br /&gt;
		help_index : &#039;zz&#039;,&lt;br /&gt;
		handler: () =&amp;gt; {&lt;br /&gt;
			const cell = notebook.get_selected_cell()&lt;br /&gt;
			handler(cell.code_mirror, cell)&lt;br /&gt;
		},&lt;br /&gt;
	}&lt;br /&gt;
	&lt;br /&gt;
	const full_name = actions.register(action, name, prefix)&lt;br /&gt;
	&lt;br /&gt;
	Jupyter.keyboard_manager.edit_shortcuts.add_shortcut(shortcut, full_name)&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
function render_math(pager, html) {&lt;br /&gt;
	if (!html) return&lt;br /&gt;
	const $container = pager.pager_element.find(&#039;#pager-container&#039;)&lt;br /&gt;
	$container.find(&#039;p[style=&amp;quot;text-align: center;&amp;quot;]&#039;).map((i, e) =&amp;gt;&lt;br /&gt;
		e.outerHTML = `\\[${e.querySelector(&#039;i&#039;).innerHTML}\\]`)&lt;br /&gt;
	$container.find(&#039;i&#039;).map((i, e) =&amp;gt;&lt;br /&gt;
		e.outerHTML = `\\(${e.innerHTML}\\)`)&lt;br /&gt;
	MathJax.Hub.Queue([&#039;Typeset&#039;, MathJax.Hub, $container[0]])&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
define([&#039;base/js/namespace&#039;], ({&lt;br /&gt;
	notebook,&lt;br /&gt;
	actions,&lt;br /&gt;
	keyboard_manager,&lt;br /&gt;
	pager,&lt;br /&gt;
}) =&amp;gt; ({&lt;br /&gt;
	onload() {&lt;br /&gt;
		edit_actions.forEach(a =&amp;gt; add_edit_shortcut(notebook, actions, keyboard_manager, a))&lt;br /&gt;
		&lt;br /&gt;
		pager.events.on(&#039;open_with_text.Pager&#039;, (event, {data: {&#039;text/html&#039;: html}}) =&amp;gt;&lt;br /&gt;
			render_math(pager, html))&lt;br /&gt;
	},&lt;br /&gt;
}))&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Last Steps ==&lt;br /&gt;
&lt;br /&gt;
In order to help your students get their kernel definitions into $HOME/.local/share/jupyter/kernels/, it&#039;s probably a good idea to write a small notebook that does this when executed, or else instruct them to run:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
cp -rv /cm/shared/apps/SHARED/my_conda_env/kernel/* $HOME/.local/share/jupyter/kernels/&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;/div&gt;</summary>
		<author><name>Dawes001</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=Conda_for_teaching&amp;diff=2075</id>
		<title>Conda for teaching</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=Conda_for_teaching&amp;diff=2075"/>
		<updated>2020-05-27T08:59:13Z</updated>

		<summary type="html">&lt;p&gt;Dawes001: /* Jupyter Kernel */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;You are going to give a teaching course, and you need a specific code environment.&lt;br /&gt;
&lt;br /&gt;
== Setup ==&lt;br /&gt;
First - find a good location that everyone can read (and not write). I&#039;d suggest somewhere under /cm/shared/apps/SHARED/ as a starting point, since everyone can access that location. It&#039;s important not to put anything secret there - it&#039;s a public resource, so please bear that in mind.&lt;br /&gt;
&lt;br /&gt;
Next - create a folder for your environment:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
mkdir /cm/shared/apps/SHARED/my_conda_env&lt;br /&gt;
chmod +r /cm/shared/apps/SHARED/my_conda_env&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
You may want to manipulate the permissions for this folder if someone is going to set this up with you. Consider the commands in [[Shared folders]].&lt;br /&gt;
&lt;br /&gt;
Then, install Anaconda into it:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
wget https://repo.anaconda.com/archive/Anaconda3-YEAR.MONTH-Linux-x86_64.sh&lt;br /&gt;
bash Anaconda3-YEAR.MONTH-Linux-x86_64.sh -s -b -p /cm/shared/apps/SHARED/my_conda_env&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now you have a working conda environment in this folder. You can manage it by running /cm/shared/apps/SHARED/my_conda_env/bin/conda, but I would recommend creating a modulefile so that you can load it by default.&lt;br /&gt;
&lt;br /&gt;
Create the following example modulefile in a matching /cm/shared/modulefiles/SHARED/my_conda_env :&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#%Module -*- tcl -*-&lt;br /&gt;
##&lt;br /&gt;
## conda environment modulefile&lt;br /&gt;
##&lt;br /&gt;
&lt;br /&gt;
set                     loadedmodules           [split $::env(LOADEDMODULES) &amp;quot;:&amp;quot;]&lt;br /&gt;
set                     modulepath              [split $ModulesCurrentModulefile &amp;quot;/&amp;quot;]&lt;br /&gt;
set                     envpath                 [lrange $modulepath 4 end]&lt;br /&gt;
&lt;br /&gt;
set                     root                    /cm/shared/apps/[join $envpath &amp;quot;/&amp;quot;]&lt;br /&gt;
&lt;br /&gt;
proc ModulesHelp { } {&lt;br /&gt;
        global envpath&lt;br /&gt;
&lt;br /&gt;
        puts stderr &amp;quot;\tThis module provides the conda environment at $envpath&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
if { [module-info mode] != &amp;quot;whatis&amp;quot; } {&lt;br /&gt;
        puts stderr &amp;quot;[module-info mode] environment $envpath .&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
module-whatis   &amp;quot;Provides environment $envpath&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
prepend-path            PATH                    $root/bin&lt;br /&gt;
prepend-path            LD_LIBRARY_PATH         $root/lib&lt;br /&gt;
prepend-path            LIBRARY_PATH            $root/lib&lt;br /&gt;
prepend-path            CPATH                   $root/include&lt;br /&gt;
prepend-path            MANPATH                 $root/share/man&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This will allow you to `module load SHARED/my_conda_env` and thus have `conda` pathed to the currently active environment.&lt;br /&gt;
&lt;br /&gt;
== Jupyter Kernel ==&lt;br /&gt;
&lt;br /&gt;
In order for students to be able to use this environment in jupyter, they will need a kernel definition.&lt;br /&gt;
&lt;br /&gt;
Kernel definitions are usually a separate folder containing, in particular, a file called `kernel.json`, plus an icon representing the kernel, and other helper code.&lt;br /&gt;
&lt;br /&gt;
To set this up, create the following folder for them to access:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
mkdir -p /cm/shared/apps/SHARED/my_conda_env/kernel/my_conda_env&lt;br /&gt;
chmod +r /cm/shared/apps/SHARED/my_conda_env/kernel/my_conda_env&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
They will need to copy this folder into their home directory, specifically $HOME/.local/share/jupyter/kernels/&lt;br /&gt;
&lt;br /&gt;
Inside this folder, create the following kernel.json file. Note that the paths will need to match your environment&#039;s path if you&#039;re using a different location!&lt;br /&gt;
&lt;br /&gt;
=== Python Kernel ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
vim /cm/shared/apps/SHARED/my_conda_env/kernel/my_conda_env/kernel.json&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
 &amp;quot;env&amp;quot;: {&lt;br /&gt;
   &amp;quot;PATH&amp;quot;:&lt;br /&gt;
&lt;br /&gt;
&amp;quot;/cm/shared/apps/SHARED/my_conda_env/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin&amp;quot;&lt;br /&gt;
},&lt;br /&gt;
 &amp;quot;language&amp;quot;: &amp;quot;python&amp;quot;,&lt;br /&gt;
 &amp;quot;argv&amp;quot;: [&lt;br /&gt;
  &amp;quot;/cm/shared/apps/SHARED/my_conda_env/bin/python&amp;quot;,&lt;br /&gt;
  &amp;quot;-m&amp;quot;,&lt;br /&gt;
  &amp;quot;ipykernel&amp;quot;,&lt;br /&gt;
  &amp;quot;-f&amp;quot;,&lt;br /&gt;
  &amp;quot;{connection_file}&amp;quot;&lt;br /&gt;
 ],&lt;br /&gt;
 &amp;quot;display_name&amp;quot;: &amp;quot;my_conda_env&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For Python, that&#039;s it - ipykernel is installed automatically with the Anaconda installation.&lt;br /&gt;
&lt;br /&gt;
=== R Kernel ===&lt;br /&gt;
&lt;br /&gt;
For an R kernel, you need to make sure that the IRkernel package is installed. This is the package that is used to communicate from Jupyter to your running R kernel.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
# Install the IRkernel package into the shared environment&lt;br /&gt;
# (the conda package is named r-irkernel)&lt;br /&gt;
/cm/shared/apps/SHARED/my_conda_env/bin/conda install -y r-irkernel&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You&#039;ll also need to create two files: kernel.json and kernel.js. The kernel.js is a helper script that allows Jupyter to communicate with R effectively:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
vim /cm/shared/apps/SHARED/my_conda_env/kernel/my_conda_env/kernel.json&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
 &amp;quot;env&amp;quot;: {&lt;br /&gt;
   &amp;quot;LD_LIBRARY_PATH&amp;quot;:&lt;br /&gt;
      &amp;quot;/cm/shared/apps/SHARED/my_conda_env/lib:/cm/shared/apps/SHARED/my_conda_env/lib64&amp;quot;&lt;br /&gt;
 },&lt;br /&gt;
  &amp;quot;argv&amp;quot;: [&amp;quot;/cm/shared/apps/SHARED/my_conda_env/bin/R&amp;quot;, &amp;quot;--slave&amp;quot;, &amp;quot;-e&amp;quot;, &amp;quot;IRkernel::main()&amp;quot;, &amp;quot;--args&amp;quot;, &amp;quot;{connection_file}&amp;quot;],&lt;br /&gt;
  &amp;quot;display_name&amp;quot;: &amp;quot;MAE50806-AdvMolEcol/Sandbox_R&amp;quot;,&lt;br /&gt;
  &amp;quot;language&amp;quot;: &amp;quot;R&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
vim /cm/shared/apps/SHARED/my_conda_env/kernel/my_conda_env/kernel.js&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
const cmd_key = /Mac/.test(navigator.platform) ? &#039;Cmd&#039; : &#039;Ctrl&#039;&lt;br /&gt;
&lt;br /&gt;
const edit_actions = [&lt;br /&gt;
	{&lt;br /&gt;
		name: &#039;R Assign&#039;,&lt;br /&gt;
		shortcut: &#039;Alt--&#039;,&lt;br /&gt;
		icon: &#039;fa-long-arrow-left&#039;,&lt;br /&gt;
		help: &#039;R: Inserts the left-assign operator (&amp;lt;-)&#039;,&lt;br /&gt;
		handler(cm) {&lt;br /&gt;
			cm.replaceSelection(&#039; &amp;lt;- &#039;)&lt;br /&gt;
		},&lt;br /&gt;
	},&lt;br /&gt;
	{&lt;br /&gt;
		name: &#039;R Pipe&#039;,&lt;br /&gt;
		shortcut: `Shift-${cmd_key}-M`,&lt;br /&gt;
		icon: &#039;fa-angle-right&#039;,&lt;br /&gt;
		help: &#039;R: Inserts the magrittr pipe operator (%&amp;gt;%)&#039;,&lt;br /&gt;
		handler(cm) {&lt;br /&gt;
			cm.replaceSelection(&#039; %&amp;gt;% &#039;)&lt;br /&gt;
		},&lt;br /&gt;
	},&lt;br /&gt;
	{&lt;br /&gt;
		name: &#039;R Help&#039;,&lt;br /&gt;
		shortcut: &#039;F1&#039;,&lt;br /&gt;
		icon: &#039;fa-book&#039;,&lt;br /&gt;
		help: &#039;R: Shows the manpage for the item under the cursor&#039;,&lt;br /&gt;
		handler(cm, cell) {&lt;br /&gt;
			const {anchor, head} = cm.findWordAt(cm.getCursor())&lt;br /&gt;
			const word = cm.getRange(anchor, head)&lt;br /&gt;
			&lt;br /&gt;
			const callbacks = cell.get_callbacks()&lt;br /&gt;
			const options = {silent: false, store_history: false, stop_on_error: true}&lt;br /&gt;
			cell.last_msg_id = cell.notebook.kernel.execute(`help(\`${word}\`)`, callbacks, options)&lt;br /&gt;
		},&lt;br /&gt;
	},&lt;br /&gt;
]&lt;br /&gt;
&lt;br /&gt;
const prefix = &#039;irkernel&#039;&lt;br /&gt;
&lt;br /&gt;
function add_edit_shortcut(notebook, actions, keyboard_manager, edit_action) {&lt;br /&gt;
	const {name, shortcut, icon, help, handler} = edit_action&lt;br /&gt;
	&lt;br /&gt;
	const action = {&lt;br /&gt;
		icon, help,&lt;br /&gt;
		help_index : &#039;zz&#039;,&lt;br /&gt;
		handler: () =&amp;gt; {&lt;br /&gt;
			const cell = notebook.get_selected_cell()&lt;br /&gt;
			handler(cell.code_mirror, cell)&lt;br /&gt;
		},&lt;br /&gt;
	}&lt;br /&gt;
	&lt;br /&gt;
	const full_name = actions.register(action, name, prefix)&lt;br /&gt;
	&lt;br /&gt;
	Jupyter.keyboard_manager.edit_shortcuts.add_shortcut(shortcut, full_name)&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
function render_math(pager, html) {&lt;br /&gt;
	if (!html) return&lt;br /&gt;
	const $container = pager.pager_element.find(&#039;#pager-container&#039;)&lt;br /&gt;
	$container.find(&#039;p[style=&amp;quot;text-align: center;&amp;quot;]&#039;).map((i, e) =&amp;gt;&lt;br /&gt;
		e.outerHTML = `\\[${e.querySelector(&#039;i&#039;).innerHTML}\\]`)&lt;br /&gt;
	$container.find(&#039;i&#039;).map((i, e) =&amp;gt;&lt;br /&gt;
		e.outerHTML = `\\(${e.innerHTML}\\)`)&lt;br /&gt;
	MathJax.Hub.Queue([&#039;Typeset&#039;, MathJax.Hub, $container[0]])&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
define([&#039;base/js/namespace&#039;], ({&lt;br /&gt;
	notebook,&lt;br /&gt;
	actions,&lt;br /&gt;
	keyboard_manager,&lt;br /&gt;
	pager,&lt;br /&gt;
}) =&amp;gt; ({&lt;br /&gt;
	onload() {&lt;br /&gt;
		edit_actions.forEach(a =&amp;gt; add_edit_shortcut(notebook, actions, keyboard_manager, a))&lt;br /&gt;
		&lt;br /&gt;
		pager.events.on(&#039;open_with_text.Pager&#039;, (event, {data: {&#039;text/html&#039;: html}}) =&amp;gt;&lt;br /&gt;
			render_math(pager, html))&lt;br /&gt;
	},&lt;br /&gt;
}))&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Last Steps ==&lt;br /&gt;
&lt;br /&gt;
In order to help your students get their kernel definitions into $HOME/.local/share/jupyter/kernels/, it&#039;s probably a good idea to write a small notebook that does this when executed, or else instruct them to run:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
cp -rv /cm/shared/apps/SHARED/my_conda_env/kernel/* $HOME/.local/share/jupyter/kernels/&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;/div&gt;</summary>
		<author><name>Dawes001</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=Conda_for_teaching&amp;diff=2074</id>
		<title>Conda for teaching</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=Conda_for_teaching&amp;diff=2074"/>
		<updated>2020-05-27T08:45:06Z</updated>

		<summary type="html">&lt;p&gt;Dawes001: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;You are going to give a teaching course, and you need a specific code environment.&lt;br /&gt;
&lt;br /&gt;
== Setup ==&lt;br /&gt;
First - find a good location that everyone can read (and not write). I&#039;d suggest somewhere under /cm/shared/apps/SHARED/ as a starting point, since everyone can access that location. It&#039;s important not to put anything secret there - it&#039;s a public resource, so please bear that in mind.&lt;br /&gt;
&lt;br /&gt;
Next - create a folder for your environment:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
mkdir /cm/shared/apps/SHARED/my_conda_env&lt;br /&gt;
chmod +r /cm/shared/apps/SHARED/my_conda_env&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
You may want to manipulate the permissions for this folder if someone is going to set this up with you. Consider the commands in [[Shared folders]].&lt;br /&gt;
&lt;br /&gt;
Then, install Anaconda into it:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
wget https://repo.anaconda.com/archive/Anaconda3-YEAR.MONTH-Linux-x86_64.sh&lt;br /&gt;
bash Anaconda3-YEAR.MONTH-Linux-x86_64.sh -s -b -p /cm/shared/apps/SHARED/my_conda_env&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now you have a working conda environment in this folder. You can manage it by running /cm/shared/apps/SHARED/my_conda_env/bin/conda, but I would recommend creating a modulefile so that you can load it by default.&lt;br /&gt;
&lt;br /&gt;
Create the following example modulefile in a matching /cm/shared/modulefiles/SHARED/my_conda_env :&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#%Module -*- tcl -*-&lt;br /&gt;
##&lt;br /&gt;
## conda environment modulefile&lt;br /&gt;
##&lt;br /&gt;
&lt;br /&gt;
set                     loadedmodules           [split $::env(LOADEDMODULES) &amp;quot;:&amp;quot;]&lt;br /&gt;
set                     modulepath              [split $ModulesCurrentModulefile &amp;quot;/&amp;quot;]&lt;br /&gt;
set                     envpath                 [lrange $modulepath 4 end]&lt;br /&gt;
&lt;br /&gt;
set                     root                    /cm/shared/apps/[join $envpath &amp;quot;/&amp;quot;]&lt;br /&gt;
&lt;br /&gt;
proc ModulesHelp { } {&lt;br /&gt;
        global envpath&lt;br /&gt;
&lt;br /&gt;
        puts stderr &amp;quot;\tThis module provides the conda environment at $envpath&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
if { [module-info mode] != &amp;quot;whatis&amp;quot; } {&lt;br /&gt;
        puts stderr &amp;quot;[module-info mode] environment $envpath .&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
module-whatis   &amp;quot;Provides environment $envpath&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
prepend-path            PATH                    $root/bin&lt;br /&gt;
prepend-path            LD_LIBRARY_PATH         $root/lib&lt;br /&gt;
prepend-path            LIBRARY_PATH            $root/lib&lt;br /&gt;
prepend-path            CPATH                   $root/include&lt;br /&gt;
prepend-path            MANPATH                 $root/share/man&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This will allow you to `module load SHARED/my_conda_env` and thus have `conda` pathed to the currently active environment.&lt;br /&gt;
&lt;br /&gt;
== Jupyter Kernel ==&lt;br /&gt;
&lt;br /&gt;
In order for students to be able to use this environment in jupyter, they will need a kernel definition.&lt;br /&gt;
&lt;br /&gt;
Kernel definitions are usually a separate folder containing, in particular, a file called `kernel.json`, plus an icon that is displayed to represent the kernel, and other helper code.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This is something they will need to copy into place in their home directory, specifically under $HOME/.local/share/jupyter/kernels/&lt;br /&gt;
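&lt;br /&gt;
A sketch of putting it in place, assuming the kernel folder is called my_conda_env and kernel.json is the file shown below:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
mkdir -p $HOME/.local/share/jupyter/kernels/my_conda_env&lt;br /&gt;
cp kernel.json $HOME/.local/share/jupyter/kernels/my_conda_env/&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;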
&lt;br /&gt;
kernel.json&lt;br /&gt;
&amp;lt;source lang=&#039;javascript&#039;&amp;gt;&lt;br /&gt;
{&lt;br /&gt;
 &amp;quot;env&amp;quot;: {&lt;br /&gt;
  &amp;quot;PATH&amp;quot;: &amp;quot;/cm/shared/apps/SHARED/my_conda_env/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin&amp;quot;&lt;br /&gt;
 },&lt;br /&gt;
 &amp;quot;language&amp;quot;: &amp;quot;python&amp;quot;,&lt;br /&gt;
 &amp;quot;argv&amp;quot;: [&lt;br /&gt;
  &amp;quot;/cm/shared/apps/SHARED/my_conda_env/bin/python&amp;quot;,&lt;br /&gt;
  &amp;quot;-m&amp;quot;,&lt;br /&gt;
  &amp;quot;ipykernel&amp;quot;,&lt;br /&gt;
  &amp;quot;-f&amp;quot;,&lt;br /&gt;
  &amp;quot;{connection_file}&amp;quot;&lt;br /&gt;
 ],&lt;br /&gt;
 &amp;quot;display_name&amp;quot;: &amp;quot;my_conda_env&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
kernel.js (optional: it customises the notebook front end for this kernel; the example below is the one used by IRkernel and adds R-specific editor shortcuts)&lt;br /&gt;
&amp;lt;source lang=&#039;javascript&#039;&amp;gt;&lt;br /&gt;
const cmd_key = /Mac/.test(navigator.platform) ? &#039;Cmd&#039; : &#039;Ctrl&#039;&lt;br /&gt;
&lt;br /&gt;
const edit_actions = [&lt;br /&gt;
	{&lt;br /&gt;
		name: &#039;R Assign&#039;,&lt;br /&gt;
		shortcut: &#039;Alt--&#039;,&lt;br /&gt;
		icon: &#039;fa-long-arrow-left&#039;,&lt;br /&gt;
		help: &#039;R: Inserts the left-assign operator (&amp;lt;-)&#039;,&lt;br /&gt;
		handler(cm) {&lt;br /&gt;
			cm.replaceSelection(&#039; &amp;lt;- &#039;)&lt;br /&gt;
		},&lt;br /&gt;
	},&lt;br /&gt;
	{&lt;br /&gt;
		name: &#039;R Pipe&#039;,&lt;br /&gt;
		shortcut: `Shift-${cmd_key}-M`,&lt;br /&gt;
		icon: &#039;fa-angle-right&#039;,&lt;br /&gt;
		help: &#039;R: Inserts the magrittr pipe operator (%&amp;gt;%)&#039;,&lt;br /&gt;
		handler(cm) {&lt;br /&gt;
			cm.replaceSelection(&#039; %&amp;gt;% &#039;)&lt;br /&gt;
		},&lt;br /&gt;
	},&lt;br /&gt;
	{&lt;br /&gt;
		name: &#039;R Help&#039;,&lt;br /&gt;
		shortcut: &#039;F1&#039;,&lt;br /&gt;
		icon: &#039;fa-book&#039;,&lt;br /&gt;
		help: &#039;R: Shows the manpage for the item under the cursor&#039;,&lt;br /&gt;
		handler(cm, cell) {&lt;br /&gt;
			const {anchor, head} = cm.findWordAt(cm.getCursor())&lt;br /&gt;
			const word = cm.getRange(anchor, head)&lt;br /&gt;
			&lt;br /&gt;
			const callbacks = cell.get_callbacks()&lt;br /&gt;
			const options = {silent: false, store_history: false, stop_on_error: true}&lt;br /&gt;
			cell.last_msg_id = cell.notebook.kernel.execute(`help(\`${word}\`)`, callbacks, options)&lt;br /&gt;
		},&lt;br /&gt;
	},&lt;br /&gt;
]&lt;br /&gt;
&lt;br /&gt;
const prefix = &#039;irkernel&#039;&lt;br /&gt;
&lt;br /&gt;
function add_edit_shortcut(notebook, actions, keyboard_manager, edit_action) {&lt;br /&gt;
	const {name, shortcut, icon, help, handler} = edit_action&lt;br /&gt;
	&lt;br /&gt;
	const action = {&lt;br /&gt;
		icon, help,&lt;br /&gt;
		help_index : &#039;zz&#039;,&lt;br /&gt;
		handler: () =&amp;gt; {&lt;br /&gt;
			const cell = notebook.get_selected_cell()&lt;br /&gt;
			handler(cell.code_mirror, cell)&lt;br /&gt;
		},&lt;br /&gt;
	}&lt;br /&gt;
	&lt;br /&gt;
	const full_name = actions.register(action, name, prefix)&lt;br /&gt;
	&lt;br /&gt;
	Jupyter.keyboard_manager.edit_shortcuts.add_shortcut(shortcut, full_name)&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
function render_math(pager, html) {&lt;br /&gt;
	if (!html) return&lt;br /&gt;
	const $container = pager.pager_element.find(&#039;#pager-container&#039;)&lt;br /&gt;
	$container.find(&#039;p[style=&amp;quot;text-align: center;&amp;quot;]&#039;).map((i, e) =&amp;gt;&lt;br /&gt;
		e.outerHTML = `\\[${e.querySelector(&#039;i&#039;).innerHTML}\\]`)&lt;br /&gt;
	$container.find(&#039;i&#039;).map((i, e) =&amp;gt;&lt;br /&gt;
		e.outerHTML = `\\(${e.innerHTML}\\)`)&lt;br /&gt;
	MathJax.Hub.Queue([&#039;Typeset&#039;, MathJax.Hub, $container[0]])&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
define([&#039;base/js/namespace&#039;], ({&lt;br /&gt;
	notebook,&lt;br /&gt;
	actions,&lt;br /&gt;
	keyboard_manager,&lt;br /&gt;
	pager,&lt;br /&gt;
}) =&amp;gt; ({&lt;br /&gt;
	onload() {&lt;br /&gt;
		edit_actions.forEach(a =&amp;gt; add_edit_shortcut(notebook, actions, keyboard_manager, a))&lt;br /&gt;
		&lt;br /&gt;
		pager.events.on(&#039;open_with_text.Pager&#039;, (event, {data: {&#039;text/html&#039;: html}}) =&amp;gt;&lt;br /&gt;
			render_math(pager, html))&lt;br /&gt;
	},&lt;br /&gt;
}))&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;/div&gt;</summary>
		<author><name>Dawes001</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=Conda_for_teaching&amp;diff=2073</id>
		<title>Conda for teaching</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=Conda_for_teaching&amp;diff=2073"/>
		<updated>2020-05-27T08:40:47Z</updated>

		<summary type="html">&lt;p&gt;Dawes001: /* Jupyter Kernel */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;You are going to give a teaching course, and you need a specific code environment.&lt;br /&gt;
&lt;br /&gt;
== Setup ==&lt;br /&gt;
First - find a good location that everyone can read (and not write). I&#039;d suggest somewhere under /cm/shared/apps/SHARED/ as a starting point - this allows everyone to access this location. It&#039;s important not to put anything secret there - it&#039;s a public resource, so please bear that in mind.&lt;br /&gt;
&lt;br /&gt;
Next - create a folder for your environment:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
mkdir /cm/shared/apps/SHARED/my_conda_env&lt;br /&gt;
chmod a+rx /cm/shared/apps/SHARED/my_conda_env   # directories need execute permission too, or others cannot enter them&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
You may want to manipulate the permissions for this folder if someone is going to set this up with you. Consider the commands in [[Shared Folders]].&lt;br /&gt;
&lt;br /&gt;
Then, install Anaconda into it:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
wget https://repo.anaconda.com/archive/Anaconda3-YEAR.MONTH-Linux-x86_64.sh&lt;br /&gt;
bash Anaconda3-YEAR.MONTH-Linux-x86_64.sh -b -s -f -p /cm/shared/apps/SHARED/my_conda_env   # -b: batch mode, -f: allow the existing prefix directory, -s: skip pre/post-link scripts&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now you have a working conda installation in this folder. You can manage it directly by running /cm/shared/apps/SHARED/my_conda_env/bin/conda, or, as I would recommend, create a modulefile so that you can use it as the default.&lt;br /&gt;
&lt;br /&gt;
Create the following example modulefile in a matching /cm/shared/modulefiles/SHARED/my_conda_env :&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#%Module -*- tcl -*-&lt;br /&gt;
##&lt;br /&gt;
## conda environment modulefile&lt;br /&gt;
##&lt;br /&gt;
&lt;br /&gt;
set                     loadedmodules           [split $::env(LOADEDMODULES) &amp;quot;:&amp;quot;]&lt;br /&gt;
set                     modulepath              [split $ModulesCurrentModulefile &amp;quot;/&amp;quot;]&lt;br /&gt;
set                     envpath                 [lrange $modulepath 4 end]&lt;br /&gt;
&lt;br /&gt;
set                     root                    /cm/shared/apps/[join $envpath &amp;quot;/&amp;quot;]&lt;br /&gt;
&lt;br /&gt;
proc ModulesHelp { } {&lt;br /&gt;
        global envpath&lt;br /&gt;
&lt;br /&gt;
        puts stderr &amp;quot;\tThis module provides the conda environment at $envpath&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
if { [module-info mode] != &amp;quot;whatis&amp;quot; } {&lt;br /&gt;
        puts stderr &amp;quot;[module-info mode] environment $envpath .&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
module-whatis   &amp;quot;Provides environment $envpath&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
prepend-path            PATH                    $root/bin&lt;br /&gt;
prepend-path            LD_LIBRARY_PATH         $root/lib&lt;br /&gt;
prepend-path            LIBRARY_PATH            $root/lib&lt;br /&gt;
prepend-path            CPATH                   $root/include&lt;br /&gt;
prepend-path            MANPATH                 $root/share/man&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This will allow you to `module load SHARED/my_conda_env` and thus have `conda` (and everything else in the environment&#039;s bin directory) first on your PATH.&lt;br /&gt;
&lt;br /&gt;
== Jupyter Kernel ==&lt;br /&gt;
&lt;br /&gt;
In order for students to be able to use this environment in jupyter, they will need a kernel definition.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{&lt;br /&gt;
 &amp;quot;env&amp;quot;: {&lt;br /&gt;
  &amp;quot;PATH&amp;quot;: &amp;quot;/cm/shared/apps/SHARED/my_conda_env/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin&amp;quot;&lt;br /&gt;
 },&lt;br /&gt;
 &amp;quot;language&amp;quot;: &amp;quot;python&amp;quot;,&lt;br /&gt;
 &amp;quot;argv&amp;quot;: [&lt;br /&gt;
  &amp;quot;/cm/shared/apps/SHARED/my_conda_env/bin/python&amp;quot;,&lt;br /&gt;
  &amp;quot;-m&amp;quot;,&lt;br /&gt;
  &amp;quot;ipykernel&amp;quot;,&lt;br /&gt;
  &amp;quot;-f&amp;quot;,&lt;br /&gt;
  &amp;quot;{connection_file}&amp;quot;&lt;br /&gt;
 ],&lt;br /&gt;
 &amp;quot;display_name&amp;quot;: &amp;quot;my_conda_env&amp;quot;&lt;br /&gt;
}&lt;/div&gt;</summary>
		<author><name>Dawes001</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=Conda_for_teaching&amp;diff=2072</id>
		<title>Conda for teaching</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=Conda_for_teaching&amp;diff=2072"/>
		<updated>2020-05-27T08:38:31Z</updated>

		<summary type="html">&lt;p&gt;Dawes001: Created page with &amp;quot;You are going to give a teaching course, and you need a specific code environment.  == Setup == First - find a good location that everyone can read (and not write). I&amp;#039;d sugges...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;You are going to give a teaching course, and you need a specific code environment.&lt;br /&gt;
&lt;br /&gt;
== Setup ==&lt;br /&gt;
First - find a good location that everyone can read (and not write). I&#039;d suggest somewhere under /cm/shared/apps/SHARED/ as a starting point - this allows everyone to access this location. It&#039;s important not to put anything secret there - it&#039;s a public resource, so please bear that in mind.&lt;br /&gt;
&lt;br /&gt;
Next - create a folder for your environment:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
mkdir /cm/shared/apps/SHARED/my_conda_env&lt;br /&gt;
chmod a+rx /cm/shared/apps/SHARED/my_conda_env   # directories need execute permission too, or others cannot enter them&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
You may want to manipulate the permissions for this folder if someone is going to set this up with you. Consider the commands in [[Shared Folders]].&lt;br /&gt;
&lt;br /&gt;
Then, install Anaconda into it:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
wget https://repo.anaconda.com/archive/Anaconda3-YEAR.MONTH-Linux-x86_64.sh&lt;br /&gt;
bash Anaconda3-YEAR.MONTH-Linux-x86_64.sh -b -s -f -p /cm/shared/apps/SHARED/my_conda_env   # -b: batch mode, -f: allow the existing prefix directory, -s: skip pre/post-link scripts&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now you have a working conda installation in this folder. You can manage it directly by running /cm/shared/apps/SHARED/my_conda_env/bin/conda, or, as I would recommend, create a modulefile so that you can use it as the default.&lt;br /&gt;
&lt;br /&gt;
Create the following example modulefile in a matching /cm/shared/modulefiles/SHARED/my_conda_env :&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#%Module -*- tcl -*-&lt;br /&gt;
##&lt;br /&gt;
## conda environment modulefile&lt;br /&gt;
##&lt;br /&gt;
&lt;br /&gt;
set                     loadedmodules           [split $::env(LOADEDMODULES) &amp;quot;:&amp;quot;]&lt;br /&gt;
set                     modulepath              [split $ModulesCurrentModulefile &amp;quot;/&amp;quot;]&lt;br /&gt;
set                     envpath                 [lrange $modulepath 4 end]&lt;br /&gt;
&lt;br /&gt;
set                     root                    /cm/shared/apps/[join $envpath &amp;quot;/&amp;quot;]&lt;br /&gt;
&lt;br /&gt;
proc ModulesHelp { } {&lt;br /&gt;
        global envpath&lt;br /&gt;
&lt;br /&gt;
        puts stderr &amp;quot;\tThis module provides the conda environment at $envpath&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
if { [module-info mode] != &amp;quot;whatis&amp;quot; } {&lt;br /&gt;
        puts stderr &amp;quot;[module-info mode] environment $envpath .&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
module-whatis   &amp;quot;Provides environment $envpath&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
prepend-path            PATH                    $root/bin&lt;br /&gt;
prepend-path            LD_LIBRARY_PATH         $root/lib&lt;br /&gt;
prepend-path            LIBRARY_PATH            $root/lib&lt;br /&gt;
prepend-path            CPATH                   $root/include&lt;br /&gt;
prepend-path            MANPATH                 $root/share/man&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This will allow you to `module load SHARED/my_conda_env` and thus have `conda` (and everything else in the environment&#039;s bin directory) first on your PATH.&lt;br /&gt;
&lt;br /&gt;
== Jupyter Kernel ==&lt;br /&gt;
&lt;br /&gt;
In order for students to be able to use this environment in jupyter, they will need a kernel definition.&lt;/div&gt;</summary>
		<author><name>Dawes001</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=Tariffs&amp;diff=2068</id>
		<title>Tariffs</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=Tariffs&amp;diff=2068"/>
		<updated>2020-02-17T08:46:03Z</updated>

		<summary type="html">&lt;p&gt;Dawes001: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Computing: Calculations (cores)==&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Queue&lt;br /&gt;
!CPU core hour&lt;br /&gt;
!GB memory hour&lt;br /&gt;
|-&lt;br /&gt;
|Standard queue&lt;br /&gt;
|€ 0.0150&lt;br /&gt;
|€ 0.0015&lt;br /&gt;
|-&lt;br /&gt;
|High priority queue&lt;br /&gt;
|€ 0.0200&lt;br /&gt;
|€ 0.0020&lt;br /&gt;
|-&lt;br /&gt;
|Low priority queue&lt;br /&gt;
|€ 0.0100&lt;br /&gt;
|€ 0.0010&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Computing: GPU Use==&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Tariff per device per hour (gpu/hour)&lt;br /&gt;
|-&lt;br /&gt;
|€ 0.3000&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Storage ==&lt;br /&gt;
Tariffs per year per TB&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Lustre Backup&lt;br /&gt;
!Lustre Nobackup&lt;br /&gt;
!Lustre Scratch&lt;br /&gt;
!Home-dir&lt;br /&gt;
!Archive&lt;br /&gt;
|-&lt;br /&gt;
|€ 175&lt;br /&gt;
|€ 125&lt;br /&gt;
|€ 125&lt;br /&gt;
|€ 175&lt;br /&gt;
|€ 125&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Reservations ==&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Tariff per node per day (node/day)&lt;br /&gt;
|-&lt;br /&gt;
|€ 30&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Notes==&lt;br /&gt;
&lt;br /&gt;
If you are a member of a group with a commitment, these costs are deducted from that commitment. Typically we are fairly lax about enforcing limits: only once you get to around 150% of your commitment will we consider taking action (mainly coming over to discuss things).&lt;br /&gt;
&lt;br /&gt;
== Example ==&lt;br /&gt;
&lt;br /&gt;
You are running a job that needs 4 cores, 32G of RAM and runs for 90 minutes in the std quality. To run this, you over-request resources slightly, and execute it in a job that requests 4 CPUs, 40G of RAM and a time limit of 3 hours. Your job terminates early, so you are billed for the requested CPUs and memory over the actual elapsed time of 1.5 hours, not over the 3-hour limit. Thus, your costs are:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
4 * 0.015 * 1.5 = 0.09 EUR for the CPU&lt;br /&gt;
&lt;br /&gt;
40 * 0.0015 * 1.5 = 0.09 EUR for the memory&lt;br /&gt;
&lt;br /&gt;
Total: 0.18 EUR&lt;/div&gt;</summary>
		<author><name>Dawes001</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=Setting_up_Python_virtualenv&amp;diff=2067</id>
		<title>Setting up Python virtualenv</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=Setting_up_Python_virtualenv&amp;diff=2067"/>
		<updated>2019-12-11T15:32:51Z</updated>

		<summary type="html">&lt;p&gt;Dawes001: /* Virtualenv kernels in Jupyter */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;With the many Python packages available, which often conflict or are required in different versions depending on the application, installing and controlling packages and versions is not always easy. Moreover, many packages are used only occasionally, so it is questionable whether a system administrator of a centralized server system or a High Performance Compute (HPC) infrastructure can be expected to resolve every issue posed by users of the infrastructure. Even on a local system with full administrative rights, managing versions, dependencies, and package collisions is often very difficult. The solution is to use a virtual environment, in which a specific set of packages can be installed. As many different virtual environments can be created, and used side by side, as is necessary.&lt;br /&gt;
&lt;br /&gt;
NOTE: as of Python 3.3 virtual environment support is built-in. See this page for an [[virtual_environment_Python_3.4_or_higher | alternative set-up of your virtual environment if using Python 3.4 or higher]].&lt;br /&gt;
&lt;br /&gt;
== Creating a new virtual environment ==&lt;br /&gt;
It is assumed that the appropriate &amp;lt;code&amp;gt;virtualenv&amp;lt;/code&amp;gt; executable for the Python version of choice is installed. A new virtual environment, in this case called &amp;lt;code&amp;gt;newenv&amp;lt;/code&amp;gt; is created like so:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
module load python/my-favourite-version   # e.g. python/2.7.12&lt;br /&gt;
virtualenv newenv&lt;br /&gt;
# or, for Python 3.4 or higher:&lt;br /&gt;
pyvenv newenv&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When the new environment is created, one will see a message similar to this:&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;  New python executable in newenv/bin/python3&lt;br /&gt;
  Also creating executable in newenv/bin/python&lt;br /&gt;
  Installing Setuptools.........................................................................done.&lt;br /&gt;
  Installing Pip................................................................................done.&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Activating a virtual environment ==&lt;br /&gt;
Once the environment is created, each time the environment needs to be activated, the following command needs to be issued:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
source newenv/bin/activate&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
This assumes that the folder that contains the virtual environment files (in this case called &amp;lt;code&amp;gt;newenv&amp;lt;/code&amp;gt;) is in the present working directory.&lt;br /&gt;
When working in the virtual environment, the environment name is shown between brackets in front of the &amp;lt;code&amp;gt;user-host-prompt&amp;lt;/code&amp;gt; string.&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;  (newenv)user@host:~$&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Installing modules on the virtual environment ==&lt;br /&gt;
Installing modules is the same as usual. The difference is that modules end up in &amp;lt;code&amp;gt;/path/to/virtenv/lib&amp;lt;/code&amp;gt;, which may live somewhere in your home directory. When working from the virtual environment, the default &amp;lt;code&amp;gt;pip&amp;lt;/code&amp;gt; will belong to the Python version that is currently active. This means that the executables in &amp;lt;code&amp;gt;/path/to/virtenv/bin&amp;lt;/code&amp;gt; are in fact first in the &amp;lt;code&amp;gt;$PATH&amp;lt;/code&amp;gt;.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
pip install numpy&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Similarly, installing packages from source works exactly the same as usual.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
python setup.py install&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Deactivating a virtual environment ==&lt;br /&gt;
Quitting a virtual environment can be done by using the command &amp;lt;code&amp;gt;deactivate&amp;lt;/code&amp;gt;, which was loaded using the &amp;lt;code&amp;gt;source&amp;lt;/code&amp;gt; command upon activating the virtual environment.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
deactivate&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Virtualenv kernels in Jupyter ==&lt;br /&gt;
Want your own virtualenv kernel in a notebook? This can be done by making your own kernel specifications:&lt;br /&gt;
&lt;br /&gt;
* Make sure you have the ipykernel module in your venv. Activate it and pip install it:&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;source ~/path/to/my/virtualenv/bin/activate &amp;amp;&amp;amp; pip install ipykernel&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
* Create the following directory path in your homedir if it doesn&#039;t already exist:&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;mkdir -p ~/.local/share/jupyter/kernels/&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
* Think of a nice descriptive name that doesn&#039;t clash with one of the already present kernels. I&#039;ll use &#039;testing&#039;. Create this folder:&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;mkdir ~/.local/share/jupyter/kernels/testing/&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
* Add this file to this folder:&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;vi ~/.local/share/jupyter/kernels/testing/kernel.json &lt;br /&gt;
{&lt;br /&gt;
 &amp;quot;language&amp;quot;: &amp;quot;python&amp;quot;,&lt;br /&gt;
 &amp;quot;argv&amp;quot;: [&lt;br /&gt;
  &amp;quot;/home/myhome/path/to/my/virtualenv/bin/python&amp;quot;,&lt;br /&gt;
  &amp;quot;-m&amp;quot;,&lt;br /&gt;
  &amp;quot;ipykernel&amp;quot;,&lt;br /&gt;
  &amp;quot;-f&amp;quot;,&lt;br /&gt;
  &amp;quot;{connection_file}&amp;quot;&lt;br /&gt;
 ],&lt;br /&gt;
 &amp;quot;display_name&amp;quot;: &amp;quot;testing&amp;quot;&lt;br /&gt;
}&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
* Reload the JupyterHub page; &#039;testing&#039; should now appear in your kernels list.&lt;br /&gt;
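&lt;br /&gt;
You can also verify from the command line that Jupyter sees the new kernel (assuming jupyter is on your PATH):&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
jupyter kernelspec list   # &#039;testing&#039; should be listed with its path&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;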
&lt;br /&gt;
You can do more complex things with this, such as construct your own Spark environment. This relies on having the module findspark installed:&lt;br /&gt;
 &amp;lt;nowiki&amp;gt; vi ~/.local/share/jupyter/kernels/mysparkkernel/kernel.json &lt;br /&gt;
{&lt;br /&gt;
 &amp;quot;language&amp;quot;: &amp;quot;python&amp;quot;,&lt;br /&gt;
 &amp;quot;env&amp;quot;: {&lt;br /&gt;
   &amp;quot;SPARK_HOME&amp;quot;:&lt;br /&gt;
     &amp;quot;/cm/shared/apps/spark/my-spark-version&amp;quot;&lt;br /&gt;
 },&lt;br /&gt;
 &amp;quot;argv&amp;quot;: [&lt;br /&gt;
  &amp;quot;/home/myhome/my/spark/venv/bin/python&amp;quot;,&lt;br /&gt;
  &amp;quot;-m&amp;quot;,&lt;br /&gt;
  &amp;quot;ipykernel&amp;quot;,&lt;br /&gt;
  &amp;quot;-c&amp;quot;, &amp;quot;import findspark; findspark.init()&amp;quot;,&lt;br /&gt;
  &amp;quot;-f&amp;quot;,&lt;br /&gt;
  &amp;quot;{connection_file}&amp;quot;&lt;br /&gt;
 ],&lt;br /&gt;
 &amp;quot;display_name&amp;quot;: &amp;quot;My Spark kernel&amp;quot;&lt;br /&gt;
}&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
(You&#039;ll want to make sure your spark cluster has the same environment - start it after activating this venv inside your sbatch script)&lt;br /&gt;
&lt;br /&gt;
== Make IPython work under virtualenv ==&lt;br /&gt;
IPython may not work initially under a virtual environment. It may produce an error message like below:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;    File &amp;quot;/usr/bin/ipython&amp;quot;, line 11&lt;br /&gt;
    print &amp;quot;Could not start qtconsole. Please install ipython-qtconsole&amp;quot;&lt;br /&gt;
                                                                      ^&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This can be resolved by adding a soft link with the name &amp;lt;code&amp;gt;ipython&amp;lt;/code&amp;gt; to the &amp;lt;code&amp;gt;bin&amp;lt;/code&amp;gt; directory in the virtual environment folder.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
ln -s /path/to/virtenv/bin/ipython3 /path/to/virtenv/bin/ipython&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== External links ==&lt;br /&gt;
* [https://pypi.python.org/pypi/virtualenv virtualenv on PyPI]&lt;br /&gt;
* [http://cemcfarland.wordpress.com/2013/03/09/getting-ipython3-working-inside-your-virtualenv/ Solving the IPython hiccup under a virtual environment]&lt;/div&gt;</summary>
		<author><name>Dawes001</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=Setting_up_Python_virtualenv&amp;diff=2066</id>
		<title>Setting up Python virtualenv</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=Setting_up_Python_virtualenv&amp;diff=2066"/>
		<updated>2019-12-11T15:32:23Z</updated>

		<summary type="html">&lt;p&gt;Dawes001: /* Virtualenv kernels in Jupyter */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;With the many Python packages available, which often conflict or are required in different versions depending on the application, installing and controlling packages and versions is not always easy. Moreover, many packages are used only occasionally, so it is questionable whether a system administrator of a centralized server system or a High Performance Compute (HPC) infrastructure can be expected to resolve every issue posed by users of the infrastructure. Even on a local system with full administrative rights, managing versions, dependencies, and package collisions is often very difficult. The solution is to use a virtual environment, in which a specific set of packages can be installed. As many different virtual environments can be created, and used side by side, as is necessary.&lt;br /&gt;
&lt;br /&gt;
NOTE: as of Python 3.3 virtual environment support is built-in. See this page for an [[virtual_environment_Python_3.4_or_higher | alternative set-up of your virtual environment if using Python 3.4 or higher]].&lt;br /&gt;
&lt;br /&gt;
== Creating a new virtual environment ==&lt;br /&gt;
It is assumed that the appropriate &amp;lt;code&amp;gt;virtualenv&amp;lt;/code&amp;gt; executable for the Python version of choice is installed. A new virtual environment, in this case called &amp;lt;code&amp;gt;newenv&amp;lt;/code&amp;gt; is created like so:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
module load python/my-favourite-version   # e.g. python/2.7.12&lt;br /&gt;
virtualenv newenv&lt;br /&gt;
# or, for Python 3.4 or higher:&lt;br /&gt;
pyvenv newenv&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When the new environment is created, one will see a message similar to this:&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;  New python executable in newenv/bin/python3&lt;br /&gt;
  Also creating executable in newenv/bin/python&lt;br /&gt;
  Installing Setuptools.........................................................................done.&lt;br /&gt;
  Installing Pip................................................................................done.&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Activating a virtual environment ==&lt;br /&gt;
Once the environment is created, each time the environment needs to be activated, the following command needs to be issued:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
source newenv/bin/activate&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
This assumes that the folder that contains the virtual environment files (in this case called &amp;lt;code&amp;gt;newenv&amp;lt;/code&amp;gt;) is in the present working directory.&lt;br /&gt;
When working in the virtual environment, the environment name is shown between brackets in front of the &amp;lt;code&amp;gt;user-host-prompt&amp;lt;/code&amp;gt; string.&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;  (newenv)user@host:~$&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Installing modules on the virtual environment ==&lt;br /&gt;
Installing modules is the same as usual. The difference is that modules end up in &amp;lt;code&amp;gt;/path/to/virtenv/lib&amp;lt;/code&amp;gt;, which may live somewhere in your home directory. When working from the virtual environment, the default &amp;lt;code&amp;gt;pip&amp;lt;/code&amp;gt; will belong to the Python version that is currently active. This means that the executables in &amp;lt;code&amp;gt;/path/to/virtenv/bin&amp;lt;/code&amp;gt; are in fact first in the &amp;lt;code&amp;gt;$PATH&amp;lt;/code&amp;gt;.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
pip install numpy&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Similarly, installing packages from source works exactly the same as usual.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
python setup.py install&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Deactivating a virtual environment ==&lt;br /&gt;
Quitting a virtual environment can be done by using the command &amp;lt;code&amp;gt;deactivate&amp;lt;/code&amp;gt;, which was loaded using the &amp;lt;code&amp;gt;source&amp;lt;/code&amp;gt; command upon activating the virtual environment.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
deactivate&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Virtualenv kernels in Jupyter ==&lt;br /&gt;
Want your own virtualenv kernel in a notebook? This can be done by making your own kernel specifications:&lt;br /&gt;
&lt;br /&gt;
* Make sure you have the ipykernel module in your venv. Activate it and pip install it:&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;source ~/path/to/my/virtualenv/bin/activate &amp;amp;&amp;amp; pip install ipykernel&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
* Create the following directory path in your homedir if it doesn&#039;t already exist:&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;mkdir -p ~/.local/share/jupyter/kernels/&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
* Think of a nice descriptive name that doesn&#039;t clash with one of the already present kernels. I&#039;ll use &#039;testing&#039;. Create this folder:&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;mkdir ~/.local/share/jupyter/kernels/testing/&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
* Add this file to this folder:&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;vi ~/.local/share/jupyter/kernels/testing/kernel.json &lt;br /&gt;
{&lt;br /&gt;
 &amp;quot;language&amp;quot;: &amp;quot;python&amp;quot;,&lt;br /&gt;
 &amp;quot;argv&amp;quot;: [&lt;br /&gt;
  &amp;quot;~/path/to/my/virtualenv/bin/python&amp;quot;,&lt;br /&gt;
  &amp;quot;-m&amp;quot;,&lt;br /&gt;
  &amp;quot;ipykernel&amp;quot;,&lt;br /&gt;
  &amp;quot;-f&amp;quot;,&lt;br /&gt;
  &amp;quot;{connection_file}&amp;quot;&lt;br /&gt;
 ],&lt;br /&gt;
 &amp;quot;display_name&amp;quot;: &amp;quot;testing&amp;quot;&lt;br /&gt;
}&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
* Reload the JupyterHub page; &#039;testing&#039; should now appear in your kernels list.&lt;br /&gt;
&lt;br /&gt;
You can do more complex things with this, such as construct your own Spark environment. This relies on having the module findspark installed:&lt;br /&gt;
 &amp;lt;nowiki&amp;gt; vi ~/.local/share/jupyter/kernels/mysparkkernel/kernel.json &lt;br /&gt;
{&lt;br /&gt;
 &amp;quot;language&amp;quot;: &amp;quot;python&amp;quot;,&lt;br /&gt;
 &amp;quot;env&amp;quot;: {&lt;br /&gt;
   &amp;quot;SPARK_HOME&amp;quot;:&lt;br /&gt;
     &amp;quot;/cm/shared/apps/spark/my-spark-version&amp;quot;&lt;br /&gt;
 },&lt;br /&gt;
 &amp;quot;argv&amp;quot;: [&lt;br /&gt;
  &amp;quot;/home/myhome/my/spark/venv/bin/python&amp;quot;,&lt;br /&gt;
  &amp;quot;-m&amp;quot;,&lt;br /&gt;
  &amp;quot;ipykernel&amp;quot;,&lt;br /&gt;
  &amp;quot;-c&amp;quot;, &amp;quot;import findspark; findspark.init()&amp;quot;,&lt;br /&gt;
  &amp;quot;-f&amp;quot;,&lt;br /&gt;
  &amp;quot;{connection_file}&amp;quot;&lt;br /&gt;
 ],&lt;br /&gt;
 &amp;quot;display_name&amp;quot;: &amp;quot;My Spark kernel&amp;quot;&lt;br /&gt;
}&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
(You&#039;ll want to make sure your spark cluster has the same environment - start it after activating this venv inside your sbatch script)&lt;br /&gt;
&lt;br /&gt;
== Make IPython work under virtualenv ==&lt;br /&gt;
IPython may not work initially under a virtual environment. It may produce an error message like below:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;    File &amp;quot;/usr/bin/ipython&amp;quot;, line 11&lt;br /&gt;
    print &amp;quot;Could not start qtconsole. Please install ipython-qtconsole&amp;quot;&lt;br /&gt;
                                                                      ^&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This can be resolved by adding a soft link with the name &amp;lt;code&amp;gt;ipython&amp;lt;/code&amp;gt; to the &amp;lt;code&amp;gt;bin&amp;lt;/code&amp;gt; directory in the virtual environment folder.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
ln -s /path/to/virtenv/bin/ipython3 /path/to/virtenv/bin/ipython&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== External links ==&lt;br /&gt;
* [https://pypi.python.org/pypi/virtualenv virtualenv on PyPI]&lt;br /&gt;
* [http://cemcfarland.wordpress.com/2013/03/09/getting-ipython3-working-inside-your-virtualenv/ Solving the IPython hiccup under a virtual environment]&lt;/div&gt;</summary>
		<author><name>Dawes001</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=Using_Slurm&amp;diff=2058</id>
		<title>Using Slurm</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=Using_Slurm&amp;diff=2058"/>
		<updated>2019-10-17T15:19:26Z</updated>

		<summary type="html">&lt;p&gt;Dawes001: /* Batch script */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The resource allocation / scheduling software on Anunna is [http://en.wikipedia.org/wiki/Simple_Linux_Utility_for_Resource_Management SLURM]: &#039;&#039;&#039;S&#039;&#039;&#039;imple &#039;&#039;&#039;L&#039;&#039;&#039;inux &#039;&#039;&#039;U&#039;&#039;&#039;tility for &#039;&#039;&#039;R&#039;&#039;&#039;esource &#039;&#039;&#039;M&#039;&#039;&#039;anagement.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Queues and defaults ==&lt;br /&gt;
&lt;br /&gt;
=== Quality of Service ===&lt;br /&gt;
When submitting a job, you may optionally assign a different Quality of Service to it. You can do this with:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --qos=std&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
By default, jobs will use std, the standard quality.&lt;br /&gt;
&lt;br /&gt;
Optionally, you may elect to reduce the priority of your jobs to low. This comes with a limit of how long each job can be (8h) to prevent the cluster from being locked up entirely with low priority jobs.&lt;br /&gt;
&lt;br /&gt;
The high quality gives jobs a higher priority (20) than std (10) or low (1). It is naturally more expensive.&lt;br /&gt;
&lt;br /&gt;
The highest priority goes to jobs in the interactive quality (100), but you may not submit many jobs, or many large jobs, in this quality. It is exclusively for jobs that run immediately and have hands-on users behind them.&lt;br /&gt;
&lt;br /&gt;
Jobs may be restarted and rescheduled if a job with higher priority needs cluster resources, but as of right now, this is not occurring.&lt;br /&gt;
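&lt;br /&gt;
If the accounting database is readable for you, you can list the configured qualities and their limits yourself; a minimal sketch (the exact columns available may differ):&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
sacctmgr show qos format=Name,Priority,MaxWall&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;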
&lt;br /&gt;
=== Queues ===&lt;br /&gt;
The cluster consists of multiple partitions of nodes that you can submit to. The primary one is &#039;main&#039;. There are other partitions as needed - current plans include &#039;gpu&#039;.&lt;br /&gt;
&lt;br /&gt;
You can see the partitions available with `sinfo`, for example:&lt;br /&gt;
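&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
sinfo      # one line per partition/state combination&lt;br /&gt;
sinfo -s   # compact summary, one line per partition&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;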
&lt;br /&gt;
=== Defaults ===&lt;br /&gt;
The default partition is &#039;main&#039;. This will work for most jobs.&lt;br /&gt;
&lt;br /&gt;
The default qos is &#039;std&#039;.&lt;br /&gt;
&lt;br /&gt;
The default cpu count is 1.&lt;br /&gt;
&lt;br /&gt;
The default run time for a job is &#039;&#039;&#039;1 hour&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
The default memory limit is &#039;&#039;&#039;100MB per node&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Submitting jobs: sbatch ==&lt;br /&gt;
&lt;br /&gt;
=== Example ===&lt;br /&gt;
Consider this simple python3 script that should calculate Pi to 1 million digits:&lt;br /&gt;
&amp;lt;source lang=&#039;python&#039;&amp;gt;&lt;br /&gt;
from decimal import Decimal, getcontext&lt;br /&gt;
D = Decimal&lt;br /&gt;
getcontext().prec = 1000010  # slightly more precision than the million digits we print&lt;br /&gt;
# Bailey-Borwein-Plouffe series; each term adds roughly 1.2 decimal digits&lt;br /&gt;
p = sum(D(1)/16**k*(D(4)/(8*k+1)-D(2)/(8*k+4)-D(1)/(8*k+5)-D(1)/(8*k+6)) for k in range(831000))&lt;br /&gt;
print(str(p)[:1000002])  # &amp;quot;3.&amp;quot; plus 1,000,000 digits&lt;br /&gt;
&amp;lt;/source&amp;gt; &lt;br /&gt;
&lt;br /&gt;
=== Loading modules ===&lt;br /&gt;
Before this script can run, Python3, which is not the default Python version on the cluster, must be loaded into your environment. Availability of (different versions of) software can be checked with the following command:&lt;br /&gt;
  module avail&lt;br /&gt;
&lt;br /&gt;
In the list you should see that python3 is indeed available, and it can then be loaded with the following command:&lt;br /&gt;
  module load python/3.3.3&lt;br /&gt;
&lt;br /&gt;
=== Batch script ===&lt;br /&gt;
[[Creating_sbatch_script | Main Article: Creating a sbatch script]]&lt;br /&gt;
&lt;br /&gt;
The following shell/slurm script can then be used to schedule the job using the sbatch command:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --comment=773320000&lt;br /&gt;
#SBATCH --time=1200&lt;br /&gt;
#SBATCH --mem=2048&lt;br /&gt;
#SBATCH --cpus-per-task=1&lt;br /&gt;
#SBATCH --output=output_%j.txt&lt;br /&gt;
#SBATCH --error=error_output_%j.txt&lt;br /&gt;
#SBATCH --job-name=calc_pi.py&lt;br /&gt;
#SBATCH --mail-type=ALL&lt;br /&gt;
#SBATCH --mail-user=email@org.nl&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
time python3 calc_pi.py&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Submitting ===&lt;br /&gt;
The script, assuming it was named &#039;run_calc_pi.sh&#039;, can then be submitted using the following command:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
sbatch run_calc_pi.sh&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Submitting multiple jobs (simple) ===&lt;br /&gt;
Assuming there are 10 job scripts, named runscript_1.sh through runscript_10.sh, all of these scripts can be submitted using the following line of shell code:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;for i in `seq 1 10`; do echo $i; sbatch runscript_$i.sh;done&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Submitting multiple jobs (complex) ===&lt;br /&gt;
Let&#039;s say you have three job scripts that depend on each other:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;job_1.sh #A simple initialisation script&amp;lt;/source&amp;gt;&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;job_2.sh #An array task&amp;lt;/source&amp;gt;&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;job_3.sh #Some finishing script, single run, after everything previous has finished&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can create a script to simultaneously submit each job with a dependency on each other:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;#!/bin/bash&lt;br /&gt;
JOB1=$(sbatch job_1.sh| rev | cut -d &#039; &#039; -f 1 | rev) #Get me the last space-separated element&lt;br /&gt;
&lt;br /&gt;
if ! [ &amp;quot;z$JOB1&amp;quot; == &amp;quot;z&amp;quot; ] ; then&lt;br /&gt;
  echo &amp;quot;First job submitted as jobid $JOB1&amp;quot;&lt;br /&gt;
  JOB2=$(sbatch --dependency=afterany:$JOB1 job_2.sh| rev | cut -d &#039; &#039; -f 1 | rev)&lt;br /&gt;
&lt;br /&gt;
  if ! [ &amp;quot;z$JOB2&amp;quot; == &amp;quot;z&amp;quot; ] ; then&lt;br /&gt;
  echo &amp;quot;Second job submitted as jobid $JOB2, following $JOB1&amp;quot;&lt;br /&gt;
  JOB3=$(sbatch --dependency=afterany:$JOB2 job_3.sh| rev | cut -d &#039; &#039; -f 1 | rev)&lt;br /&gt;
&lt;br /&gt;
  if ! [ &amp;quot;z$JOB3&amp;quot; == &amp;quot;z&amp;quot; ] ; then&lt;br /&gt;
  echo &amp;quot;Third job submitted as jobid $JOB3, following after every element of $JOB2&amp;quot;&lt;br /&gt;
&lt;br /&gt;
  fi&lt;br /&gt;
 fi&lt;br /&gt;
fi&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This ensures that each subsequent job starts only once the previous one has finished in some way (even if it failed).&lt;br /&gt;
&lt;br /&gt;
Please see [https://slurm.schedmd.com/sbatch.html#OPT_dependency the sbatch documentation] for other options available to you. Note that aftercorr makes a subsequent array job&#039;s elements start after the correspondingly numbered elements of the previous array job.&lt;br /&gt;
&lt;br /&gt;
=== Submitting array jobs ===&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --array=0-10%4&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
SLURM allows you to submit multiple jobs using the same template. Further information about this can be found [[Array_jobs|here]].&lt;br /&gt;
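&lt;br /&gt;
As a minimal sketch (the script name and input files are hypothetical), each array element can select its own input via the $SLURM_ARRAY_TASK_ID environment variable:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --array=0-10%4              # 11 elements, at most 4 running at once&lt;br /&gt;
#SBATCH --output=array_%A_%a.txt    # %A = job id, %a = array index&lt;br /&gt;
&lt;br /&gt;
# each element processes its own input file, e.g. input_0.txt ... input_10.txt&lt;br /&gt;
python3 process.py input_${SLURM_ARRAY_TASK_ID}.txt&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;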
&lt;br /&gt;
=== Using /tmp ===&lt;br /&gt;
There is a local disk of ~300G attached to each node that can be used to temporarily stage some of your workload. This is free to use, but please remember to clean up your data after usage.&lt;br /&gt;
&lt;br /&gt;
In order to be sure that you&#039;re able to use space in /tmp, you can add&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --tmp=&amp;lt;required size&amp;gt;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
To your sbatch script. This will prevent your job from being run on nodes where there is not enough free space, or where the space is already claimed by another job at the same time.&lt;br /&gt;
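&lt;br /&gt;
A minimal staging pattern, assuming your job can work from a private scratch directory (the input and output file names are hypothetical; the trap cleans up even if the job fails):&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
WORKDIR=$(mktemp -d -p /tmp)&lt;br /&gt;
trap &#039;rm -rf &amp;quot;$WORKDIR&amp;quot;&#039; EXIT&lt;br /&gt;
&lt;br /&gt;
cp big_input.dat &amp;quot;$WORKDIR&amp;quot;/        # stage input onto the fast local disk&lt;br /&gt;
cd &amp;quot;$WORKDIR&amp;quot;&lt;br /&gt;
# ... run the actual work here ...&lt;br /&gt;
cp results.out &amp;quot;$SLURM_SUBMIT_DIR&amp;quot;/  # copy results back before the trap fires&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;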
&lt;br /&gt;
=== Using GPU ===&lt;br /&gt;
There are two GPU nodes. In order to run a job that uses a GPU on one of these nodes, you can add&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --gres=gpu:&amp;lt;num gpus&amp;gt;&lt;br /&gt;
#SBATCH --constraint=&amp;lt;gpu flavour e.g. K80, V100&amp;gt;&lt;br /&gt;
#SBATCH --partition=gpu&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
To your sbatch script. Without these parameters, your job won&#039;t run on one of these nodes.&lt;br /&gt;
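&lt;br /&gt;
Putting this together, a sketch of a single-GPU job (the constraint value and the workload are assumptions; check what is actually available):&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --partition=gpu&lt;br /&gt;
#SBATCH --gres=gpu:1&lt;br /&gt;
#SBATCH --constraint=V100&lt;br /&gt;
#SBATCH --time=60&lt;br /&gt;
&lt;br /&gt;
nvidia-smi                 # confirm which GPU the job was given&lt;br /&gt;
python3 my_gpu_script.py   # hypothetical workload&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;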
&lt;br /&gt;
== Monitoring submitted jobs ==&lt;br /&gt;
Once a job is submitted, the status can be monitored using the &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; command. The &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; command has a number of parameters for monitoring specific properties of the jobs such as time limit.&lt;br /&gt;
&lt;br /&gt;
=== Generic monitoring of all running jobs ===&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
  squeue&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You should then get a list of the jobs that are running at that time on the cluster. For example, it may look like this:&lt;br /&gt;
    JOBID PARTITION     NAME     USER  ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
   3396      ABGC BOV-WUR- megen002   R      27:26      1 node004&lt;br /&gt;
   3397      ABGC BOV-WUR- megen002   R      27:26      1 node005&lt;br /&gt;
   3398      ABGC BOV-WUR- megen002   R      27:26      1 node006&lt;br /&gt;
   3399      ABGC BOV-WUR- megen002   R      27:26      1 node007&lt;br /&gt;
   3400      ABGC BOV-WUR- megen002   R      27:26      1 node008&lt;br /&gt;
   3401      ABGC BOV-WUR- megen002   R      27:26      1 node009&lt;br /&gt;
   3385  research BOV-WUR- megen002   R      44:38      1 node049&lt;br /&gt;
   3386  research BOV-WUR- megen002   R      44:38      1 node050&lt;br /&gt;
   3387  research BOV-WUR- megen002   R      44:38      1 node051&lt;br /&gt;
   3388  research BOV-WUR- megen002   R      44:38      1 node052&lt;br /&gt;
   3389  research BOV-WUR- megen002   R      44:38      1 node053&lt;br /&gt;
   3390  research BOV-WUR- megen002   R      44:38      1 node054&lt;br /&gt;
   3391  research BOV-WUR- megen002   R      44:38      3 node[049-051]&lt;br /&gt;
   3392  research BOV-WUR- megen002   R      44:38      3 node[052-054]&lt;br /&gt;
   3393  research BOV-WUR- megen002   R      44:38      1 node001&lt;br /&gt;
   3394  research BOV-WUR- megen002   R      44:38      1 node002&lt;br /&gt;
   3395  research BOV-WUR- megen002   R      44:38      1 node003&lt;br /&gt;
&lt;br /&gt;
=== Monitoring time limit set for a specific job ===&lt;br /&gt;
The default time limit is set at one hour, so estimated run times need to be specified when submitting longer jobs. The time limit set for a certain job can be checked using the &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; command.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
squeue -l -j 3532&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Information similar to the following should appear:&lt;br /&gt;
  Fri Nov 29 15:41:00 2013&lt;br /&gt;
   JOBID PARTITION     NAME     USER    STATE       TIME TIMELIMIT  NODES NODELIST(REASON)&lt;br /&gt;
   3532      ABGC BOV-WUR- megen002  RUNNING    2:47:03 3-08:00:00      1 node054&lt;br /&gt;
&lt;br /&gt;
=== Query a specific active job: scontrol ===&lt;br /&gt;
scontrol shows all the details of a currently active job (not a completed one).&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
login ~]$ scontrol show jobid 4241&lt;br /&gt;
JobId=4241 Name=WB20F06&lt;br /&gt;
   UserId=megen002(16795409) GroupId=domain users(16777729)&lt;br /&gt;
   Priority=1 Account=(null) QOS=normal&lt;br /&gt;
   JobState=RUNNING Reason=None Dependency=(null)&lt;br /&gt;
   Requeue=1 Restarts=0 BatchFlag=1 ExitCode=0:0&lt;br /&gt;
   RunTime=02:55:25 TimeLimit=3-08:00:00 TimeMin=N/A&lt;br /&gt;
   SubmitTime=2013-12-09T13:37:29 EligibleTime=2013-12-09T13:37:29&lt;br /&gt;
   StartTime=2013-12-09T13:37:29 EndTime=2013-12-12T21:37:29&lt;br /&gt;
   PreemptTime=None SuspendTime=None SecsPreSuspend=0&lt;br /&gt;
   Partition=research AllocNode:Sid=login0:21799&lt;br /&gt;
   ReqNodeList=(null) ExcNodeList=(null)&lt;br /&gt;
   NodeList=node023&lt;br /&gt;
   BatchHost=node023&lt;br /&gt;
   NumNodes=1 NumCPUs=4 CPUs/Task=1 ReqS:C:T=*:*:*&lt;br /&gt;
   MinCPUsNode=1 MinMemoryNode=0 MinTmpDiskNode=0&lt;br /&gt;
   Features=(null) Gres=(null) Reservation=(null)&lt;br /&gt;
   Shared=OK Contiguous=0 Licenses=(null) Network=(null)&lt;br /&gt;
   Command=/lustre/scratch/WUR/ABGC/...&lt;br /&gt;
   WorkDir=/lustre/scratch/WUR/ABGC/...&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Check on a pending job ===&lt;br /&gt;
A submitted job can end up in a pending state when there are not enough resources available for it.&lt;br /&gt;
In this example I submit a job, check the status, and after finding out it is &#039;&#039;&#039;pending&#039;&#039;&#039; I check when it will probably start.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
[@login jobs]$ sbatch hpl_student.job&lt;br /&gt;
 Submitted batch job 740338&lt;br /&gt;
&lt;br /&gt;
[@login jobs]$ squeue -l -j 740338&lt;br /&gt;
 Fri Feb 21 15:32:31 2014&lt;br /&gt;
  JOBID PARTITION     NAME     USER    STATE       TIME TIMELIMIT  NODES NODELIST(REASON)&lt;br /&gt;
 740338 ABGC_Stud HPLstude bohme999  PENDING       0:00 1-00:00:00      1 (ReqNodeNotAvail)&lt;br /&gt;
&lt;br /&gt;
[@login jobs]$ squeue --start -j 740338&lt;br /&gt;
  JOBID PARTITION     NAME     USER  ST           START_TIME  NODES NODELIST(REASON)&lt;br /&gt;
 740338 ABGC_Stud HPLstude bohme999  PD  2014-02-22T15:31:48      1 (ReqNodeNotAvail)&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
So it seems this job will probably start the next day, but that&#039;s no guarantee that it actually will.&lt;br /&gt;
&lt;br /&gt;
== Removing jobs from a list: scancel ==&lt;br /&gt;
If for some reason you want to delete a job that is either in the queue or already running, you can remove it using the &#039;scancel&#039; command. The &#039;scancel&#039; command takes the jobid as a parameter. For the example above, this would be done using the following code:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
scancel 3401&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
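&lt;br /&gt;
scancel can also match several jobs at once; for example, to cancel all of your own jobs, or only the ones that have not started yet:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
scancel -u $USER                  # everything you have queued or running&lt;br /&gt;
scancel -u $USER --state=PENDING  # only jobs that are still waiting&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;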
&lt;br /&gt;
== Allocating resources interactively: sinteractive ==&lt;br /&gt;
sinteractive is a tiny wrapper on srun to create interactive jobs quickly and easily. It allows you to get a shell on one of the nodes, with similar limits as you would set for a normal job. To use it, simply run:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
sinteractive -c &amp;lt;num_cpus&amp;gt; --mem &amp;lt;amount_mem&amp;gt; --time &amp;lt;minutes&amp;gt; -p &amp;lt;partition&amp;gt;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
You will then be presented with a new shell prompt on one of the compute nodes (run &#039;hostname&#039; to see which!). From here, you can test out code in an interactive fashion as needs be.&lt;br /&gt;
&lt;br /&gt;
Be advised though - not filling in the above fields will get you a shell with 1 CPU and 100MB of RAM for 1 hour. This is still useful for quick testing, however.&lt;br /&gt;
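&lt;br /&gt;
For example, to get a 4-CPU, 8 GB shell for two hours on the main partition (the values are just an illustration):&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
sinteractive -c 4 --mem 8192 --time 120 -p main&lt;br /&gt;
hostname   # shows which compute node you landed on&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;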
&lt;br /&gt;
=== sinteractive source ===&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
srun &amp;quot;$@&amp;quot; -I60 -N 1 -n 1 --pty bash -i&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== interactive Slurm - using salloc ===&lt;br /&gt;
If you don&#039;t want your shell to be moved to a compute node, but instead want a new shell that holds an allocation, do:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
salloc -p ABGC_Low $SHELL&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Now your shell will stay on the login node, but you can do:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
srun &amp;lt;command&amp;gt; &amp;amp;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
To submit tasks to this allocation!&lt;br /&gt;
&lt;br /&gt;
Be aware that the default time limit of salloc is 1 hour. If you intend to run jobs for longer than this, you need to adjust the settings for it. See: https://computing.llnl.gov/linux/slurm/salloc.html&lt;br /&gt;
&lt;br /&gt;
== Get overview of past and current jobs: sacct ==&lt;br /&gt;
To do some accounting on past and present jobs, and to see whether they ran to completion, you can do:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
sacct&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
This should provide information similar to the following:&lt;br /&gt;
&lt;br /&gt;
         JobID    JobName  Partition    Account  AllocCPUS      State ExitCode &lt;br /&gt;
  ------------ ---------- ---------- ---------- ---------- ---------- -------- &lt;br /&gt;
  3385         BOV-WUR-58   research                    12  COMPLETED      0:0 &lt;br /&gt;
  3385.batch        batch                                1  COMPLETED      0:0 &lt;br /&gt;
  3386         BOV-WUR-59   research                    12 CANCELLED+      0:0 &lt;br /&gt;
  3386.batch        batch                                1  CANCELLED     0:15 &lt;br /&gt;
  3528         BOV-WUR-59       ABGC                    16    RUNNING      0:0 &lt;br /&gt;
  3529         BOV-WUR-60       ABGC                    16    RUNNING      0:0&lt;br /&gt;
&lt;br /&gt;
Or in more detail for a specific job:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
sacct --format=jobid,jobname,comment,partition,ntasks,alloccpus,elapsed,state,exitcode -j 4220&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
This should provide information about job id 4220:&lt;br /&gt;
&lt;br /&gt;
       JobID    JobName    Comment   Partition   NTasks  AllocCPUS    Elapsed      State ExitCode &lt;br /&gt;
  ------------ ---------- ---------- ---------- -------- ---------- ---------- ---------- -------- &lt;br /&gt;
  4220         PreProces+              research                   3   00:30:52  COMPLETED      0:0 &lt;br /&gt;
  4220.batch        batch                              1          1   00:30:52  COMPLETED      0:0&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Job Status Codes&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Typically your job will be in either the RUNNING or the PENDING state. However, here is a breakdown of all the states your job could be in.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!Code!!State!!Description&lt;br /&gt;
|-&lt;br /&gt;
|CA	||CANCELLED||	Job was explicitly cancelled by the user or system administrator. The job may or may not have been initiated.&lt;br /&gt;
|-&lt;br /&gt;
|CD||	COMPLETED||	Job has terminated all processes on all nodes.&lt;br /&gt;
|-&lt;br /&gt;
|CF||	CONFIGURING||	Job has been allocated resources, but is waiting for them to become ready for use (e.g. booting).&lt;br /&gt;
|-&lt;br /&gt;
|CG||	COMPLETING||	Job is in the process of completing. Some processes on some nodes may still be active.&lt;br /&gt;
|-&lt;br /&gt;
|F||	FAILED||	Job terminated with non-zero exit code or other failure condition.&lt;br /&gt;
|-&lt;br /&gt;
|NF||	NODE_FAIL||	Job terminated due to failure of one or more allocated nodes.&lt;br /&gt;
|-&lt;br /&gt;
|PD||	PENDING||	Job is awaiting resource allocation.&lt;br /&gt;
|-&lt;br /&gt;
|R||	RUNNING||	Job currently has an allocation.&lt;br /&gt;
|-&lt;br /&gt;
|S||	SUSPENDED||	Job has an allocation, but execution has been suspended.&lt;br /&gt;
|-&lt;br /&gt;
|TO||	TIMEOUT||	Job terminated upon reaching its time limit.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Running MPI jobs on Anunna ==&lt;br /&gt;
&lt;br /&gt;
[[MPI_on_B4F_cluster | Main article: MPI on Anunna]]&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
* [[Tariffs | Costs associated with resource usage]]&lt;br /&gt;
* [[B4F_cluster | Anunna]]&lt;br /&gt;
* [[BCM_on_B4F_cluster | BCM on Anunna]]&lt;br /&gt;
* [[SLURM_Compare | SLURM compared to other common schedulers]]&lt;br /&gt;
* [[Setting_up_Python_virtualenv | Setting up and using a virtual environment for Python3 ]]&lt;br /&gt;
&lt;br /&gt;
== External links ==&lt;br /&gt;
* [http://slurm.schedmd.com Slurm official documentation]&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Simple_Linux_Utility_for_Resource_Management Slurm on Wikipedia]&lt;br /&gt;
* [http://www.youtube.com/watch?v=axWffyrk3aY Slurm Tutorial on Youtube]&lt;/div&gt;</summary>
		<author><name>Dawes001</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=Using_Slurm&amp;diff=2056</id>
		<title>Using Slurm</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=Using_Slurm&amp;diff=2056"/>
		<updated>2019-10-02T08:00:14Z</updated>

		<summary type="html">&lt;p&gt;Dawes001: /* Using GPU */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The resource allocation / scheduling software on Anunna is [http://en.wikipedia.org/wiki/Simple_Linux_Utility_for_Resource_Management SLURM]: &#039;&#039;&#039;S&#039;&#039;&#039;imple &#039;&#039;&#039;L&#039;&#039;&#039;inux &#039;&#039;&#039;U&#039;&#039;&#039;tility for &#039;&#039;&#039;R&#039;&#039;&#039;esource &#039;&#039;&#039;M&#039;&#039;&#039;anagement.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Queues and defaults ==&lt;br /&gt;
&lt;br /&gt;
=== Quality of Service ===&lt;br /&gt;
When submitting a job, you may optionally assign a different Quality of Service to it. You can do this with:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --qos=std&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
By default, jobs will use std, the standard quality.&lt;br /&gt;
&lt;br /&gt;
Optionally, you may elect to reduce the priority of your jobs to low. This comes with a limit of how long each job can be (8h) to prevent the cluster from being locked up entirely with low priority jobs.&lt;br /&gt;
&lt;br /&gt;
The high quality gives jobs a higher priority (20) than std (10) or low (1). It is naturally more expensive.&lt;br /&gt;
&lt;br /&gt;
The highest priority goes to jobs in the interactive quality (100), but you may not submit many jobs, or many large jobs, in this quality. It is exclusively for jobs that run immediately and have hands-on users behind them.&lt;br /&gt;
&lt;br /&gt;
Jobs may be restarted and rescheduled if a job with higher priority needs cluster resources, but as of right now, this is not occurring.&lt;br /&gt;
&lt;br /&gt;
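As a sketch (using the example script from below), the QoS can also be chosen on the command line at submission time instead of inside the script:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
sbatch --qos=low run_calc_pi.sh   # command-line options override any #SBATCH --qos line in the script&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;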
=== Queues ===&lt;br /&gt;
The cluster consists of multiple partitions of nodes that you can submit to. The primary one is &#039;main&#039;. Other partitions exist as needed; current plans include &#039;gpu&#039;.&lt;br /&gt;
&lt;br /&gt;
You can see the partitions available with `sinfo`:&lt;br /&gt;
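&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
sinfo   # lists each partition with its availability, time limit, node count, state and node list&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;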
&lt;br /&gt;
=== Defaults ===&lt;br /&gt;
The default partition is &#039;main&#039;. This will work for most jobs.&lt;br /&gt;
&lt;br /&gt;
The default qos is &#039;std&#039;.&lt;br /&gt;
&lt;br /&gt;
The default cpu count is 1.&lt;br /&gt;
&lt;br /&gt;
The default run time for a job is &#039;&#039;&#039;1 hour&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
The default memory limit is &#039;&#039;&#039;100MB per node&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Submitting jobs: sbatch ==&lt;br /&gt;
&lt;br /&gt;
=== Example ===&lt;br /&gt;
Consider this simple Python 3 script, which uses the Bailey-Borwein-Plouffe series to calculate digits of Pi (as written, its 411 terms give roughly 500 correct digits, even though the Decimal precision and the print slice ask for ten million):&lt;br /&gt;
&amp;lt;source lang=&#039;python&#039;&amp;gt;&lt;br /&gt;
from decimal import Decimal, getcontext&lt;br /&gt;
D = Decimal&lt;br /&gt;
getcontext().prec = 10000000  # working precision, in decimal digits&lt;br /&gt;
# Bailey-Borwein-Plouffe series for Pi&lt;br /&gt;
p = sum(D(1)/16**k*(D(4)/(8*k+1)-D(2)/(8*k+4)-D(1)/(8*k+5)-D(1)/(8*k+6)) for k in range(411))&lt;br /&gt;
print(str(p)[:10000002])  # &#039;3.&#039; plus the requested digits&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Loading modules ===&lt;br /&gt;
In order for this script to run, Python 3 - which is not the default Python version on the cluster - must first be loaded into your environment. The availability of (different versions of) software can be checked with the following command:&lt;br /&gt;
  module avail&lt;br /&gt;
&lt;br /&gt;
The list should show that python3 is indeed available; it can then be loaded with the following command:&lt;br /&gt;
  module load python/3.3.3&lt;br /&gt;
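&lt;br /&gt;
To verify that the module is now active in your environment, you can list the loaded modules:&lt;br /&gt;
  module list&lt;br /&gt;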
&lt;br /&gt;
=== Batch script ===&lt;br /&gt;
[[Creating_sbatch_script | Main Article: Creating an sbatch script]]&lt;br /&gt;
&lt;br /&gt;
The following shell/Slurm script can then be used to schedule the job using the sbatch command:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --comment=773320000            # free-form comment field (here holding a project number)&lt;br /&gt;
#SBATCH --time=1200                    # time limit; a bare number is interpreted as minutes&lt;br /&gt;
#SBATCH --mem=2048                     # memory per node, in MB&lt;br /&gt;
#SBATCH --ntasks=1                     # a single task&lt;br /&gt;
#SBATCH --output=output_%j.txt         # %j expands to the job id&lt;br /&gt;
#SBATCH --error=error_output_%j.txt&lt;br /&gt;
#SBATCH --job-name=calc_pi.py&lt;br /&gt;
#SBATCH --mail-type=ALL                # mail on all job state changes&lt;br /&gt;
#SBATCH --mail-user=email@org.nl&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
time python3 calc_pi.py&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Submitting ===&lt;br /&gt;
The script, assuming it was named &#039;run_calc_pi.sh&#039;, can then be submitted using the following command:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
sbatch run_calc_pi.sh&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Submitting multiple jobs (simple) ===&lt;br /&gt;
Assuming there are 10 job scripts, named runscript_1.sh through runscript_10.sh, all of them can be submitted using the following line of shell code:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;for i in $(seq 1 10); do echo $i; sbatch runscript_$i.sh; done&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Submitting multiple jobs (complex) ===&lt;br /&gt;
Let&#039;s say you have three job scripts that depend on each other:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;job_1.sh #A simple initialisation script&amp;lt;/source&amp;gt;&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;job_2.sh #An array task&amp;lt;/source&amp;gt;&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;job_3.sh #Some finishing script, single run, after everything previous has finished&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can create a script that submits all three jobs in one go, each with a dependency on the previous one:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;#!/bin/bash&lt;br /&gt;
JOB1=$(sbatch job_1.sh | rev | cut -d &#039; &#039; -f 1 | rev) # keep the last space-separated element: the job id&lt;br /&gt;
&lt;br /&gt;
if ! [ &amp;quot;z$JOB1&amp;quot; == &amp;quot;z&amp;quot; ] ; then&lt;br /&gt;
  echo &amp;quot;First job submitted as jobid $JOB1&amp;quot;&lt;br /&gt;
  JOB2=$(sbatch --dependency=afterany:$JOB1 job_2.sh | rev | cut -d &#039; &#039; -f 1 | rev)&lt;br /&gt;
&lt;br /&gt;
  if ! [ &amp;quot;z$JOB2&amp;quot; == &amp;quot;z&amp;quot; ] ; then&lt;br /&gt;
    echo &amp;quot;Second job submitted as jobid $JOB2, following $JOB1&amp;quot;&lt;br /&gt;
    JOB3=$(sbatch --dependency=afterany:$JOB2 job_3.sh | rev | cut -d &#039; &#039; -f 1 | rev)&lt;br /&gt;
&lt;br /&gt;
    if ! [ &amp;quot;z$JOB3&amp;quot; == &amp;quot;z&amp;quot; ] ; then&lt;br /&gt;
      echo &amp;quot;Third job submitted as jobid $JOB3, following after every element of $JOB2&amp;quot;&lt;br /&gt;
    fi&lt;br /&gt;
  fi&lt;br /&gt;
fi&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This ensures that each subsequent job starts only after the previous one has finished in any state (even if it failed).&lt;br /&gt;
&lt;br /&gt;
Please see [https://slurm.schedmd.com/sbatch.html#OPT_dependency the sbatch documentation] for the other options available to you. Note that aftercorr makes each array element of a subsequent array job start after the correspondingly numbered element of the previous array job has completed successfully.&lt;br /&gt;
&lt;br /&gt;
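As a sketch of a more robust alternative, sbatch&#039;s --parsable flag prints just the job id (on multi-cluster setups, jobid;cluster), which avoids the rev/cut parsing above:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
JOB1=$(sbatch --parsable job_1.sh)&lt;br /&gt;
JOB2=$(sbatch --parsable --dependency=afterany:$JOB1 job_2.sh)&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;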
=== Submitting array jobs ===&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --array=0-10%4&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
SLURM allows you to submit many similar jobs from a single template. In the example above, array tasks 0 through 10 are created, with at most 4 running at the same time (the %4 suffix); each task can read its own index from the SLURM_ARRAY_TASK_ID environment variable, as sketched below. Further information about this can be found [[Array_jobs|here]].&lt;br /&gt;
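&lt;br /&gt;
A minimal sketch (the input file names and processing script are hypothetical):&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --array=0-10%4                 # 11 tasks, at most 4 running at once&lt;br /&gt;
#SBATCH --output=array_%A_%a.txt       # %A = array job id, %a = task index&lt;br /&gt;
&lt;br /&gt;
# each task processes its own (hypothetical) chunk of input&lt;br /&gt;
python3 process_chunk.py chunk_${SLURM_ARRAY_TASK_ID}.dat&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;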
&lt;br /&gt;
=== Using /tmp ===&lt;br /&gt;
Each node has a local disk of ~300G attached that can be used to temporarily stage some of your workload. This is free to use, but please remember to clean up your data after usage.&lt;br /&gt;
&lt;br /&gt;
In order to be sure that you&#039;re able to use space in /tmp, you can add&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --tmp=&amp;lt;required size&amp;gt;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
to your sbatch script (the size is in MB by default; suffixes like G are accepted). This prevents your job from being scheduled on a node where the requested space is not free, or where another job is set to use it at the same time.&lt;br /&gt;
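&lt;br /&gt;
A sketch of the staging pattern (the file names are hypothetical); a per-job directory under /tmp keeps data separate, and the trap cleans it up even if the job fails:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --tmp=4096                     # ask for 4 GB of local scratch&lt;br /&gt;
&lt;br /&gt;
SCRATCH=/tmp/$SLURM_JOB_ID             # per-job directory on the local disk&lt;br /&gt;
mkdir -p &amp;quot;$SCRATCH&amp;quot;&lt;br /&gt;
trap &#039;rm -rf &amp;quot;$SCRATCH&amp;quot;&#039; EXIT   # clean up when the job ends, even on failure&lt;br /&gt;
cp big_input.dat &amp;quot;$SCRATCH&amp;quot;/    # stage input onto the fast local disk&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;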
&lt;br /&gt;
=== Using GPU ===&lt;br /&gt;
There are two GPU nodes. In order to run a job that uses a GPU on one of these nodes, you can add&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --gres=gpu:&amp;lt;num gpus&amp;gt;&lt;br /&gt;
#SBATCH --constraint=&amp;lt;gpu flavour e.g. K80, V100&amp;gt;&lt;br /&gt;
#SBATCH --partition=gpu&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
to your sbatch script. Without these parameters, your job won&#039;t run on one of these nodes.&lt;br /&gt;
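&lt;br /&gt;
A minimal sketch of a complete GPU batch script (the program name is hypothetical; the flavour names come from the list above):&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --partition=gpu&lt;br /&gt;
#SBATCH --gres=gpu:1              # request one GPU&lt;br /&gt;
#SBATCH --constraint=V100         # pick the GPU flavour&lt;br /&gt;
#SBATCH --time=60&lt;br /&gt;
#SBATCH --mem=4096&lt;br /&gt;
&lt;br /&gt;
./my_gpu_program                  # hypothetical CUDA program&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;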
&lt;br /&gt;
== Monitoring submitted jobs ==&lt;br /&gt;
Once a job is submitted, its status can be monitored using the &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; command. &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; has a number of parameters for monitoring specific properties of jobs, such as the time limit.&lt;br /&gt;
&lt;br /&gt;
=== Generic monitoring of all running jobs ===&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
  squeue&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You should then get a list of the jobs running on the cluster at that moment. For the &#039;sbatch&#039; example above, it may look like this:&lt;br /&gt;
    JOBID PARTITION     NAME     USER  ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
   3396      ABGC BOV-WUR- megen002   R      27:26      1 node004&lt;br /&gt;
   3397      ABGC BOV-WUR- megen002   R      27:26      1 node005&lt;br /&gt;
   3398      ABGC BOV-WUR- megen002   R      27:26      1 node006&lt;br /&gt;
   3399      ABGC BOV-WUR- megen002   R      27:26      1 node007&lt;br /&gt;
   3400      ABGC BOV-WUR- megen002   R      27:26      1 node008&lt;br /&gt;
   3401      ABGC BOV-WUR- megen002   R      27:26      1 node009&lt;br /&gt;
   3385  research BOV-WUR- megen002   R      44:38      1 node049&lt;br /&gt;
   3386  research BOV-WUR- megen002   R      44:38      1 node050&lt;br /&gt;
   3387  research BOV-WUR- megen002   R      44:38      1 node051&lt;br /&gt;
   3388  research BOV-WUR- megen002   R      44:38      1 node052&lt;br /&gt;
   3389  research BOV-WUR- megen002   R      44:38      1 node053&lt;br /&gt;
   3390  research BOV-WUR- megen002   R      44:38      1 node054&lt;br /&gt;
   3391  research BOV-WUR- megen002   R      44:38      3 node[049-051]&lt;br /&gt;
   3392  research BOV-WUR- megen002   R      44:38      3 node[052-054]&lt;br /&gt;
   3393  research BOV-WUR- megen002   R      44:38      1 node001&lt;br /&gt;
   3394  research BOV-WUR- megen002   R      44:38      1 node002&lt;br /&gt;
   3395  research BOV-WUR- megen002   R      44:38      1 node003&lt;br /&gt;
&lt;br /&gt;
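To restrict the listing to your own jobs, you can filter on user name:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
squeue -u $USER&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;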
=== Monitoring time limit set for a specific job ===&lt;br /&gt;
The default time limit is one hour; estimated run times should be specified when submitting jobs. To see the time limit set for a certain job, use the &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; command:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
squeue -l -j 3532&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Information similar to the following should appear:&lt;br /&gt;
  Fri Nov 29 15:41:00 2013&lt;br /&gt;
   JOBID PARTITION     NAME     USER    STATE       TIME TIMELIMIT  NODES NODELIST(REASON)&lt;br /&gt;
   3532      ABGC BOV-WUR- megen002  RUNNING    2:47:03 3-08:00:00      1 node054&lt;br /&gt;
&lt;br /&gt;
=== Query a specific active job: scontrol ===&lt;br /&gt;
This shows all the details of a currently active job (so not of a completed job):&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
login ~]$ scontrol show jobid 4241&lt;br /&gt;
JobId=4241 Name=WB20F06&lt;br /&gt;
   UserId=megen002(16795409) GroupId=domain users(16777729)&lt;br /&gt;
   Priority=1 Account=(null) QOS=normal&lt;br /&gt;
   JobState=RUNNING Reason=None Dependency=(null)&lt;br /&gt;
   Requeue=1 Restarts=0 BatchFlag=1 ExitCode=0:0&lt;br /&gt;
   RunTime=02:55:25 TimeLimit=3-08:00:00 TimeMin=N/A&lt;br /&gt;
   SubmitTime=2013-12-09T13:37:29 EligibleTime=2013-12-09T13:37:29&lt;br /&gt;
   StartTime=2013-12-09T13:37:29 EndTime=2013-12-12T21:37:29&lt;br /&gt;
   PreemptTime=None SuspendTime=None SecsPreSuspend=0&lt;br /&gt;
   Partition=research AllocNode:Sid=login0:21799&lt;br /&gt;
   ReqNodeList=(null) ExcNodeList=(null)&lt;br /&gt;
   NodeList=node023&lt;br /&gt;
   BatchHost=node023&lt;br /&gt;
   NumNodes=1 NumCPUs=4 CPUs/Task=1 ReqS:C:T=*:*:*&lt;br /&gt;
   MinCPUsNode=1 MinMemoryNode=0 MinTmpDiskNode=0&lt;br /&gt;
   Features=(null) Gres=(null) Reservation=(null)&lt;br /&gt;
   Shared=OK Contiguous=0 Licenses=(null) Network=(null)&lt;br /&gt;
   Command=/lustre/scratch/WUR/ABGC/...&lt;br /&gt;
   WorkDir=/lustre/scratch/WUR/ABGC/...&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Check on a pending job ===&lt;br /&gt;
A submitted job may end up in a pending state when there are not enough resources available for it.&lt;br /&gt;
In this example I submit a job, check the status, and after finding out it is &#039;&#039;&#039;pending&#039;&#039;&#039;, I check when it will probably start.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
[@login jobs]$ sbatch hpl_student.job&lt;br /&gt;
 Submitted batch job 740338&lt;br /&gt;
&lt;br /&gt;
[@login jobs]$ squeue -l -j 740338&lt;br /&gt;
 Fri Feb 21 15:32:31 2014&lt;br /&gt;
  JOBID PARTITION     NAME     USER    STATE       TIME TIMELIMIT  NODES NODELIST(REASON)&lt;br /&gt;
 740338 ABGC_Stud HPLstude bohme999  PENDING       0:00 1-00:00:00      1 (ReqNodeNotAvail)&lt;br /&gt;
&lt;br /&gt;
[@login jobs]$ squeue --start -j 740338&lt;br /&gt;
  JOBID PARTITION     NAME     USER  ST           START_TIME  NODES NODELIST(REASON)&lt;br /&gt;
 740338 ABGC_Stud HPLstude bohme999  PD  2014-02-22T15:31:48      1 (ReqNodeNotAvail)&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
So it seems this job will probably start the next day, but that is no guarantee that it will indeed start then.&lt;br /&gt;
&lt;br /&gt;
== Removing jobs from a list: scancel ==&lt;br /&gt;
If for some reason you want to delete a job that is either in the queue or already running, you can remove it using the &#039;scancel&#039; command, which takes the job id as a parameter. For the example above, this would be done using the following code:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
scancel 3401&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
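&lt;br /&gt;
As a sketch, scancel can also act on whole sets of jobs at once; for example, to cancel all of your own pending jobs:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
scancel --user=$USER --state=PENDING&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;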
&lt;br /&gt;
== Allocating resources interactively: sinteractive ==&lt;br /&gt;
sinteractive is a tiny wrapper around srun for creating interactive jobs quickly and easily. It gives you a shell on one of the nodes, subject to the same kinds of limits as a normal job. To use it, simply run:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
sinteractive -c &amp;lt;num_cpus&amp;gt; --mem &amp;lt;amount_mem&amp;gt; --time &amp;lt;minutes&amp;gt; -p &amp;lt;partition&amp;gt;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
You will then be presented with a new shell prompt on one of the compute nodes (run &#039;hostname&#039; to see which!). From here, you can test out code interactively as needed.&lt;br /&gt;
&lt;br /&gt;
Be advised though: not filling in the above fields will get you a shell with 1 CPU and 100 MB of RAM for 1 hour. This is still useful for quick testing.&lt;br /&gt;
&lt;br /&gt;
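For example (the values are illustrative), a two-hour session with 2 CPUs and 2 GB of memory on the main partition:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
sinteractive -c 2 --mem 2048 --time 120 -p main&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;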
=== sinteractive source ===&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
# -I60: give up if resources are not available within 60 seconds; -N 1 -n 1: one task on one node; --pty: attach a pseudo-terminal&lt;br /&gt;
srun &amp;quot;$@&amp;quot; -I60 -N 1 -n 1 --pty bash -i&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Interactive Slurm - using salloc ===&lt;br /&gt;
If you don&#039;t want to be moved onto a compute node, but instead want a new shell that holds a job allocation, do:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
salloc -p ABGC_Low $SHELL&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Now your shell will stay on the login node, but you can do:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
srun &amp;lt;command&amp;gt; &amp;amp;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
to run tasks inside the new allocation!&lt;br /&gt;
&lt;br /&gt;
Be aware that the default time limit of salloc is 1 hour. If you intend to run jobs for longer than this, pass a longer time limit when allocating, as sketched below. See: https://computing.llnl.gov/linux/slurm/salloc.html&lt;br /&gt;
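&lt;br /&gt;
For example (values illustrative), a four-hour allocation on the main partition:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
salloc -p main --time=240 $SHELL&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;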
&lt;br /&gt;
== Get overview of past and current jobs: sacct ==&lt;br /&gt;
To do some accounting on past and present jobs, and to see whether they ran to completion, you can do:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
sacct&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
This should provide information similar to the following:&lt;br /&gt;
&lt;br /&gt;
         JobID    JobName  Partition    Account  AllocCPUS      State ExitCode &lt;br /&gt;
  ------------ ---------- ---------- ---------- ---------- ---------- -------- &lt;br /&gt;
  3385         BOV-WUR-58   research                    12  COMPLETED      0:0 &lt;br /&gt;
  3385.batch        batch                                1  COMPLETED      0:0 &lt;br /&gt;
  3386         BOV-WUR-59   research                    12 CANCELLED+      0:0 &lt;br /&gt;
  3386.batch        batch                                1  CANCELLED     0:15 &lt;br /&gt;
  3528         BOV-WUR-59       ABGC                    16    RUNNING      0:0 &lt;br /&gt;
  3529         BOV-WUR-60       ABGC                    16    RUNNING      0:0&lt;br /&gt;
&lt;br /&gt;
Or in more detail for a specific job:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
sacct --format=jobid,jobname,comment,partition,ntasks,alloccpus,elapsed,state,exitcode -j 4220&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
This should provide information about job id 4220:&lt;br /&gt;
&lt;br /&gt;
       JobID    JobName    Comment   Partition   NTasks  AllocCPUS    Elapsed      State ExitCode &lt;br /&gt;
  ------------ ---------- ---------- ---------- -------- ---------- ---------- ---------- -------- &lt;br /&gt;
  4220         PreProces+              research                   3   00:30:52  COMPLETED      0:0 &lt;br /&gt;
  4220.batch        batch                              1          1   00:30:52  COMPLETED      0:0&lt;br /&gt;
&lt;br /&gt;
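By default sacct reports on recent jobs; to look further back in time, pass a start date (the date is illustrative):&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
sacct -S 2013-11-01 --format=jobid,jobname,elapsed,state,exitcode&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;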
&#039;&#039;&#039;Job Status Codes&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Typically your job will be in either the Running or the PenDing state. However, here is a breakdown of all the states that your job could be in.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!Code!!State!!Description&lt;br /&gt;
|-&lt;br /&gt;
|CA	||CANCELLED||	Job was explicitly cancelled by the user or system administrator. The job may or may not have been initiated.&lt;br /&gt;
|-&lt;br /&gt;
|CD||	COMPLETED||	Job has terminated all processes on all nodes.&lt;br /&gt;
|-&lt;br /&gt;
|CF||	CONFIGURING||	Job has been allocated resources, but is waiting for them to become ready for use (e.g. booting).&lt;br /&gt;
|-&lt;br /&gt;
|CG||	COMPLETING||	Job is in the process of completing. Some processes on some nodes may still be active.&lt;br /&gt;
|-&lt;br /&gt;
|F||	FAILED||	Job terminated with non-zero exit code or other failure condition.&lt;br /&gt;
|-&lt;br /&gt;
|NF||	NODE_FAIL||	Job terminated due to failure of one or more allocated nodes.&lt;br /&gt;
|-&lt;br /&gt;
|PD||	PENDING||	Job is awaiting resource allocation.&lt;br /&gt;
|-&lt;br /&gt;
|R||	RUNNING||	Job currently has an allocation.&lt;br /&gt;
|-&lt;br /&gt;
|S||	SUSPENDED||	Job has an allocation, but execution has been suspended.&lt;br /&gt;
|-&lt;br /&gt;
|TO||	TIMEOUT||	Job terminated upon reaching its time limit.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Running MPI jobs on Anunna ==&lt;br /&gt;
&lt;br /&gt;
[[MPI_on_B4F_cluster | Main article: MPI on Anunna]]&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
* [[Tariffs | Costs associated with resource usage]]&lt;br /&gt;
* [[B4F_cluster | Anunna]]&lt;br /&gt;
* [[BCM_on_B4F_cluster | BCM on Anunna]]&lt;br /&gt;
* [[SLURM_Compare | SLURM compared to other common schedulers]]&lt;br /&gt;
* [[Setting_up_Python_virtualenv | Setting up and using a virtual environment for Python3 ]]&lt;br /&gt;
&lt;br /&gt;
== External links ==&lt;br /&gt;
* [http://slurm.schedmd.com Slurm official documentation]&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Simple_Linux_Utility_for_Resource_Management Slurm on Wikipedia]&lt;br /&gt;
* [http://www.youtube.com/watch?v=axWffyrk3aY Slurm Tutorial on Youtube]&lt;/div&gt;</summary>
		<author><name>Dawes001</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=Using_Slurm&amp;diff=2054</id>
		<title>Using Slurm</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=Using_Slurm&amp;diff=2054"/>
		<updated>2019-10-01T08:07:53Z</updated>

		<summary type="html">&lt;p&gt;Dawes001: /* Using GPU */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The resource allocation / scheduling software on Anunna is [http://en.wikipedia.org/wiki/Simple_Linux_Utility_for_Resource_Management SLURM]: &#039;&#039;&#039;S&#039;&#039;&#039;imple &#039;&#039;&#039;L&#039;&#039;&#039;inux &#039;&#039;&#039;U&#039;&#039;&#039;tility for &#039;&#039;&#039;R&#039;&#039;&#039;esource &#039;&#039;&#039;M&#039;&#039;&#039;anagement.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Queues and defaults ==&lt;br /&gt;
&lt;br /&gt;
=== Quality of Service ===&lt;br /&gt;
When submitting a job, you may optionally assign a different Quality of Service to it. You can do this with:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --qos=std&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
By default, jobs will use std, the standard quality.&lt;br /&gt;
&lt;br /&gt;
Optionally, you may elect to reduce the priority of your jobs to low. This comes with a limit of how long each job can be (8h) to prevent the cluster from being locked up entirely with low priority jobs.&lt;br /&gt;
&lt;br /&gt;
The high quality provides a higher priority to jobs (20) than std (10), or low (1). It is naturally more expensive.&lt;br /&gt;
&lt;br /&gt;
The highest priority goes to jobs in interactive quality (100), but you may not submit many jobs or many large jobs as this quality. This is exclusively for the use of immediate running jobs, ones that are going to have hands-on users behind them.&lt;br /&gt;
&lt;br /&gt;
Jobs may be restarted and rescheduled if a job with higher priority needs cluster resources, but as of right now, this is not occurring.&lt;br /&gt;
&lt;br /&gt;
=== Queues ===&lt;br /&gt;
The cluster consists of multiple partitions of nodes that you can submit to. The primary one is &#039;main&#039;. There are other partitions as needed - current plans include &#039;gpu&#039;.&lt;br /&gt;
&lt;br /&gt;
You can see the partitions available with `sinfo`:&lt;br /&gt;
&lt;br /&gt;
=== Defaults ===&lt;br /&gt;
The default partition is &#039;main&#039;. This will work for most jobs.&lt;br /&gt;
&lt;br /&gt;
The default qos is &#039;std&#039;.&lt;br /&gt;
&lt;br /&gt;
The default cpu count is 1.&lt;br /&gt;
&lt;br /&gt;
The default run time for a job is &#039;&#039;&#039;1 hour&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
The default memory limit is &#039;&#039;&#039;100MB per node&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Submitting jobs: sbatch ==&lt;br /&gt;
&lt;br /&gt;
=== Example ===&lt;br /&gt;
Consider this simple python3 script that should calculate Pi to 1 million digits:&lt;br /&gt;
&amp;lt;source lang=&#039;python&#039;&amp;gt;&lt;br /&gt;
from decimal import *&lt;br /&gt;
D=Decimal&lt;br /&gt;
getcontext().prec=10000000&lt;br /&gt;
p=sum(D(1)/16**k*(D(4)/(8*k+1)-D(2)/(8*k+4)-D(1)/(8*k+5)-D(1)/(8*k+6))for k in range(411))&lt;br /&gt;
print(str(p)[:10000002])&lt;br /&gt;
&amp;lt;/source&amp;gt; &lt;br /&gt;
&lt;br /&gt;
=== Loading modules ===&lt;br /&gt;
In order for this script to run, the first thing that is needed is that Python3, which is not the default Python version on the cluster, is load into your environment. Availability of (different versions of) software can be checked by the following command:&lt;br /&gt;
  module avail&lt;br /&gt;
&lt;br /&gt;
In the list you should note that python3 is indeed available to be loaded, which then can be loaded with the following command:&lt;br /&gt;
  module load python/3.3.3&lt;br /&gt;
&lt;br /&gt;
=== Batch script ===&lt;br /&gt;
[[Creating_sbatch_script | Main Article: Creating a sbatch script]]&lt;br /&gt;
&lt;br /&gt;
The following shell/slurm script can then be used to schedule the job using the sbatch command:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --comment=773320000&lt;br /&gt;
#SBATCH --time=1200&lt;br /&gt;
#SBATCH --mem=2048&lt;br /&gt;
#SBATCH --ntasks=1&lt;br /&gt;
#SBATCH --output=output_%j.txt&lt;br /&gt;
#SBATCH --error=error_output_%j.txt&lt;br /&gt;
#SBATCH --job-name=calc_pi.py&lt;br /&gt;
#SBATCH --mail-type=ALL&lt;br /&gt;
#SBATCH --mail-user=email@org.nl&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
time python3 calc_pi.py&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Submitting ===&lt;br /&gt;
The script, assuming it was named &#039;run_calc_pi.sh&#039;, can then be posted using the following command:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
sbatch run_calc_pi.sh&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Submitting multiple jobs (simple) ===&lt;br /&gt;
Assuming there are 10 job scripts, name runscript_1.sh through runscript_10.sh, all these scripts can be submitted using the following line of shell code:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;for i in `seq 1 10`; do echo $i; sbatch runscript_$i.sh;done&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Submitting multiple jobs (complex) ===&lt;br /&gt;
Lets&#039;s say you have three job scripts that depend on each other:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;job_1.sh #A simple initialisation script&amp;lt;/source&amp;gt;&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;job_2.sh #An array task&amp;lt;/source&amp;gt;&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;job_3.sh #Some finishing script, single run, after everything previous has finished&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can create a script to simultaneously submit each job with a dependency on each other:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;#!/bin/bash&lt;br /&gt;
JOB1=$(sbatch job_1.sh| rev | cut -d &#039; &#039; -f 1 | rev) #Get me the last space-separated element&lt;br /&gt;
&lt;br /&gt;
if ! [ &amp;quot;z$JOB1&amp;quot; == &amp;quot;z&amp;quot; ] ; then&lt;br /&gt;
  echo &amp;quot;First job submitted as jobid $JOB1&amp;quot;&lt;br /&gt;
  JOB2=$(sbatch --dependency=afterany:$JOB1 job_2.sh| rev | cut -d &#039; &#039; -f 1 | rev)&lt;br /&gt;
&lt;br /&gt;
  if ! [ &amp;quot;z$JOB2&amp;quot; == &amp;quot;z&amp;quot; ] ; then&lt;br /&gt;
  echo &amp;quot;Second job submitted as jobid $JOB2, following $JOB1&amp;quot;&lt;br /&gt;
  JOB3=$(sbatch --dependency=afterany:$JOB2 job_3.sh| rev | cut -d &#039; &#039; -f 1 | rev)&lt;br /&gt;
&lt;br /&gt;
  if ! [ &amp;quot;z$JOB3&amp;quot; == &amp;quot;z&amp;quot; ] ; then&lt;br /&gt;
  echo &amp;quot;Third job submitted as jobid $JOB3, following after every element of $JOB2&amp;quot;&lt;br /&gt;
&lt;br /&gt;
  fi&lt;br /&gt;
 fi&lt;br /&gt;
fi&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This will ensure that the subsequent jobs occur after any finishing of the former (even if they failed).&lt;br /&gt;
&lt;br /&gt;
Please see [https://slurm.schedmd.com/sbatch.html#OPT_dependency the sbatch documentation] for other options available to you. Note that aftercorr makes a subsequent array jobs array elements start after the correspondingly numbered ones from the previous job.&lt;br /&gt;
&lt;br /&gt;
=== Submitting array jobs ===&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --array=0-10%4&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
SLURM allows you to submit multiple jobs using the same template. Further information about this can be found [[Array_jobs|here]].&lt;br /&gt;
&lt;br /&gt;
=== Using /tmp ===&lt;br /&gt;
There is a local disk of ~300G that can be used to temporarily stage some of your workload attached to each node. This is free to use, but please remember to clean up your data after usage.&lt;br /&gt;
&lt;br /&gt;
In order to be sure that you&#039;re able to use space in /tmp, you can add&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --tmp=&amp;lt;required size&amp;gt;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
To your sbatch script. This will prevent your job from being run on nodes where there is no free space, or it&#039;s aimed to be used by another job at the same time.&lt;br /&gt;
&lt;br /&gt;
=== Using GPU ===&lt;br /&gt;
There are two GPU nodes, in order to run a job that uses GPU on one of these nodes, you can add &lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --gres=gpu:&amp;lt;num gpus&amp;gt;&lt;br /&gt;
#SBATCH --partition=GPU&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
To your sbatch script. Without this parameter, your job won&#039;t run on one of these nodes.&lt;br /&gt;
&lt;br /&gt;
== Monitoring submitted jobs ==&lt;br /&gt;
Once a job is submitted, the status can be monitored using the &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; command. The &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; command has a number of parameters for monitoring specific properties of the jobs such as time limit.&lt;br /&gt;
&lt;br /&gt;
=== Generic monitoring of all running jobs ===&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
  squeue&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You should then get a list of jobs that are running at that time on the cluster, for the example on how to submit using the &#039;sbatch&#039; command, it may look like so:&lt;br /&gt;
    JOBID PARTITION     NAME     USER  ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
   3396      ABGC BOV-WUR- megen002   R      27:26      1 node004&lt;br /&gt;
   3397      ABGC BOV-WUR- megen002   R      27:26      1 node005&lt;br /&gt;
   3398      ABGC BOV-WUR- megen002   R      27:26      1 node006&lt;br /&gt;
   3399      ABGC BOV-WUR- megen002   R      27:26      1 node007&lt;br /&gt;
   3400      ABGC BOV-WUR- megen002   R      27:26      1 node008&lt;br /&gt;
   3401      ABGC BOV-WUR- megen002   R      27:26      1 node009&lt;br /&gt;
   3385  research BOV-WUR- megen002   R      44:38      1 node049&lt;br /&gt;
   3386  research BOV-WUR- megen002   R      44:38      1 node050&lt;br /&gt;
   3387  research BOV-WUR- megen002   R      44:38      1 node051&lt;br /&gt;
   3388  research BOV-WUR- megen002   R      44:38      1 node052&lt;br /&gt;
   3389  research BOV-WUR- megen002   R      44:38      1 node053&lt;br /&gt;
   3390  research BOV-WUR- megen002   R      44:38      1 node054&lt;br /&gt;
   3391  research BOV-WUR- megen002   R      44:38      3 node[049-051]&lt;br /&gt;
   3392  research BOV-WUR- megen002   R      44:38      3 node[052-054]&lt;br /&gt;
   3393  research BOV-WUR- megen002   R      44:38      1 node001&lt;br /&gt;
   3394  research BOV-WUR- megen002   R      44:38      1 node002&lt;br /&gt;
   3395  research BOV-WUR- megen002   R      44:38      1 node003&lt;br /&gt;
&lt;br /&gt;
=== Monitoring time limit set for a specific job ===&lt;br /&gt;
The default time limit is set at one hour. Estimated run times need to be specified when running jobs. To see what the time limit is that is set for a certain job, this can be done using the &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; command.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
squeue -l -j 3532&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Information similar to the following should appear:&lt;br /&gt;
  Fri Nov 29 15:41:00 2013&lt;br /&gt;
   JOBID PARTITION     NAME     USER    STATE       TIME TIMELIMIT  NODES NODELIST(REASON)&lt;br /&gt;
   3532      ABGC BOV-WUR- megen002  RUNNING    2:47:03 3-08:00:00      1 node054&lt;br /&gt;
&lt;br /&gt;
=== Query a specific active job: scontrol ===&lt;br /&gt;
Show all the details of a currently active job, so not a completed job.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
login ~]$ scontrol show jobid 4241&lt;br /&gt;
JobId=4241 Name=WB20F06&lt;br /&gt;
   UserId=megen002(16795409) GroupId=domain users(16777729)&lt;br /&gt;
   Priority=1 Account=(null) QOS=normal&lt;br /&gt;
   JobState=RUNNING Reason=None Dependency=(null)&lt;br /&gt;
   Requeue=1 Restarts=0 BatchFlag=1 ExitCode=0:0&lt;br /&gt;
   RunTime=02:55:25 TimeLimit=3-08:00:00 TimeMin=N/A&lt;br /&gt;
   SubmitTime=2013-12-09T13:37:29 EligibleTime=2013-12-09T13:37:29&lt;br /&gt;
   StartTime=2013-12-09T13:37:29 EndTime=2013-12-12T21:37:29&lt;br /&gt;
   PreemptTime=None SuspendTime=None SecsPreSuspend=0&lt;br /&gt;
   Partition=research AllocNode:Sid=login0:21799&lt;br /&gt;
   ReqNodeList=(null) ExcNodeList=(null)&lt;br /&gt;
   NodeList=node023&lt;br /&gt;
   BatchHost=node023&lt;br /&gt;
   NumNodes=1 NumCPUs=4 CPUs/Task=1 ReqS:C:T=*:*:*&lt;br /&gt;
   MinCPUsNode=1 MinMemoryNode=0 MinTmpDiskNode=0&lt;br /&gt;
   Features=(null) Gres=(null) Reservation=(null)&lt;br /&gt;
   Shared=OK Contiguous=0 Licenses=(null) Network=(null)&lt;br /&gt;
   Command=/lustre/scratch/WUR/ABGC/...&lt;br /&gt;
   WorkDir=/lustre/scratch/WUR/ABGC/...&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Check on a pending job ===&lt;br /&gt;
A submitted job could result in a pending state when there are not enough resources available to this job.&lt;br /&gt;
In this example I sumbit a job, check the status and after finding out is it &#039;&#039;&#039;pending&#039;&#039;&#039; I&#039;ll check when is probably will start.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
[@login jobs]$ sbatch hpl_student.job&lt;br /&gt;
 Submitted batch job 740338&lt;br /&gt;
&lt;br /&gt;
[@login jobs]$ squeue -l -j 740338&lt;br /&gt;
 Fri Feb 21 15:32:31 2014&lt;br /&gt;
  JOBID PARTITION     NAME     USER    STATE       TIME TIMELIMIT  NODES NODELIST(REASON)&lt;br /&gt;
 740338 ABGC_Stud HPLstude bohme999  PENDING       0:00 1-00:00:00      1 (ReqNodeNotAvail)&lt;br /&gt;
&lt;br /&gt;
[@login jobs]$ squeue --start -j 740338&lt;br /&gt;
  JOBID PARTITION     NAME     USER  ST           START_TIME  NODES NODELIST(REASON)&lt;br /&gt;
 740338 ABGC_Stud HPLstude bohme999  PD  2014-02-22T15:31:48      1 (ReqNodeNotAvail)&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
So it seems this job will problably start the next day, but&#039;s thats no guarantee it will start indeed.&lt;br /&gt;
&lt;br /&gt;
== Removing jobs from a list: scancel ==&lt;br /&gt;
If for some reason you want to delete a job that is either in the queue or already running, you can remove it using the &#039;scancel&#039; command. The &#039;scancel&#039; command takes the jobid as a parameter. For the example above, this would be done using the following code:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
scancel 3401&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Allocating resources interactively: sinteractive ==&lt;br /&gt;
sinteractive is a tiny wrapper on srun to create interactive jobs quickly and easily. It allows you to get a shell on one of the nodes, with similar limits as you would do for a normal job. To use it, simply run:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
sinteractive -c &amp;lt;num_cpus&amp;gt; --mem &amp;lt;amount_mem&amp;gt; --time &amp;lt;minutes&amp;gt; -p &amp;lt;partition&amp;gt;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
You will then be presented with a new shell prompt on one of the compute nodes (run &#039;hostname&#039; to see which!). From here, you can test out code in an interactive fashion as needs be.&lt;br /&gt;
&lt;br /&gt;
Be advised though - not filling in the above fields will get you a shell with 1 CPU and 100Mb of RAM for 1 hour. This is useful for quick testing, however.&lt;br /&gt;
&lt;br /&gt;
=== sinteractive source ===&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
srun &amp;quot;$@&amp;quot; -I60 -N 1 -n 1 --pty bash -i&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== interactive Slurm - using salloc ===&lt;br /&gt;
If you don&#039;t want your shell to be transported but want a new remote shell, do:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
salloc -p ABGC_Low $SHELL&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Now your shell will stay on the login node, but you can do:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
srun &amp;lt;command&amp;gt; &amp;amp;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
to submit tasks inside this allocation!&lt;br /&gt;
&lt;br /&gt;
Be aware that the default time limit of salloc is 1 hour. If you intend to run jobs for longer than this, you need to adjust the time limit accordingly. See: https://computing.llnl.gov/linux/slurm/salloc.html&lt;br /&gt;
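&lt;br /&gt;
For example, to request a four-hour allocation up front (the partition name here is just the one used above; pick whichever partition applies to you):&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
salloc -p ABGC_Low --time=4:00:00 $SHELL&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;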
&lt;br /&gt;
== Get overview of past and current jobs: sacct ==&lt;br /&gt;
To do some accounting on past and present jobs, and to see whether they ran to completion, you can do:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
sacct&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
This should provide information similar to the following:&lt;br /&gt;
&lt;br /&gt;
         JobID    JobName  Partition    Account  AllocCPUS      State ExitCode &lt;br /&gt;
  ------------ ---------- ---------- ---------- ---------- ---------- -------- &lt;br /&gt;
  3385         BOV-WUR-58   research                    12  COMPLETED      0:0 &lt;br /&gt;
  3385.batch        batch                                1  COMPLETED      0:0 &lt;br /&gt;
  3386         BOV-WUR-59   research                    12 CANCELLED+      0:0 &lt;br /&gt;
  3386.batch        batch                                1  CANCELLED     0:15 &lt;br /&gt;
  3528         BOV-WUR-59       ABGC                    16    RUNNING      0:0 &lt;br /&gt;
  3529         BOV-WUR-60       ABGC                    16    RUNNING      0:0&lt;br /&gt;
&lt;br /&gt;
Or in more detail for a specific job:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
sacct --format=jobid,jobname,comment,partition,ntasks,alloccpus,elapsed,state,exitcode -j 4220&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
This should provide information about job id 4220:&lt;br /&gt;
&lt;br /&gt;
       JobID    JobName    Comment   Partition   NTasks  AllocCPUS    Elapsed      State ExitCode &lt;br /&gt;
  ------------ ---------- ---------- ---------- -------- ---------- ---------- ---------- -------- &lt;br /&gt;
  4220         PreProces+              research                   3   00:30:52  COMPLETED      0:0 &lt;br /&gt;
  4220.batch        batch                              1          1   00:30:52  COMPLETED      0:0&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Job Status Codes&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Typically your job will be in either the RUNNING or PENDING state. However, here is a breakdown of all the states that your job could be in.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!Code!!State!!Description&lt;br /&gt;
|-&lt;br /&gt;
|CA	||CANCELLED||	Job was explicitly cancelled by the user or system administrator. The job may or may not have been initiated.&lt;br /&gt;
|-&lt;br /&gt;
|CD||	COMPLETED||	Job has terminated all processes on all nodes.&lt;br /&gt;
|-&lt;br /&gt;
|CF||	CONFIGURING||	Job has been allocated resources, but is waiting for them to become ready for use (e.g. booting).&lt;br /&gt;
|-&lt;br /&gt;
|CG||	COMPLETING||	Job is in the process of completing. Some processes on some nodes may still be active.&lt;br /&gt;
|-&lt;br /&gt;
|F||	FAILED||	Job terminated with non-zero exit code or other failure condition.&lt;br /&gt;
|-&lt;br /&gt;
|NF||	NODE_FAIL||	Job terminated due to failure of one or more allocated nodes.&lt;br /&gt;
|-&lt;br /&gt;
|PD||	PENDING||	Job is awaiting resource allocation.&lt;br /&gt;
|-&lt;br /&gt;
|R||	RUNNING||	Job currently has an allocation.&lt;br /&gt;
|-&lt;br /&gt;
|S||	SUSPENDED||	Job has an allocation, but execution has been suspended.&lt;br /&gt;
|-&lt;br /&gt;
|TO||	TIMEOUT||	Job terminated upon reaching its time limit.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Running MPI jobs on Anunna ==&lt;br /&gt;
&lt;br /&gt;
[[MPI_on_B4F_cluster | Main article: MPI on Anunna]]&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
* [[Tariffs | Costs associated with resource usage]]&lt;br /&gt;
* [[B4F_cluster | Anunna]]&lt;br /&gt;
* [[BCM_on_B4F_cluster | BCM on Anunna]]&lt;br /&gt;
* [[SLURM_Compare | SLURM compared to other common schedulers]]&lt;br /&gt;
* [[Setting_up_Python_virtualenv | Setting up and using a virtual environment for Python3 ]]&lt;br /&gt;
&lt;br /&gt;
== External links ==&lt;br /&gt;
* [http://slurm.schedmd.com Slurm official documentation]&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Simple_Linux_Utility_for_Resource_Management Slurm on Wikipedia]&lt;br /&gt;
* [http://www.youtube.com/watch?v=axWffyrk3aY Slurm Tutorial on Youtube]&lt;/div&gt;</summary>
		<author><name>Dawes001</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=Using_Slurm&amp;diff=2053</id>
		<title>Using Slurm</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=Using_Slurm&amp;diff=2053"/>
		<updated>2019-10-01T08:06:55Z</updated>

		<summary type="html">&lt;p&gt;Dawes001: /* Using GPU */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The resource allocation / scheduling software on Anunna is [http://en.wikipedia.org/wiki/Simple_Linux_Utility_for_Resource_Management SLURM]: &#039;&#039;&#039;S&#039;&#039;&#039;imple &#039;&#039;&#039;L&#039;&#039;&#039;inux &#039;&#039;&#039;U&#039;&#039;&#039;tility for &#039;&#039;&#039;R&#039;&#039;&#039;esource &#039;&#039;&#039;M&#039;&#039;&#039;anagement.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Queues and defaults ==&lt;br /&gt;
&lt;br /&gt;
=== Quality of Service ===&lt;br /&gt;
When submitting a job, you may optionally assign a different Quality of Service to it. You can do this with:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --qos=std&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
By default, jobs will use std, the standard quality.&lt;br /&gt;
&lt;br /&gt;
Optionally, you may elect to reduce the priority of your jobs to low. This comes with a limit on how long each job can run (8h), to prevent the cluster from being locked up entirely by low priority jobs.&lt;br /&gt;
&lt;br /&gt;
The high quality gives jobs a higher priority (20) than std (10) or low (1). It is naturally more expensive.&lt;br /&gt;
&lt;br /&gt;
The highest priority goes to jobs in the interactive quality (100), but you may not submit many jobs, or many large jobs, at this quality. It is exclusively for jobs that run immediately and have hands-on users behind them.&lt;br /&gt;
&lt;br /&gt;
Jobs may be restarted and rescheduled if a job with higher priority needs cluster resources, but as of right now this does not occur.&lt;br /&gt;
&lt;br /&gt;
=== Queues ===&lt;br /&gt;
The cluster consists of multiple partitions of nodes that you can submit to. The primary one is &#039;main&#039;. There are other partitions as needed - current plans include &#039;gpu&#039;.&lt;br /&gt;
&lt;br /&gt;
You can see the partitions available with `sinfo`:&lt;br /&gt;
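&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
sinfo&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
The output lists each partition with its availability, time limit, node count and state. As an illustrative sketch (the node counts and ranges here are made up, not the actual cluster layout):&lt;br /&gt;
  PARTITION AVAIL  TIMELIMIT  NODES  STATE NODELIST&lt;br /&gt;
  main*        up 30-00:00:0     40   idle node[001-040]&lt;br /&gt;
  gpu          up 30-00:00:0      2   idle node[041-042]&lt;br /&gt;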
&lt;br /&gt;
=== Defaults ===&lt;br /&gt;
The default partition is &#039;main&#039;. This will work for most jobs.&lt;br /&gt;
&lt;br /&gt;
The default qos is &#039;std&#039;.&lt;br /&gt;
&lt;br /&gt;
The default cpu count is 1.&lt;br /&gt;
&lt;br /&gt;
The default run time for a job is &#039;&#039;&#039;1 hour&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
The default memory limit is &#039;&#039;&#039;100MB per node&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Submitting jobs: sbatch ==&lt;br /&gt;
&lt;br /&gt;
=== Example ===&lt;br /&gt;
Consider this simple python3 script that calculates digits of Pi:&lt;br /&gt;
&amp;lt;source lang=&#039;python&#039;&amp;gt;&lt;br /&gt;
from decimal import *&lt;br /&gt;
D=Decimal&lt;br /&gt;
getcontext().prec=10000000&lt;br /&gt;
p=sum(D(1)/16**k*(D(4)/(8*k+1)-D(2)/(8*k+4)-D(1)/(8*k+5)-D(1)/(8*k+6))for k in range(411))&lt;br /&gt;
print(str(p)[:10000002])&lt;br /&gt;
&amp;lt;/source&amp;gt; &lt;br /&gt;
&lt;br /&gt;
=== Loading modules ===&lt;br /&gt;
In order for this script to run, Python3, which is not the default Python version on the cluster, first needs to be loaded into your environment. The availability of (different versions of) software can be checked with the following command:&lt;br /&gt;
  module avail&lt;br /&gt;
&lt;br /&gt;
In the list you should see that python3 is indeed available, and it can then be loaded with the following command:&lt;br /&gt;
  module load python/3.3.3&lt;br /&gt;
&lt;br /&gt;
=== Batch script ===&lt;br /&gt;
[[Creating_sbatch_script | Main Article: Creating a sbatch script]]&lt;br /&gt;
&lt;br /&gt;
The following shell/slurm script can then be used to schedule the job using the sbatch command:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --comment=773320000&lt;br /&gt;
#SBATCH --time=1200&lt;br /&gt;
#SBATCH --mem=2048&lt;br /&gt;
#SBATCH --ntasks=1&lt;br /&gt;
#SBATCH --output=output_%j.txt&lt;br /&gt;
#SBATCH --error=error_output_%j.txt&lt;br /&gt;
#SBATCH --job-name=calc_pi.py&lt;br /&gt;
#SBATCH --mail-type=ALL&lt;br /&gt;
#SBATCH --mail-user=email@org.nl&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
time python3 calc_pi.py&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Submitting ===&lt;br /&gt;
The script, assuming it was named &#039;run_calc_pi.sh&#039;, can then be submitted using the following command:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
sbatch run_calc_pi.sh&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Submitting multiple jobs (simple) ===&lt;br /&gt;
Assuming there are 10 job scripts, named runscript_1.sh through runscript_10.sh, all these scripts can be submitted using the following line of shell code:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;for i in `seq 1 10`; do echo $i; sbatch runscript_$i.sh;done&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Submitting multiple jobs (complex) ===&lt;br /&gt;
Let&#039;s say you have three job scripts that depend on each other:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;job_1.sh #A simple initialisation script&amp;lt;/source&amp;gt;&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;job_2.sh #An array task&amp;lt;/source&amp;gt;&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;job_3.sh #Some finishing script, single run, after everything previous has finished&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can create a script to simultaneously submit each job with a dependency on each other:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;#!/bin/bash&lt;br /&gt;
JOB1=$(sbatch job_1.sh| rev | cut -d &#039; &#039; -f 1 | rev) #Get me the last space-separated element&lt;br /&gt;
&lt;br /&gt;
if ! [ &amp;quot;z$JOB1&amp;quot; == &amp;quot;z&amp;quot; ] ; then&lt;br /&gt;
  echo &amp;quot;First job submitted as jobid $JOB1&amp;quot;&lt;br /&gt;
  JOB2=$(sbatch --dependency=afterany:$JOB1 job_2.sh| rev | cut -d &#039; &#039; -f 1 | rev)&lt;br /&gt;
&lt;br /&gt;
  if ! [ &amp;quot;z$JOB2&amp;quot; == &amp;quot;z&amp;quot; ] ; then&lt;br /&gt;
  echo &amp;quot;Second job submitted as jobid $JOB2, following $JOB1&amp;quot;&lt;br /&gt;
  JOB3=$(sbatch --dependency=afterany:$JOB2 job_3.sh| rev | cut -d &#039; &#039; -f 1 | rev)&lt;br /&gt;
&lt;br /&gt;
  if ! [ &amp;quot;z$JOB3&amp;quot; == &amp;quot;z&amp;quot; ] ; then&lt;br /&gt;
  echo &amp;quot;Third job submitted as jobid $JOB3, following after every element of $JOB2&amp;quot;&lt;br /&gt;
&lt;br /&gt;
  fi&lt;br /&gt;
 fi&lt;br /&gt;
fi&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This ensures that each subsequent job starts only after the previous one has finished in any way (even if it failed).&lt;br /&gt;
&lt;br /&gt;
Please see [https://slurm.schedmd.com/sbatch.html#OPT_dependency the sbatch documentation] for other options available to you. Note that aftercorr makes each array element of a subsequent array job start after the correspondingly numbered element of the previous array job has completed.&lt;br /&gt;
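&lt;br /&gt;
As an aside, sbatch also has a --parsable flag that prints just the job id (possibly followed by a semicolon and the cluster name), which avoids the rev/cut/rev trick above. A minimal sketch of the same chain:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
JOB1=$(sbatch --parsable job_1.sh)&lt;br /&gt;
JOB2=$(sbatch --parsable --dependency=afterany:$JOB1 job_2.sh)&lt;br /&gt;
JOB3=$(sbatch --parsable --dependency=afterany:$JOB2 job_3.sh)&lt;br /&gt;
echo &amp;quot;Submitted chain: $JOB1 -&amp;gt; $JOB2 -&amp;gt; $JOB3&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;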
&lt;br /&gt;
=== Submitting array jobs ===&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --array=0-10%4&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
SLURM allows you to submit multiple jobs using the same template. Further information about this can be found [[Array_jobs|here]].&lt;br /&gt;
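&lt;br /&gt;
As a minimal sketch (the program and input file names here are made up): each array element receives its own index in $SLURM_ARRAY_TASK_ID, and %A/%a in filename patterns expand to the array job id and the element index:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --array=0-10%4&lt;br /&gt;
#SBATCH --output=array_%A_%a.txt&lt;br /&gt;
&lt;br /&gt;
# Process one input file per array element&lt;br /&gt;
./myprogram input_${SLURM_ARRAY_TASK_ID}.txt&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;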
&lt;br /&gt;
=== Using /tmp ===&lt;br /&gt;
Each node has a local disk of ~300G attached that can be used to temporarily stage some of your workload. This is free to use, but please remember to clean up your data after usage.&lt;br /&gt;
&lt;br /&gt;
In order to be sure that you&#039;re able to use space in /tmp, you can add&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --tmp=&amp;lt;required size&amp;gt;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
to your sbatch script. This prevents your job from being run on nodes where there is not enough free space, or where the space is already claimed by another job running at the same time.&lt;br /&gt;
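&lt;br /&gt;
A minimal sketch of staging data through /tmp (the file and program names are made up; the principle is: copy in, work locally, copy results back, clean up):&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --tmp=50G&lt;br /&gt;
&lt;br /&gt;
WORKDIR=/tmp/$SLURM_JOB_ID&lt;br /&gt;
mkdir -p $WORKDIR&lt;br /&gt;
cp input.dat $WORKDIR/&lt;br /&gt;
./myprogram $WORKDIR/input.dat &amp;gt; results.txt&lt;br /&gt;
rm -rf $WORKDIR   # clean up after usage, as requested above&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;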
&lt;br /&gt;
=== Using GPU ===&lt;br /&gt;
There are two GPU nodes. In order to run a job that uses a GPU on one of these nodes, you can add&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --partition=GPU&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
to your sbatch script. Without this parameter, your job won&#039;t run on one of these nodes.&lt;br /&gt;
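&lt;br /&gt;
A minimal sketch of a GPU job script (the module and program names are made up; combining the partition with a --gres request, as described elsewhere on this wiki, makes sure a GPU device is actually allocated to you):&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --partition=GPU&lt;br /&gt;
#SBATCH --gres=gpu:1&lt;br /&gt;
#SBATCH --time=60&lt;br /&gt;
#SBATCH --mem=4096&lt;br /&gt;
&lt;br /&gt;
module load cuda   # hypothetical module name; check &#039;module avail&#039;&lt;br /&gt;
./my_gpu_program&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;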
&lt;br /&gt;
== Monitoring submitted jobs ==&lt;br /&gt;
Once a job is submitted, the status can be monitored using the &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; command. The &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; command has a number of parameters for monitoring specific properties of the jobs such as time limit.&lt;br /&gt;
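&lt;br /&gt;
For example, to restrict the listing to your own jobs:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
squeue -u $USER&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;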
&lt;br /&gt;
=== Generic monitoring of all running jobs ===&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
  squeue&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You should then get a list of jobs that are running at that time on the cluster. For the example submitted with the &#039;sbatch&#039; command above, it may look like this:&lt;br /&gt;
    JOBID PARTITION     NAME     USER  ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
   3396      ABGC BOV-WUR- megen002   R      27:26      1 node004&lt;br /&gt;
   3397      ABGC BOV-WUR- megen002   R      27:26      1 node005&lt;br /&gt;
   3398      ABGC BOV-WUR- megen002   R      27:26      1 node006&lt;br /&gt;
   3399      ABGC BOV-WUR- megen002   R      27:26      1 node007&lt;br /&gt;
   3400      ABGC BOV-WUR- megen002   R      27:26      1 node008&lt;br /&gt;
   3401      ABGC BOV-WUR- megen002   R      27:26      1 node009&lt;br /&gt;
   3385  research BOV-WUR- megen002   R      44:38      1 node049&lt;br /&gt;
   3386  research BOV-WUR- megen002   R      44:38      1 node050&lt;br /&gt;
   3387  research BOV-WUR- megen002   R      44:38      1 node051&lt;br /&gt;
   3388  research BOV-WUR- megen002   R      44:38      1 node052&lt;br /&gt;
   3389  research BOV-WUR- megen002   R      44:38      1 node053&lt;br /&gt;
   3390  research BOV-WUR- megen002   R      44:38      1 node054&lt;br /&gt;
   3391  research BOV-WUR- megen002   R      44:38      3 node[049-051]&lt;br /&gt;
   3392  research BOV-WUR- megen002   R      44:38      3 node[052-054]&lt;br /&gt;
   3393  research BOV-WUR- megen002   R      44:38      1 node001&lt;br /&gt;
   3394  research BOV-WUR- megen002   R      44:38      1 node002&lt;br /&gt;
   3395  research BOV-WUR- megen002   R      44:38      1 node003&lt;br /&gt;
&lt;br /&gt;
=== Monitoring time limit set for a specific job ===&lt;br /&gt;
The default time limit is set at one hour, so estimated run times need to be specified when submitting jobs. The time limit set for a certain job can be checked using the &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; command.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
squeue -l -j 3532&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Information similar to the following should appear:&lt;br /&gt;
  Fri Nov 29 15:41:00 2013&lt;br /&gt;
   JOBID PARTITION     NAME     USER    STATE       TIME TIMELIMIT  NODES NODELIST(REASON)&lt;br /&gt;
   3532      ABGC BOV-WUR- megen002  RUNNING    2:47:03 3-08:00:00      1 node054&lt;br /&gt;
&lt;br /&gt;
=== Query a specific active job: scontrol ===&lt;br /&gt;
Show all the details of a currently active (so not a completed) job.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
login ~]$ scontrol show jobid 4241&lt;br /&gt;
JobId=4241 Name=WB20F06&lt;br /&gt;
   UserId=megen002(16795409) GroupId=domain users(16777729)&lt;br /&gt;
   Priority=1 Account=(null) QOS=normal&lt;br /&gt;
   JobState=RUNNING Reason=None Dependency=(null)&lt;br /&gt;
   Requeue=1 Restarts=0 BatchFlag=1 ExitCode=0:0&lt;br /&gt;
   RunTime=02:55:25 TimeLimit=3-08:00:00 TimeMin=N/A&lt;br /&gt;
   SubmitTime=2013-12-09T13:37:29 EligibleTime=2013-12-09T13:37:29&lt;br /&gt;
   StartTime=2013-12-09T13:37:29 EndTime=2013-12-12T21:37:29&lt;br /&gt;
   PreemptTime=None SuspendTime=None SecsPreSuspend=0&lt;br /&gt;
   Partition=research AllocNode:Sid=login0:21799&lt;br /&gt;
   ReqNodeList=(null) ExcNodeList=(null)&lt;br /&gt;
   NodeList=node023&lt;br /&gt;
   BatchHost=node023&lt;br /&gt;
   NumNodes=1 NumCPUs=4 CPUs/Task=1 ReqS:C:T=*:*:*&lt;br /&gt;
   MinCPUsNode=1 MinMemoryNode=0 MinTmpDiskNode=0&lt;br /&gt;
   Features=(null) Gres=(null) Reservation=(null)&lt;br /&gt;
   Shared=OK Contiguous=0 Licenses=(null) Network=(null)&lt;br /&gt;
   Command=/lustre/scratch/WUR/ABGC/...&lt;br /&gt;
   WorkDir=/lustre/scratch/WUR/ABGC/...&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Check on a pending job ===&lt;br /&gt;
A submitted job can end up in a pending state when there are not enough resources available for it.&lt;br /&gt;
In this example I submit a job, check its status, and after finding out it is &#039;&#039;&#039;pending&#039;&#039;&#039;, I check when it will probably start.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
[@login jobs]$ sbatch hpl_student.job&lt;br /&gt;
 Submitted batch job 740338&lt;br /&gt;
&lt;br /&gt;
[@login jobs]$ squeue -l -j 740338&lt;br /&gt;
 Fri Feb 21 15:32:31 2014&lt;br /&gt;
  JOBID PARTITION     NAME     USER    STATE       TIME TIMELIMIT  NODES NODELIST(REASON)&lt;br /&gt;
 740338 ABGC_Stud HPLstude bohme999  PENDING       0:00 1-00:00:00      1 (ReqNodeNotAvail)&lt;br /&gt;
&lt;br /&gt;
[@login jobs]$ squeue --start -j 740338&lt;br /&gt;
  JOBID PARTITION     NAME     USER  ST           START_TIME  NODES NODELIST(REASON)&lt;br /&gt;
 740338 ABGC_Stud HPLstude bohme999  PD  2014-02-22T15:31:48      1 (ReqNodeNotAvail)&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
So it seems this job will probably start the next day, but that&#039;s no guarantee it will actually start then.&lt;br /&gt;
&lt;br /&gt;
== Removing jobs from a list: scancel ==&lt;br /&gt;
If for some reason you want to delete a job that is either in the queue or already running, you can remove it using the &#039;scancel&#039; command. The &#039;scancel&#039; command takes the jobid as a parameter. For the example above, this would be done using the following code:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
scancel 3401&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Allocating resources interactively: sinteractive ==&lt;br /&gt;
sinteractive is a tiny wrapper around srun to create interactive jobs quickly and easily. It gives you a shell on one of the nodes, with limits similar to those of a normal job. To use it, simply run:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
sinteractive -c &amp;lt;num_cpus&amp;gt; --mem &amp;lt;amount_mem&amp;gt; --time &amp;lt;minutes&amp;gt; -p &amp;lt;partition&amp;gt;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
You will then be presented with a new shell prompt on one of the compute nodes (run &#039;hostname&#039; to see which!). From here, you can test out code interactively as need be.&lt;br /&gt;
&lt;br /&gt;
Be advised, though: not filling in the above fields will get you a shell with 1 CPU and 100 MB of RAM for 1 hour. That is still useful for quick testing.&lt;br /&gt;
&lt;br /&gt;
=== sinteractive source ===&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
srun &amp;quot;$@&amp;quot; -I60 -N 1 -n 1 --pty bash -i&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== interactive Slurm - using salloc ===&lt;br /&gt;
If you don&#039;t want your shell to be moved to a compute node, but instead want a new shell holding an allocation, do:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
salloc -p ABGC_Low $SHELL&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Now your shell will stay on the login node, but you can do:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
srun &amp;lt;command&amp;gt; &amp;amp;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
to submit tasks inside this allocation!&lt;br /&gt;
&lt;br /&gt;
Be aware that the default time limit of salloc is 1 hour. If you intend to run jobs for longer than this, you need to adjust the time limit accordingly. See: https://computing.llnl.gov/linux/slurm/salloc.html&lt;br /&gt;
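&lt;br /&gt;
For example, to request a four-hour allocation up front (the partition name here is just the one used above; pick whichever partition applies to you):&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
salloc -p ABGC_Low --time=4:00:00 $SHELL&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;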
&lt;br /&gt;
== Get overview of past and current jobs: sacct ==&lt;br /&gt;
To do some accounting on past and present jobs, and to see whether they ran to completion, you can do:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
sacct&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
This should provide information similar to the following:&lt;br /&gt;
&lt;br /&gt;
         JobID    JobName  Partition    Account  AllocCPUS      State ExitCode &lt;br /&gt;
  ------------ ---------- ---------- ---------- ---------- ---------- -------- &lt;br /&gt;
  3385         BOV-WUR-58   research                    12  COMPLETED      0:0 &lt;br /&gt;
  3385.batch        batch                                1  COMPLETED      0:0 &lt;br /&gt;
  3386         BOV-WUR-59   research                    12 CANCELLED+      0:0 &lt;br /&gt;
  3386.batch        batch                                1  CANCELLED     0:15 &lt;br /&gt;
  3528         BOV-WUR-59       ABGC                    16    RUNNING      0:0 &lt;br /&gt;
  3529         BOV-WUR-60       ABGC                    16    RUNNING      0:0&lt;br /&gt;
&lt;br /&gt;
Or in more detail for a specific job:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
sacct --format=jobid,jobname,comment,partition,ntasks,alloccpus,elapsed,state,exitcode -j 4220&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
This should provide information about job id 4220:&lt;br /&gt;
&lt;br /&gt;
       JobID    JobName    Comment   Partition   NTasks  AllocCPUS    Elapsed      State ExitCode &lt;br /&gt;
  ------------ ---------- ---------- ---------- -------- ---------- ---------- ---------- -------- &lt;br /&gt;
  4220         PreProces+              research                   3   00:30:52  COMPLETED      0:0 &lt;br /&gt;
  4220.batch        batch                              1          1   00:30:52  COMPLETED      0:0&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Job Status Codes&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Typically your job will be in either the RUNNING or PENDING state. However, here is a breakdown of all the states that your job could be in.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!Code!!State!!Description&lt;br /&gt;
|-&lt;br /&gt;
|CA	||CANCELLED||	Job was explicitly cancelled by the user or system administrator. The job may or may not have been initiated.&lt;br /&gt;
|-&lt;br /&gt;
|CD||	COMPLETED||	Job has terminated all processes on all nodes.&lt;br /&gt;
|-&lt;br /&gt;
|CF||	CONFIGURING||	Job has been allocated resources, but is waiting for them to become ready for use (e.g. booting).&lt;br /&gt;
|-&lt;br /&gt;
|CG||	COMPLETING||	Job is in the process of completing. Some processes on some nodes may still be active.&lt;br /&gt;
|-&lt;br /&gt;
|F||	FAILED||	Job terminated with non-zero exit code or other failure condition.&lt;br /&gt;
|-&lt;br /&gt;
|NF||	NODE_FAIL||	Job terminated due to failure of one or more allocated nodes.&lt;br /&gt;
|-&lt;br /&gt;
|PD||	PENDING||	Job is awaiting resource allocation.&lt;br /&gt;
|-&lt;br /&gt;
|R||	RUNNING||	Job currently has an allocation.&lt;br /&gt;
|-&lt;br /&gt;
|S||	SUSPENDED||	Job has an allocation, but execution has been suspended.&lt;br /&gt;
|-&lt;br /&gt;
|TO||	TIMEOUT||	Job terminated upon reaching its time limit.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Running MPI jobs on Anunna ==&lt;br /&gt;
&lt;br /&gt;
[[MPI_on_B4F_cluster | Main article: MPI on Anunna]]&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
* [[Tariffs | Costs associated with resource usage]]&lt;br /&gt;
* [[B4F_cluster | Anunna]]&lt;br /&gt;
* [[BCM_on_B4F_cluster | BCM on Anunna]]&lt;br /&gt;
* [[SLURM_Compare | SLURM compared to other common schedulers]]&lt;br /&gt;
* [[Setting_up_Python_virtualenv | Setting up and using a virtual environment for Python3 ]]&lt;br /&gt;
&lt;br /&gt;
== External links ==&lt;br /&gt;
* [http://slurm.schedmd.com Slurm official documentation]&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Simple_Linux_Utility_for_Resource_Management Slurm on Wikipedia]&lt;br /&gt;
* [http://www.youtube.com/watch?v=axWffyrk3aY Slurm Tutorial on Youtube]&lt;/div&gt;</summary>
		<author><name>Dawes001</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=Ssh_without_password&amp;diff=2052</id>
		<title>Ssh without password</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=Ssh_without_password&amp;diff=2052"/>
		<updated>2019-07-26T16:29:57Z</updated>

		<summary type="html">&lt;p&gt;Dawes001: /* Step 1: create a public key and copy to remote computer */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Secure shell (ssh) protocols can be configure to work without protocols. This is particularly helpful for machines that are used often. &lt;br /&gt;
&lt;br /&gt;
== Configuring ssh without password from a POSIX-compliant terminal ==&lt;br /&gt;
&lt;br /&gt;
=== Step 1: create a public key and copy to remote computer ===&lt;br /&gt;
* Log into a local Linux or MacOSX computer&lt;br /&gt;
* Type the following to generate the ssh key:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
ssh-keygen&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
* Accept the default key location by pressing &amp;lt;code&amp;gt;Enter&amp;lt;/code&amp;gt;.&lt;br /&gt;
* Secure your authentication keys by restricting the permissions on your home directory, .ssh directory, and authentication files:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
chmod go-wx $HOME&lt;br /&gt;
chmod 700 $HOME/.ssh&lt;br /&gt;
chmod 600 $HOME/.ssh/*&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
* Type the following to copy the key to the remote server (this will prompt for a password).&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
ssh-copy-id remote_username@remote_host&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Configuring ssh without password for Anunna ==&lt;br /&gt;
&lt;br /&gt;
* Create a public key as in Step 1 of the previous section and copy it to Anunna. Note that a public/private key pair needs to be made only once per machine.&lt;br /&gt;
* Similar to step 2 of the previous section, add the public key to the &amp;lt;code&amp;gt;$HOME/.ssh/authorized_keys2&amp;lt;/code&amp;gt; file. There is already a &amp;lt;code&amp;gt;$HOME/.ssh/authorized_keys&amp;lt;/code&amp;gt; present. You may append the key to this file as an alternative, but take care not to remove content that is already there. The cluster is configured so that passwordless communication with all other nodes is the default.&lt;br /&gt;
&lt;br /&gt;
== Configuring ssh without password using PuTTY ==&lt;br /&gt;
Use Pageant: http://the.earth.li/~sgtatham/putty/0.58/htmldoc/Chapter9.html to generate local keys. You&#039;ll want to have a copy of the public key in plaintext available.&lt;br /&gt;
&lt;br /&gt;
Make sure to paste that plaintext string into ~/.ssh/authorized_keys as one single line. Chmod the file to 600 (so it shows -rw------- in ls -l) and the .ssh directory to 700 (drwx------).&lt;br /&gt;
&lt;br /&gt;
Now PuTTY will log in passwordlessly whenever Pageant is running.&lt;br /&gt;
&lt;br /&gt;
Finally, get Pageant to load on startup: http://blog.shvetsov.com/2010/03/making-pageant-automatically-load-keys.html&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
* [[log_in_to_Anunna | Logging into cluster using ssh and file transfer]]&lt;br /&gt;
&lt;br /&gt;
== External Links ==&lt;/div&gt;</summary>
		<author><name>Dawes001</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=Ssh_without_password&amp;diff=2051</id>
		<title>Ssh without password</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=Ssh_without_password&amp;diff=2051"/>
		<updated>2019-07-26T16:29:12Z</updated>

		<summary type="html">&lt;p&gt;Dawes001: /* Step 1: create a public key and copy to remote computer */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Secure shell (ssh) protocols can be configure to work without protocols. This is particularly helpful for machines that are used often. &lt;br /&gt;
&lt;br /&gt;
== Configuring ssh without password from a POSIX-compliant terminal ==&lt;br /&gt;
&lt;br /&gt;
=== Step 1: create a public key and copy to remote computer ===&lt;br /&gt;
* Log into a local Linux or MacOSX computer&lt;br /&gt;
* Type the following to generate the ssh key:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
ssh-keygen&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
* Accept the default key location by pressing &amp;lt;code&amp;gt;Enter&amp;lt;/code&amp;gt;.&lt;br /&gt;
* Secure your authentication keys by restricting the permissions on your home directory, .ssh directory, and authentication files:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
chmod go-w $HOME&lt;br /&gt;
chmod 700 $HOME/.ssh&lt;br /&gt;
chmod go-rwx $HOME/.ssh/*&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
* Type the following to copy the key to the remote server (this will prompt for a password).&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
ssh-copy-id remote_username@remote_host&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Configuring ssh without password for Anunna ==&lt;br /&gt;
&lt;br /&gt;
* Create a public key as in Step 1 of the previous section and copy it to Anunna. Note that a public/private key pair needs to be made only once per machine.&lt;br /&gt;
* Similar to step 2 of the previous section, add the public key to the &amp;lt;code&amp;gt;$HOME/.ssh/authorized_keys2&amp;lt;/code&amp;gt; file. There is already a &amp;lt;code&amp;gt;$HOME/.ssh/authorized_keys&amp;lt;/code&amp;gt; present. You may append the key to this file as an alternative, but take care not to remove content that is already there. The cluster is configured so that passwordless communication with all other nodes is the default.&lt;br /&gt;
&lt;br /&gt;
== Configuring ssh without password using PuTTY ==&lt;br /&gt;
Use Pageant: http://the.earth.li/~sgtatham/putty/0.58/htmldoc/Chapter9.html to generate local keys. You&#039;ll want to have a copy of the public key in plaintext available.&lt;br /&gt;
&lt;br /&gt;
Make sure to paste that plaintext string into ~/.ssh/authorized_keys as one single line. Chmod the file to 600 (so it shows -rw------- in ls -l) and the .ssh directory to 700 (drwx------).&lt;br /&gt;
&lt;br /&gt;
Now PuTTY will log in passwordlessly whenever Pageant is running.&lt;br /&gt;
&lt;br /&gt;
Finally, get Pageant to load on startup: http://blog.shvetsov.com/2010/03/making-pageant-automatically-load-keys.html&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
* [[log_in_to_Anunna | Logging into cluster using ssh and file transfer]]&lt;br /&gt;
&lt;br /&gt;
== External Links ==&lt;/div&gt;</summary>
		<author><name>Dawes001</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=Ssh_without_password&amp;diff=2050</id>
		<title>Ssh without password</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=Ssh_without_password&amp;diff=2050"/>
		<updated>2019-07-26T16:28:37Z</updated>

		<summary type="html">&lt;p&gt;Dawes001: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Secure shell (ssh) protocols can be configure to work without protocols. This is particularly helpful for machines that are used often. &lt;br /&gt;
&lt;br /&gt;
== Configuring ssh without password from a POSIX-compliant terminal ==&lt;br /&gt;
&lt;br /&gt;
=== Step 1: create a public key and copy to remote computer ===&lt;br /&gt;
* Log into a local Linux or MacOSX computer&lt;br /&gt;
* Type the following to generate the ssh key:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
ssh-keygen&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
* Accept the default key location by pressing &amp;lt;code&amp;gt;Enter&amp;lt;/code&amp;gt;.&lt;br /&gt;
* Secure your authentication keys by restricting the permissions on your home directory, .ssh directory, and authentication files:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
chmod go-w $HOME&lt;br /&gt;
chmod 700 $HOME/.ssh&lt;br /&gt;
chmod go-rwx $HOME/.ssh/*&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
* Type the following to copy the key to the remote server (this will prompt for a password).&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
cd ~/.ssh&lt;br /&gt;
ssh-copy-id remote_username@remote_host&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Configuring ssh without password for Anunna ==&lt;br /&gt;
&lt;br /&gt;
* Create a public key as in Step 1 of the previous section and copy it to Anunna. Note that a public/private key pair needs to be made only once per machine.&lt;br /&gt;
* Similar to step 2 of the previous section, add the public key to the &amp;lt;code&amp;gt;$HOME/.ssh/authorized_keys2&amp;lt;/code&amp;gt; file. There is already a &amp;lt;code&amp;gt;$HOME/.ssh/authorized_keys&amp;lt;/code&amp;gt; present. You may append the key to this file as an alternative, but take care not to remove content that is already there. The cluster is configured so that passwordless communication with all other nodes is the default.&lt;br /&gt;
&lt;br /&gt;
== Configuring ssh without password using PuTTY ==&lt;br /&gt;
Use Pageant: http://the.earth.li/~sgtatham/putty/0.58/htmldoc/Chapter9.html to generate local keys. You&#039;ll want to have a copy of the public key in plaintext available.&lt;br /&gt;
&lt;br /&gt;
Make sure to paste that plaintext string into ~/.ssh/authorized_keys as one single line. Chmod the file to 600 (so it shows -rw------- in ls -l) and the .ssh directory to 700 (drwx------).&lt;br /&gt;
&lt;br /&gt;
Now PuTTY will log in passwordlessly whenever Pageant is running.&lt;br /&gt;
&lt;br /&gt;
Finally, get Pageant to load on startup: http://blog.shvetsov.com/2010/03/making-pageant-automatically-load-keys.html&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
* [[log_in_to_Anunna | Logging into cluster using ssh and file transfer]]&lt;br /&gt;
&lt;br /&gt;
== External Links ==&lt;/div&gt;</summary>
		<author><name>Dawes001</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=Creating_sbatch_script&amp;diff=2049</id>
		<title>Creating sbatch script</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=Creating_sbatch_script&amp;diff=2049"/>
		<updated>2019-07-15T15:05:38Z</updated>

		<summary type="html">&lt;p&gt;Dawes001: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&lt;br /&gt;
== A skeleton Slurm script ==&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#-----------------------------Mail address-----------------------------&lt;br /&gt;
#SBATCH --mail-user=&lt;br /&gt;
#SBATCH --mail-type=ALL&lt;br /&gt;
#-----------------------------Output files-----------------------------&lt;br /&gt;
#SBATCH --output=output_%j.txt&lt;br /&gt;
#SBATCH --error=error_output_%j.txt&lt;br /&gt;
#-----------------------------Other information------------------------&lt;br /&gt;
#SBATCH --comment=&lt;br /&gt;
#SBATCH --qos=&lt;br /&gt;
#-----------------------------Required resources-----------------------&lt;br /&gt;
#SBATCH --time=0-0:0:0&lt;br /&gt;
#SBATCH --ntasks=&lt;br /&gt;
#SBATCH --cpus-per-task=&lt;br /&gt;
#SBATCH --mem-per-cpu=&lt;br /&gt;
&lt;br /&gt;
#-----------------------------Environment, Operations and Job steps----&lt;br /&gt;
#load modules&lt;br /&gt;
&lt;br /&gt;
#export variables&lt;br /&gt;
&lt;br /&gt;
#your job&lt;br /&gt;
&lt;br /&gt;
              &lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Explanation of used SBATCH parameters==&lt;br /&gt;
===partition for resource allocation===&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --partition=ABGC_Std&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Request a specific partition for the resource allocation. It is preferred to use your organization&#039;s partition.&lt;br /&gt;
&lt;br /&gt;
=== Adding accounting information or project number ===&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --comment=773320000&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Charge resources used by this job to the specified account. The comment is an arbitrary string; it may be changed after job submission using the &amp;lt;tt&amp;gt;scontrol&amp;lt;/tt&amp;gt; command. For WUR users a project number or KTP number is advisable.&lt;br /&gt;
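&lt;br /&gt;
For example, to change the comment of a job that has already been submitted (the job id here is made up):&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
scontrol update JobId=1234 Comment=773320000&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;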
&lt;br /&gt;
===time limit===&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --time=1200&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
A time limit of zero requests that no time limit be imposed. Acceptable time formats include &amp;quot;minutes&amp;quot;, &amp;quot;minutes:seconds&amp;quot;, &amp;quot;hours:minutes:seconds&amp;quot;, &amp;quot;days-hours&amp;quot;, &amp;quot;days-hours:minutes&amp;quot; and &amp;quot;days-hours:minutes:seconds&amp;quot;. So in this example the job will run for a maximum of 1200 minutes.&lt;br /&gt;
&lt;br /&gt;
===memory limit===&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --mem=2048&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
SLURM imposes a memory limit on each job. By default it is deliberately small: 100 MB per node. If your job uses more than that, you’ll get an error that your job &#039;Exceeded job memory limit&#039;. To set a larger limit, add to your job submission:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --mem X&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where X is the maximum amount of memory your job will use per node, in MB. The larger your working data set, the larger this needs to be, but the smaller the number the easier it is for the scheduler to find a place to run your job. To determine an appropriate value, start relatively large (job slots on average have about 4000 MB per core, but that’s much larger than needed for most jobs) and then use sacct to look at how much your job is actually using or used:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
$ sacct -o MaxRSS -j JOBID&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
where JOBID is the one you’re interested in. The number is in KB, so divide by 1024 to get a rough idea of what to use with --mem (set it to something a little larger than that, since you’re defining a hard upper limit). If your job completed long in the past you may have to tell sacct to look further back in time by adding a start time with -S YYYY-MM-DD. Note that for parallel jobs spanning multiple nodes, this is the maximum memory used on any one node; if you’re not setting an even distribution of tasks per node (e.g. with --ntasks-per-node), the same job could have very different values when run at different times.&lt;br /&gt;
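&lt;br /&gt;
As a worked example with made-up numbers: if sacct reports a MaxRSS of 1560000K, that is 1560000 / 1024 ≈ 1523 MB actually used, so a request with some margin would be:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --mem=1800&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;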
&lt;br /&gt;
===number of tasks===&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --ntasks=1&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
sbatch does not launch tasks, it requests an allocation of resources and submits a batch script. This option advises the SLURM controller that job steps run within the allocation will launch a maximum of number tasks and to provide for sufficient resources. The default is one task per node, but note that the --cpus-per-task option will change this default.&lt;br /&gt;
&lt;br /&gt;
When requesting multiple tasks, you may or may not want the job to be partitioned among multiple nodes. You can specify the number of nodes using the &amp;lt;code&amp;gt;-N&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;--nodes&amp;lt;/code&amp;gt; flag. If you provide only one number, it will be both the minimum and the maximum. For instance:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
This should force your job to be scheduled to a single node.&lt;br /&gt;
&lt;br /&gt;
Because the cluster has a hybrid configuration, i.e. normal and fat nodes, it may be prudent to schedule your job specifically for one or the other node type, depending for instance on memory requirements. This can be done using the &amp;lt;code&amp;gt;-C&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;--constraint&amp;lt;/code&amp;gt; flag.&lt;br /&gt;
&lt;br /&gt;
===constraints: selecting by feature===&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --constraint=4gpercpu&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
The HPC nodes have features associated with them, such as Intel CPUs, or the amount of memory per node. If you know that your job requires a specific architecture or memory size, you can elect to constrain your job to only these features.&lt;br /&gt;
&lt;br /&gt;
The example above will result in jobs being scheduled to the compute nodes with 4GB of memory per CPU. By using &amp;lt;code&amp;gt;12gpercpu&amp;lt;/code&amp;gt; as option the job will specifically be scheduled to one of the larger nodes with 12GB per CPU. &lt;br /&gt;
&lt;br /&gt;
All features can be seen using:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
scontrol show nodes | grep ActiveFeatures | sort | uniq&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===requesting specific resources===&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --gres=gpu:1&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
In order to be able to use specific hardware resources, you need to request a Generic Resource. Once you do this, one of the resources will be allocated to your job when they are available. In the above example, one GPU is requested for use.&lt;br /&gt;
&lt;br /&gt;
===output (stderr,stdout) directed to file===&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --output=output_%j.txt&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Instruct SLURM to connect the batch script&#039;s standard output directly to the file name specified in the &amp;quot;filename pattern&amp;quot;. By default both standard output and standard error are directed to a file of the name &amp;quot;slurm-%j.out&amp;quot;, where the &amp;quot;%j&amp;quot; is replaced with the job allocation number. See the --input option for filename specification options.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --error=error_output_%j.txt&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Instruct SLURM to connect the batch script&#039;s standard error directly to the file name specified in the &amp;quot;filename pattern&amp;quot;. By default both standard output and standard error are directed to a file of the name &amp;quot;slurm-%j.out&amp;quot;, where the &amp;quot;%j&amp;quot; is replaced with the job allocation number. See the --input option for filename specification options.&lt;br /&gt;
&lt;br /&gt;
===adding a job name===&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --job-name=calc_pi.py&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Specify a name for the job allocation. The specified name will appear along with the job id number when querying running jobs on the system. The default is the name of the batch script, or just &amp;quot;sbatch&amp;quot; if the script is read on sbatch&#039;s standard input.&lt;br /&gt;
&lt;br /&gt;
===receiving mailed updates===&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --mail-type=ALL&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Notify user by email when certain event types occur. Valid type values are BEGIN, END, FAIL, REQUEUE, and ALL (any state change). The user to be notified is indicated with --mail-user.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --mail-user=yourname001@wur.nl&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Email address to use.&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
* [[Anunna | Anunna]]&lt;br /&gt;
* [[Using_Slurm#Batch_script | Submitting jobs to Slurm]]&lt;br /&gt;
* [[Array_jobs|Array job hints]]&lt;/div&gt;</summary>
		<author><name>Dawes001</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=Tariffs&amp;diff=2048</id>
		<title>Tariffs</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=Tariffs&amp;diff=2048"/>
		<updated>2019-07-15T15:04:20Z</updated>

		<summary type="html">&lt;p&gt;Dawes001: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Computing: Calculations (cores)==&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Queue&lt;br /&gt;
!CPU core hour&lt;br /&gt;
!GB memory hour&lt;br /&gt;
|-&lt;br /&gt;
|Standard queue&lt;br /&gt;
|€ 0.0150&lt;br /&gt;
|€ 0.0015&lt;br /&gt;
|-&lt;br /&gt;
|High priority queue&lt;br /&gt;
|€ 0.0200&lt;br /&gt;
|€ 0.0020&lt;br /&gt;
|-&lt;br /&gt;
|Low priority queue&lt;br /&gt;
|€ 0.0100&lt;br /&gt;
|€ 0.0010&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Computing: GPU Use==&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Tariff per device per hour (gpu/hour)&lt;br /&gt;
|-&lt;br /&gt;
|€ 0.3000&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Storage ==&lt;br /&gt;
Tariffs per year per TB&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Lustre Nobackup&lt;br /&gt;
!Lustre Backup&lt;br /&gt;
!Home-dir&lt;br /&gt;
!Archive&lt;br /&gt;
|-&lt;br /&gt;
|€ 150&lt;br /&gt;
|€ 200&lt;br /&gt;
|€ 200&lt;br /&gt;
|€ 100&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Reservations ==&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Tariff per node per day (node/day)&lt;br /&gt;
|-&lt;br /&gt;
|€ 30&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Notes==&lt;br /&gt;
&lt;br /&gt;
If you are a member of a group with a commitment, then these costs get deducted from that commitment. Typically we are fairly lax with enforcing limits - only once you get to around 150% of your commitment will we consider taking action (mainly coming to discuss things).&lt;br /&gt;
&lt;br /&gt;
== Example ==&lt;br /&gt;
&lt;br /&gt;
You are running a job that needs 4 cores and 32G of RAM, and that runs for 90 minutes in the std quality. To run this, you over-request resources slightly and execute it as a job that requests 4 CPUs, 40G of RAM and a time limit of 3 hours. Your job terminates early, so you are charged for the actual 1.5 hours of use. Thus, your costs are:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
4 * 0.015 * 1.5 = 0.09 EUR for the CPU&lt;br /&gt;
&lt;br /&gt;
40 * 0.0015 * 1.5 = 0.09 EUR for the memory&lt;br /&gt;
&lt;br /&gt;
Total: 0.18 EUR&lt;/div&gt;</summary>
		<author><name>Dawes001</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=Spark&amp;diff=2047</id>
		<title>Spark</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=Spark&amp;diff=2047"/>
		<updated>2019-07-15T15:03:35Z</updated>

		<summary type="html">&lt;p&gt;Dawes001: /* SPARK on HPC */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Apache Spark is a means of distributing compute resources across multiple worker machines. It is the successor to Hadoop, and allows for a wider distribution of code to be executed on the clustered resources. The only requirement for Spark to be able to operate is that each worker must be able to reach each other via TCP, thus it allows for compute to be executed on very simple resources, if the code itself can be translated into the MapReduce paradigm.&lt;br /&gt;
&lt;br /&gt;
== SPARK on HPC ==&lt;br /&gt;
In order to create a personal SPARK cluster, you must first request resources on the HPC. Use this example submission script to initialise your cluster:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;#!/bin/bash&lt;br /&gt;
#SBATCH --time=&amp;lt;length&amp;gt;&lt;br /&gt;
#SBATCH --mem-per-cpu=4000&lt;br /&gt;
#SBATCH --nodes=&amp;lt;number of nodes&amp;gt;&lt;br /&gt;
#SBATCH --tasks-per-node=&amp;lt;number of workers per node&amp;gt;&lt;br /&gt;
#SBATCH --job-name=&amp;quot;my spark cluster&amp;quot;&lt;br /&gt;
#SBATCH --qos=QOS&lt;br /&gt;
&lt;br /&gt;
module load python&lt;br /&gt;
module load spark&lt;br /&gt;
&lt;br /&gt;
source $SPARK_HOME/wur/start-spark&lt;br /&gt;
&lt;br /&gt;
tail -f /dev/null&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This will spawn a new cluster of your desired dimensions once resources are available. This spark module has been written to output its logs to your home directory, at:&lt;br /&gt;
&lt;br /&gt;
/home/WUR/yourid/.spark/&amp;lt;jobid&amp;gt;/&lt;br /&gt;
&lt;br /&gt;
In this folder you will find the raw logs of the master and all worker threads. By default the master will consume 1 GB of memory from the first process, and so a single 4 GB &#039;cluster&#039; will be provided with one 3 GB worker. You can adjust the CPU/memory use by adjusting the parameters in your batch script.&lt;br /&gt;
&lt;br /&gt;
Within this log folder you will find two unique files: master and master-console. master will always contain the URI of the current spark cluster master access point, and master-console the URL of its web console. &lt;br /&gt;
&lt;br /&gt;
To access the web console, the easiest solution is to use links:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;links http://myspark:8081&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This will nicely render the page for you in the console. Press Ctrl-R to reload the page and q to quit.&lt;br /&gt;
&lt;br /&gt;
There are several caveats to remember with this:&lt;br /&gt;
&lt;br /&gt;
* The cluster exists (and consumes resources) until you cancel it with scancel &amp;lt;jobid&amp;gt;&lt;br /&gt;
* There is no security at all - any user of the HPC can access both these at any point if they know the port and host.&lt;br /&gt;
&lt;br /&gt;
== Instant SPARK ==&lt;br /&gt;
&lt;br /&gt;
You can also spin up clusters solely to execute scripts. Simply replace the last line from the example above:&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;tail -f /dev/null&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
with&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;spark-submit myscript.py&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
And after the script has executed, the cluster will automatically terminate.&lt;br /&gt;
&lt;br /&gt;
== SPARK in Jupyter ==&lt;br /&gt;
&lt;br /&gt;
There is a kernel available for using Spark from Jupyter. All this does (for now) is to set up the correct path to the python version and the spark binaries for you. In order to set up your Context, your first cell for each notebook should be:&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;import pyspark&lt;br /&gt;
conf = (pyspark.SparkConf()&lt;br /&gt;
         .setMaster(&amp;quot;spark://mysparkcluster:7077&amp;quot;)&lt;br /&gt;
         .setAppName(&amp;quot;MyName&amp;quot;))&lt;br /&gt;
&lt;br /&gt;
sc = pyspark.SparkContext(conf=conf)&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Use the cluster master name from the master file in your job output, as above. Subsequent cells will then have sc defined. Run this cell only once; attempting to reconnect will throw an error. The application will run until the kernel is terminated, and will prevent other applications from being executed, so you may wish to manually terminate your kernel from the top bar in Jupyter to free resources.&lt;/div&gt;</summary>
		<author><name>Dawes001</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=MPI_on_B4F_cluster&amp;diff=2046</id>
		<title>MPI on B4F cluster</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=MPI_on_B4F_cluster&amp;diff=2046"/>
		<updated>2019-07-15T15:02:51Z</updated>

		<summary type="html">&lt;p&gt;Dawes001: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== A simple &#039;Hello World&#039; example ==&lt;br /&gt;
Consider the following simple MPI version, in C, of the &#039;Hello World&#039; example:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;cpp&#039;&amp;gt;&lt;br /&gt;
#include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
#include &amp;lt;mpi.h&amp;gt;&lt;br /&gt;
int main(int argc, char ** argv) {&lt;br /&gt;
  int size,rank,namelen;&lt;br /&gt;
  char processor_name[MPI_MAX_PROCESSOR_NAME];&lt;br /&gt;
  MPI_Init(&amp;amp;argc, &amp;amp;argv);&lt;br /&gt;
  MPI_Comm_rank(MPI_COMM_WORLD,&amp;amp;rank);&lt;br /&gt;
  MPI_Comm_size(MPI_COMM_WORLD,&amp;amp;size);&lt;br /&gt;
  MPI_Get_processor_name(processor_name, &amp;amp;namelen);&lt;br /&gt;
  printf(&amp;quot;Hello MPI! Process %d of %d on %s\n&amp;quot;, rank, size, processor_name);&lt;br /&gt;
  MPI_Finalize();&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Before compiling, make sure that the required compilers are available.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
module list&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To avoid conflicts between libraries, the safest way is purging all modules:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
module purge&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then load both the gcc and openmpi modules. If modules were purged, slurm needs to be reloaded too.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
module load gcc/4.8.1 openmpi/gcc/64/1.6.5 slurm/2.5.7&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Compile the &amp;lt;code&amp;gt;hello_mpi.c&amp;lt;/code&amp;gt; code.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
mpicc hello_mpi.c -o test_hello_world&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If desired, a list of libraries compiled into the executable can be viewed:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
ldd test_hello_world&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
  linux-vdso.so.1 =&amp;gt;  (0x00002aaaaaacb000)&lt;br /&gt;
  libmpi.so.1 =&amp;gt; /cm/shared/apps/openmpi/gcc/64/1.6.5/lib64/libmpi.so.1 (0x00002aaaaaccd000)&lt;br /&gt;
  libdl.so.2 =&amp;gt; /lib64/libdl.so.2 (0x00002aaaab080000)&lt;br /&gt;
  libm.so.6 =&amp;gt; /lib64/libm.so.6 (0x00002aaaab284000)&lt;br /&gt;
  libnuma.so.1 =&amp;gt; /usr/lib64/libnuma.so.1 (0x0000003e29400000)&lt;br /&gt;
  librt.so.1 =&amp;gt; /lib64/librt.so.1 (0x00002aaaab509000)&lt;br /&gt;
  libnsl.so.1 =&amp;gt; /lib64/libnsl.so.1 (0x00002aaaab711000)&lt;br /&gt;
  libutil.so.1 =&amp;gt; /lib64/libutil.so.1 (0x00002aaaab92a000)&lt;br /&gt;
  libpthread.so.0 =&amp;gt; /lib64/libpthread.so.0 (0x00002aaaabb2e000)&lt;br /&gt;
  libc.so.6 =&amp;gt; /lib64/libc.so.6 (0x00002aaaabd4b000)&lt;br /&gt;
  /lib64/ld-linux-x86-64.so.2 (0x00002aaaaaaab000)&lt;br /&gt;
&lt;br /&gt;
Running the executable on two nodes, with four tasks per node, can be done like this:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
srun --nodes=2 --ntasks-per-node=4 --mpi=openmpi ./test_hello_world&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This will result in output like the following (the order of the lines may vary between runs):&lt;br /&gt;
  Hello MPI! Process 4 of 8 on node011&lt;br /&gt;
  Hello MPI! Process 1 of 8 on node010&lt;br /&gt;
  Hello MPI! Process 7 of 8 on node011&lt;br /&gt;
  Hello MPI! Process 6 of 8 on node011&lt;br /&gt;
  Hello MPI! Process 5 of 8 on node011&lt;br /&gt;
  Hello MPI! Process 2 of 8 on node010&lt;br /&gt;
  Hello MPI! Process 0 of 8 on node010&lt;br /&gt;
  Hello MPI! Process 3 of 8 on node010&lt;br /&gt;
&lt;br /&gt;
== A mvapich2 sbatch example ==&lt;br /&gt;
An MPI job using mvapich2 on 32 cores, running on the normal compute nodes and using the fast InfiniBand interconnect for RDMA traffic.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
$ module load mvapich2/gcc&lt;br /&gt;
$ vim batch.sh&lt;br /&gt;
 #!/bin/sh&lt;br /&gt;
 #SBATCH --comment=projectx&lt;br /&gt;
 #SBATCH --time=30-0&lt;br /&gt;
 #SBATCH  -n 32&lt;br /&gt;
 #SBATCH --constraint=4gpercpu&lt;br /&gt;
 #SBATCH --output=output_%j.txt&lt;br /&gt;
 #SBATCH --error=error_output_%j.txt&lt;br /&gt;
 #SBATCH --job-name=MPItest&lt;br /&gt;
 #SBATCH --mail-type=ALL&lt;br /&gt;
 #SBATCH --mail-user=user@wur.nl&lt;br /&gt;
 &lt;br /&gt;
 echo &amp;quot;Starting at `date`&amp;quot;&lt;br /&gt;
 echo &amp;quot;Running on hosts: $SLURM_NODELIST&amp;quot;&lt;br /&gt;
 echo &amp;quot;Running on $SLURM_NNODES nodes.&amp;quot;&lt;br /&gt;
 echo &amp;quot;Running on $SLURM_NPROCS processors.&amp;quot;&lt;br /&gt;
 echo &amp;quot;Current working directory is `pwd`&amp;quot;&lt;br /&gt;
 # echo &amp;quot;Env var MPIR_CVAR_NEMESIS_TCP_NETWORK_IFACE is $MPIR_CVAR_NEMESIS_TCP_NETWORK_IFACE&amp;quot;&lt;br /&gt;
 # export MPIR_CVAR_NEMESIS_TCP_NETWORK_IFACE=ib0&lt;br /&gt;
&lt;br /&gt;
 mpirun -iface ib0 -np 32 ./tmf_par.out -NX 480 -NY 240 -alpha  11 -chi 1.3 -psi_b 5e-2  -beta  0.0 -zeta 3.5 -kT 0.10 &lt;br /&gt;
&lt;br /&gt;
 echo &amp;quot;Program finished with exit code $? at: `date`&amp;quot;&lt;br /&gt;
&lt;br /&gt;
$ sbatch batch.sh&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;/div&gt;</summary>
		<author><name>Dawes001</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=MPI_on_B4F_cluster&amp;diff=2045</id>
		<title>MPI on B4F cluster</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=MPI_on_B4F_cluster&amp;diff=2045"/>
		<updated>2019-07-15T15:02:32Z</updated>

		<summary type="html">&lt;p&gt;Dawes001: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== A simple &#039;Hello World&#039; example ==&lt;br /&gt;
Consider the following simple MPI version, in C, of the &#039;Hello World&#039; example:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;cpp&#039;&amp;gt;&lt;br /&gt;
#include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
#include &amp;lt;mpi.h&amp;gt;&lt;br /&gt;
int main(int argc, char ** argv) {&lt;br /&gt;
  int size,rank,namelen;&lt;br /&gt;
  char processor_name[MPI_MAX_PROCESSOR_NAME];&lt;br /&gt;
  MPI_Init(&amp;amp;argc, &amp;amp;argv);&lt;br /&gt;
  MPI_Comm_rank(MPI_COMM_WORLD,&amp;amp;rank);&lt;br /&gt;
  MPI_Comm_size(MPI_COMM_WORLD,&amp;amp;size);&lt;br /&gt;
  MPI_Get_processor_name(processor_name, &amp;amp;namelen);&lt;br /&gt;
  printf(&amp;quot;Hello MPI! Process %d of %d on %s\n&amp;quot;, rank, size, processor_name);&lt;br /&gt;
  MPI_Finalize();&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Before compiling, make sure that the required compilers are available.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
module list&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To avoid conflicts between libraries, the safest approach is to purge all loaded modules first:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
module purge&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then load both the gcc and openmpi modules. If modules were purged, slurm needs to be reloaded too.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
module load gcc/4.8.1 openmpi/gcc/64/1.6.5 slurm/2.5.7&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Compile the &amp;lt;code&amp;gt;hello_mpi.c&amp;lt;/code&amp;gt; code.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
mpicc hello_mpi.c -o test_hello_world&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If desired, the shared libraries the executable is linked against can be listed:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
ldd test_hello_world&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
  linux-vdso.so.1 =&amp;gt;  (0x00002aaaaaacb000)&lt;br /&gt;
  libmpi.so.1 =&amp;gt; /cm/shared/apps/openmpi/gcc/64/1.6.5/lib64/libmpi.so.1 (0x00002aaaaaccd000)&lt;br /&gt;
  libdl.so.2 =&amp;gt; /lib64/libdl.so.2 (0x00002aaaab080000)&lt;br /&gt;
  libm.so.6 =&amp;gt; /lib64/libm.so.6 (0x00002aaaab284000)&lt;br /&gt;
  libnuma.so.1 =&amp;gt; /usr/lib64/libnuma.so.1 (0x0000003e29400000)&lt;br /&gt;
  librt.so.1 =&amp;gt; /lib64/librt.so.1 (0x00002aaaab509000)&lt;br /&gt;
  libnsl.so.1 =&amp;gt; /lib64/libnsl.so.1 (0x00002aaaab711000)&lt;br /&gt;
  libutil.so.1 =&amp;gt; /lib64/libutil.so.1 (0x00002aaaab92a000)&lt;br /&gt;
  libpthread.so.0 =&amp;gt; /lib64/libpthread.so.0 (0x00002aaaabb2e000)&lt;br /&gt;
  libc.so.6 =&amp;gt; /lib64/libc.so.6 (0x00002aaaabd4b000)&lt;br /&gt;
  /lib64/ld-linux-x86-64.so.2 (0x00002aaaaaaab000)&lt;br /&gt;
&lt;br /&gt;
Running the executable on two nodes, with four tasks per node, can be done like this:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
srun --nodes=2 --ntasks-per-node=4 --partition=ABGC --mpi=openmpi ./test_hello_world&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This will result in output like the following (the order of the lines may vary between runs):&lt;br /&gt;
  Hello MPI! Process 4 of 8 on node011&lt;br /&gt;
  Hello MPI! Process 1 of 8 on node010&lt;br /&gt;
  Hello MPI! Process 7 of 8 on node011&lt;br /&gt;
  Hello MPI! Process 6 of 8 on node011&lt;br /&gt;
  Hello MPI! Process 5 of 8 on node011&lt;br /&gt;
  Hello MPI! Process 2 of 8 on node010&lt;br /&gt;
  Hello MPI! Process 0 of 8 on node010&lt;br /&gt;
  Hello MPI! Process 3 of 8 on node010&lt;br /&gt;
&lt;br /&gt;
== A mvapich2 sbatch example ==&lt;br /&gt;
An MPI job using mvapich2 on 32 cores, running on the normal compute nodes and using the fast InfiniBand interconnect for RDMA traffic.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
$ module load mvapich2/gcc&lt;br /&gt;
$ vim batch.sh&lt;br /&gt;
 #!/bin/sh&lt;br /&gt;
 #SBATCH --comment=projectx&lt;br /&gt;
 #SBATCH --time=30-0&lt;br /&gt;
 #SBATCH  -n 32&lt;br /&gt;
 #SBATCH --constraint=4gpercpu&lt;br /&gt;
 #SBATCH --output=output_%j.txt&lt;br /&gt;
 #SBATCH --error=error_output_%j.txt&lt;br /&gt;
 #SBATCH --job-name=MPItest&lt;br /&gt;
 #SBATCH --mail-type=ALL&lt;br /&gt;
 #SBATCH --mail-user=user@wur.nl&lt;br /&gt;
 &lt;br /&gt;
 echo &amp;quot;Starting at `date`&amp;quot;&lt;br /&gt;
 echo &amp;quot;Running on hosts: $SLURM_NODELIST&amp;quot;&lt;br /&gt;
 echo &amp;quot;Running on $SLURM_NNODES nodes.&amp;quot;&lt;br /&gt;
 echo &amp;quot;Running on $SLURM_NPROCS processors.&amp;quot;&lt;br /&gt;
 echo &amp;quot;Current working directory is `pwd`&amp;quot;&lt;br /&gt;
 # echo &amp;quot;Env var MPIR_CVAR_NEMESIS_TCP_NETWORK_IFACE is $MPIR_CVAR_NEMESIS_TCP_NETWORK_IFACE&amp;quot;&lt;br /&gt;
 # export MPIR_CVAR_NEMESIS_TCP_NETWORK_IFACE=ib0&lt;br /&gt;
&lt;br /&gt;
 mpirun -iface ib0 -np 32 ./tmf_par.out -NX 480 -NY 240 -alpha  11 -chi 1.3 -psi_b 5e-2  -beta  0.0 -zeta 3.5 -kT 0.10 &lt;br /&gt;
&lt;br /&gt;
 echo &amp;quot;Program finished with exit code $? at: `date`&amp;quot;&lt;br /&gt;
&lt;br /&gt;
$ sbatch batch.sh&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;/div&gt;</summary>
		<author><name>Dawes001</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=Provean_Sus_scrofa&amp;diff=2044</id>
		<title>Provean Sus scrofa</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=Provean_Sus_scrofa&amp;diff=2044"/>
		<updated>2019-07-15T15:02:03Z</updated>

		<summary type="html">&lt;p&gt;Dawes001: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page describes the procedure for mapping all known variants (batch of first 150 pigs, wild boar re-sequencing) at the ABGC. &lt;br /&gt;
&lt;br /&gt;
== Pre-requisites ==&lt;br /&gt;
From Variant Effect Predictor output, select only protein altering variants and sort by transcript:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
cat outVEP_*.txt | awk &#039;$11~/\//&#039; | sed &#039;s/:/\t/&#039; | sort -k6 &amp;gt;prot_alt.txt&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Protein models for Sus scrofa:&lt;br /&gt;
  /lustre/nobackup/WUR/ABGC/shared/public_data_store/genomes/pig/Ensembl74/pep/Sus_scrofa.Sscrofa10.2.74.pep.all.fa&lt;br /&gt;
&lt;br /&gt;
== Automated procedure for mapping ==&lt;br /&gt;
The Provean analysis is somewhat involved because of an apparent bug in the program that causes conflicts between temporary files. This is particularly problematic when farming out thousands of individual searches (i.e. one per protein sequence) on the cluster. The cluster nodes need periodic &#039;cleaning&#039; of the leftover temporary directories.&lt;br /&gt;
&lt;br /&gt;
=== Master script to control the submission of jobs and cleaning ===&lt;br /&gt;
The following script will add 300 runs every hour. Note that it will kill remaining Provean processes and, importantly, will clean the &amp;lt;code&amp;gt;/tmp&amp;lt;/code&amp;gt; dirs of all nodes of leftover Provean-related temporary folders. This prevents the error that Provean cannot create its temporary folders.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --time=4800&lt;br /&gt;
#SBATCH --ntasks=1&lt;br /&gt;
#SBATCH --mem-per-cpu=16000&lt;br /&gt;
#SBATCH --nice=1000&lt;br /&gt;
#SBATCH --output=output_%j.txt&lt;br /&gt;
#SBATCH --error=error_output_%j.txt&lt;br /&gt;
#SBATCH --job-name=Provean&lt;br /&gt;
#cat outVEP_*.txt | awk &#039;$11~/\//&#039; | sed &#039;s/:/\t/&#039; | sort -k6 &amp;gt;prot_alt.txt&lt;br /&gt;
TELLER=100&lt;br /&gt;
echo $TELLER;&lt;br /&gt;
let TELLER+=1;&lt;br /&gt;
echo $TELLER;&lt;br /&gt;
while [ $TELLER -gt 99 ]; do&lt;br /&gt;
&lt;br /&gt;
  PROVS=`squeue | grep Provean | sed &#039;s/^ \+//&#039; | sed &#039;s/ \+/\t/&#039; | cut -f1`;&lt;br /&gt;
  for PROV in $PROVS; do scancel $PROV; done;&lt;br /&gt;
  sleep 10;&lt;br /&gt;
  for i in `seq 1 2`; do ssh fat00$i &#039;rm -rf /tmp/provean*&#039;; done;&lt;br /&gt;
  for i in `seq 10 60`; do ssh node0$i &#039;rm -rf /tmp/provean*&#039;; done;&lt;br /&gt;
  for i in `seq 1 9`; do ssh node00$i &#039;rm -rf /tmp/provean*&#039;; done;&lt;br /&gt;
  TRANS=`cat prot_alt.txt | cut -f6 | sort | uniq`;&lt;br /&gt;
  TELLER2=0;&lt;br /&gt;
  for TRAN in $TRANS; do&lt;br /&gt;
     if [ $TELLER2 -lt 300 ]; then&lt;br /&gt;
       echo &amp;quot;transcript: $TRAN&amp;quot;;&lt;br /&gt;
       echo &amp;quot;teller boven: $TELLER2&amp;quot;;&lt;br /&gt;
       PROT=`cat /lustre/nobackup/WUR/ABGC/shared/public_data_store/genomes/pig/Ensembl74/pep/Sus_scrofa.Sscrofa10.2.74.pep.all.fa | grep $TRAN | sed &#039;s/ \+/\t/g&#039; | sed &#039;s/^&amp;gt;//&#039; | cut -f1`;&lt;br /&gt;
       echo &amp;quot;protein: $PROT&amp;quot;;&lt;br /&gt;
       if [ -f $PROT.sss ];&lt;br /&gt;
        then&lt;br /&gt;
          echo &amp;quot;$PROT $TRAN already done&amp;quot;;&lt;br /&gt;
        else&lt;br /&gt;
          echo &amp;quot;will do sbatch testProvean_sub.sh $TRAN&amp;quot;;&lt;br /&gt;
          sbatch runProvean_sub.sh $TRAN;&lt;br /&gt;
          let TELLER2+=1;&lt;br /&gt;
          echo &amp;quot;teller onder: $TELLER2&amp;quot;;&lt;br /&gt;
       fi;&lt;br /&gt;
    fi;&lt;br /&gt;
  done;&lt;br /&gt;
  sleep 3600;&lt;br /&gt;
done&lt;br /&gt;
                                                             &lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== The slave script that does the actual submission ===&lt;br /&gt;
The &#039;runProvean_sub.sh&#039; script referred to in the above script consists of the following code:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --time=4800&lt;br /&gt;
#SBATCH --ntasks=1&lt;br /&gt;
#SBATCH --mem-per-cpu=16000&lt;br /&gt;
#SBATCH --nice=1000&lt;br /&gt;
#SBATCH --output=output_%j.txt&lt;br /&gt;
#SBATCH --error=error_output_%j.txt&lt;br /&gt;
#SBATCH --job-name=Provean&lt;br /&gt;
TRANS=$1&lt;br /&gt;
PROT=`cat /lustre/nobackup/WUR/ABGC/shared/public_data_store/genomes/pig/Ensembl74/pep/Sus_scrofa.Sscrofa10.2.74.pep.all.fa | grep $TRANS | sed &#039;s/ \+/\t/g&#039; | sed &#039;s/^&amp;gt;//&#039; | cut -f1`&lt;br /&gt;
cat prot_alt.txt | grep $TRANS | awk &#039;{print $11,$12}&#039; | sed &#039;s/ \+/\t/&#039; | sed &#039;s/\//\t/&#039; | awk &#039;{OFS=&amp;quot;,&amp;quot;; print $1,$2,$3}&#039; | sed &#039;s/\t//g&#039; | sed &#039;s/ \+//g&#039; &amp;gt;$TRANS.var;&lt;br /&gt;
cat prot_alt.txt | grep $TRANS | awk -v prot=$PROT &#039;{OFS=&amp;quot;\t&amp;quot;; print $1,$2,$3,$5,$6,$7,$8,prot, $11,$12,$13,$14,$15}&#039; &amp;gt;$PROT.var.info;&lt;br /&gt;
faOneRecord /lustre/nobackup/WUR/ABGC/shared/public_data_store/genomes/pig/Ensembl74/pep/Sus_scrofa.Sscrofa10.2.74.pep.all.fa $PROT &amp;gt;$PROT.fa;&lt;br /&gt;
mv $TRANS.var $PROT.var;&lt;br /&gt;
provean.sh -q $PROT.fa -v $PROT.var --save_supporting_set $PROT.sss &amp;gt;$PROT.result.txt 2&amp;gt;$PROT.error;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Alternative: submission per transcript - no cleaning ==&lt;br /&gt;
Individual transcripts can also be submitted using the following script:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --time=4800&lt;br /&gt;
#SBATCH --ntasks=1&lt;br /&gt;
#SBATCH --mem-per-cpu=16000&lt;br /&gt;
#SBATCH --nice=1000&lt;br /&gt;
#SBATCH --output=output_%j.txt&lt;br /&gt;
#SBATCH --error=error_output_%j.txt&lt;br /&gt;
#SBATCH --job-name=Provean&lt;br /&gt;
#SBATCH --partition=ABGC_Research&lt;br /&gt;
#cat outVEP_*.txt | awk &#039;$11~/\//&#039; | sed &#039;s/:/\t/&#039; | sort -k6 &amp;gt;prot_alt.txt&lt;br /&gt;
TRANS=$1&lt;br /&gt;
PROT=`cat /lustre/nobackup/WUR/ABGC/shared/public_data_store/genomes/pig/Ensembl74/pep/Sus_scrofa.Sscrofa10.2.74.pep.all.fa | grep $TRANS | sed &#039;s/ \+/\t/g&#039; | sed &#039;s/^&amp;gt;//&#039; | cut -f1`&lt;br /&gt;
if [ -f $PROT.sss ];&lt;br /&gt;
  then&lt;br /&gt;
  echo &amp;quot;$PROT $TRANS already done.&amp;quot;;&lt;br /&gt;
  else&lt;br /&gt;
  cat prot_alt.txt | grep $TRANS | awk &#039;{print $11,$12}&#039; | sed &#039;s/ \+/\t/&#039; | sed &#039;s/\//\t/&#039; | awk &#039;{OFS=&amp;quot;,&amp;quot;; print $1,$2,$3}&#039; | sed &#039;s/\t//g&#039; | sed &#039;s/ \+//g&#039; &amp;gt;$TRANS.var;&lt;br /&gt;
  cat prot_alt.txt | grep $TRANS | awk -v prot=$PROT &#039;{OFS=&amp;quot;\t&amp;quot;; print $1,$2,$3,$5,$6,$7,$8,prot, $11,$12,$13,$14,$15}&#039; &amp;gt;$PROT.var.info;&lt;br /&gt;
  faOneRecord /lustre/nobackup/WUR/ABGC/shared/public_data_store/genomes/pig/Ensembl74/pep/Sus_scrofa.Sscrofa10.2.74.pep.all.fa $PROT &amp;gt;$PROT.fa;&lt;br /&gt;
  mv $TRANS.var $PROT.var;&lt;br /&gt;
  provean.sh -q $PROT.fa -v $PROT.var --save_supporting_set $PROT.sss &amp;gt;$PROT.result.txt 2&amp;gt;$PROT.error;&lt;br /&gt;
fi;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
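For example, a single transcript could be submitted like this (the script name and Ensembl transcript ID below are placeholders):&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
# Submit one Provean run for a single transcript&lt;br /&gt;
sbatch runProvean_single.sh ENSSSCT00000000001&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;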
== See also ==&lt;br /&gt;
[[Provean_1.1.3 | Provean on Anunna]]&lt;/div&gt;</summary>
		<author><name>Dawes001</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=Maker_protocols_Pmajor&amp;diff=2043</id>
		<title>Maker protocols Pmajor</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=Maker_protocols_Pmajor&amp;diff=2043"/>
		<updated>2019-07-15T15:01:44Z</updated>

		<summary type="html">&lt;p&gt;Dawes001: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page describes the various rounds of [http://www.yandell-lab.org/software/maker.html Maker]-based annotations for the [http://en.wikipedia.org/wiki/Parus_major &#039;&#039;Parus major&#039;&#039; (Great Tit)] genome.&lt;br /&gt;
&lt;br /&gt;
== Round 1 == &lt;br /&gt;
=== Rationale ===&lt;br /&gt;
For this round no P. major-based ESTs were available. Zebrafinch (T. guttata) is the closest relative for which a reasonably complete gene-model set is available. As a first pass, it was decided to let gene predictions be driven by ab-initio predictions rather than by Zebrafinch ESTs.&lt;br /&gt;
&lt;br /&gt;
=== Invoking maker script ===&lt;br /&gt;
Do not forget to load the &amp;lt;code&amp;gt;maker&amp;lt;/code&amp;gt; module:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
module load maker/2.28 &lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Script submitted via SLURM (&amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt; command):&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --time=48000&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks=16&lt;br /&gt;
#SBATCH --output=output_%j.txt&lt;br /&gt;
#SBATCH --error=error_output_%j.txt&lt;br /&gt;
#SBATCH --job-name=test_maker&lt;br /&gt;
#SBATCH --mail-type=ALL&lt;br /&gt;
#SBATCH --mail-user=hendrik-jan.megens@wur.nl&lt;br /&gt;
maker&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Maker settings ===&lt;br /&gt;
==== content of &amp;lt;code&amp;gt;maker_opts.ctl&amp;lt;/code&amp;gt; ====&lt;br /&gt;
 #-----Genome (these are always required)&lt;br /&gt;
  genome=Pam.fa #genome sequence (fasta file or fasta embeded in GFF3 file)&lt;br /&gt;
  organism_type=eukaryotic #eukaryotic or prokaryotic. Default is eukaryotic&lt;br /&gt;
  &lt;br /&gt;
  #-----Re-annotation Using MAKER Derived GFF3&lt;br /&gt;
  maker_gff= #MAKER derived GFF3 file&lt;br /&gt;
  est_pass=0 #use ESTs in maker_gff: 1 = yes, 0 = no&lt;br /&gt;
  altest_pass=0 #use alternate organism ESTs in maker_gff: 1 = yes, 0 = no&lt;br /&gt;
  protein_pass=0 #use protein alignments in maker_gff: 1 = yes, 0 = no&lt;br /&gt;
  rm_pass=0 #use repeats in maker_gff: 1 = yes, 0 = no&lt;br /&gt;
  model_pass=0 #use gene models in maker_gff: 1 = yes, 0 = no&lt;br /&gt;
  pred_pass=0 #use ab-initio predictions in maker_gff: 1 = yes, 0 = no&lt;br /&gt;
  other_pass=0 #passthrough anyything else in maker_gff: 1 = yes, 0 = no&lt;br /&gt;
  &lt;br /&gt;
  #-----EST Evidence (for best results provide a file for at least one)&lt;br /&gt;
  est= #set of ESTs or assembled mRNA-seq in fasta format&lt;br /&gt;
  altest= Taeniopygia_guttata.taeGut3.2.4.74.cdna.all.fa #EST/cDNA sequence file in fasta format from an alternate organism&lt;br /&gt;
  est_gff= #aligned ESTs or mRNA-seq from an external GFF3 file&lt;br /&gt;
  altest_gff= #aligned ESTs from a closly relate species in GFF3 format&lt;br /&gt;
  &lt;br /&gt;
  #-----Protein Homology Evidence (for best results provide a file for at least one)&lt;br /&gt;
  protein= Taeniopygia_guttata.taeGut3.2.4.74.pep.all.fa #protein sequence file in fasta format (i.e. from mutiple oransisms)&lt;br /&gt;
  protein_gff=  #aligned protein homology evidence from an external GFF3 file&lt;br /&gt;
  &lt;br /&gt;
  #-----Repeat Masking (leave values blank to skip repeat masking)&lt;br /&gt;
  model_org=Metazoa #select a model organism for RepBase masking in RepeatMasker&lt;br /&gt;
  rmlib= #provide an organism specific repeat library in fasta format for RepeatMasker&lt;br /&gt;
  repeat_protein= #provide a fasta file of transposable element proteins for RepeatRunner&lt;br /&gt;
  rm_gff= #pre-identified repeat elements from an external GFF3 file&lt;br /&gt;
  prok_rm=0 #forces MAKER to repeatmask prokaryotes (no reason to change this), 1 = yes, 0 = no&lt;br /&gt;
  softmask=1 #use soft-masking rather than hard-masking in BLAST (i.e. seg and dust filtering)&lt;br /&gt;
  &lt;br /&gt;
  #-----Gene Prediction&lt;br /&gt;
  snaphmm= /cm/shared/apps/WUR/ABGC/snap/snap-2013-11-29/HMM/mam54.hmm #SNAP HMM file&lt;br /&gt;
  gmhmm= #GeneMark HMM file&lt;br /&gt;
  augustus_species= chicken #Augustus gene prediction species model&lt;br /&gt;
  fgenesh_par_file= #FGENESH parameter file&lt;br /&gt;
  pred_gff= #ab-initio predictions from an external GFF3 file&lt;br /&gt;
  model_gff= #annotated gene models from an external GFF3 file (annotation pass-through)&lt;br /&gt;
  est2genome=0 #infer gene predictions directly from ESTs, 1 = yes, 0 = no&lt;br /&gt;
  protein2genome=0 #infer predictions from protein homology, 1 = yes, 0 = no&lt;br /&gt;
  unmask=0 #also run ab-initio prediction programs on unmasked sequence, 1 = yes, 0 = no&lt;br /&gt;
  &lt;br /&gt;
  #-----Other Annotation Feature Types (features MAKER doesn&#039;t recognize)&lt;br /&gt;
  other_gff= #extra features to pass-through to final MAKER generated GFF3 file&lt;br /&gt;
  &lt;br /&gt;
  #-----External Application Behavior Options&lt;br /&gt;
  alt_peptide=C #amino acid used to replace non-standard amino acids in BLAST databases&lt;br /&gt;
  cpus=16 #max number of cpus to use in BLAST and RepeatMasker (not for MPI, leave 1 when using MPI)&lt;br /&gt;
  &lt;br /&gt;
  #-----MAKER Behavior Options&lt;br /&gt;
  max_dna_len=100000 #length for dividing up contigs into chunks (increases/decreases memory usage)&lt;br /&gt;
  min_contig=1 #skip genome contigs below this length (under 10kb are often useless)&lt;br /&gt;
  &lt;br /&gt;
  pred_flank=200 #flank for extending evidence clusters sent to gene predictors&lt;br /&gt;
  pred_stats=0 #report AED and QI statistics for all predictions as well as models&lt;br /&gt;
  AED_threshold=1 #Maximum Annotation Edit Distance allowed (bound by 0 and 1)&lt;br /&gt;
  min_protein=0 #require at least this many amino acids in predicted proteins&lt;br /&gt;
  alt_splice=0 #Take extra steps to try and find alternative splicing, 1 = yes, 0 = no&lt;br /&gt;
  always_complete=0 #extra steps to force start and stop codons, 1 = yes, 0 = no&lt;br /&gt;
  map_forward=0 #map names and attributes forward from old GFF3 genes, 1 = yes, 0 = no&lt;br /&gt;
  keep_preds=0 #Concordance threshold to add unsupported gene prediction (bound by 0 and 1)&lt;br /&gt;
  &lt;br /&gt;
  split_hit=10000 #length for the splitting of hits (expected max intron size for evidence alignments)&lt;br /&gt;
  single_exon=0 #consider single exon EST evidence when generating annotations, 1 = yes, 0 = no&lt;br /&gt;
  single_length=250 #min length required for single exon ESTs if &#039;single_exon is enabled&#039;&lt;br /&gt;
  correct_est_fusion=0 #limits use of ESTs in annotation to avoid fusion genes&lt;br /&gt;
  &lt;br /&gt;
  tries=2 #number of times to try a contig if there is a failure for some reason&lt;br /&gt;
  clean_try=1 #remove all data from previous run before retrying, 1 = yes, 0 = no&lt;br /&gt;
  clean_up=1 #removes theVoid directory with individual analysis files, 1 = yes, 0 = no&lt;br /&gt;
  TMP= #specify a directory other than the system default temporary directory for temporary files&lt;br /&gt;
&lt;br /&gt;
==== content of &amp;lt;code&amp;gt;maker_exe.ctl&amp;lt;/code&amp;gt; ====&lt;br /&gt;
  #-----Location of Executables Used by MAKER/EVALUATOR&lt;br /&gt;
  makeblastdb=/cm/shared/apps/WUR/ABGC/blast/ncbi-blast-2.2.28+/bin/makeblastdb #location of NCBI+ makeblastdb executable&lt;br /&gt;
  blastn=/cm/shared/apps/WUR/ABGC/blast/ncbi-blast-2.2.28+/bin/blastn #location of NCBI+ blastn executable&lt;br /&gt;
  blastx=/cm/shared/apps/WUR/ABGC/blast/ncbi-blast-2.2.28+/bin/blastx #location of NCBI+ blastx executable&lt;br /&gt;
  tblastx=/cm/shared/apps/WUR/ABGC/blast/ncbi-blast-2.2.28+/bin/tblastx #location of NCBI+ tblastx executable&lt;br /&gt;
  formatdb= #location of NCBI formatdb executable&lt;br /&gt;
  blastall= #location of NCBI blastall executable&lt;br /&gt;
  xdformat= #location of WUBLAST xdformat executable&lt;br /&gt;
  blasta= #location of WUBLAST blasta executable&lt;br /&gt;
  RepeatMasker=/cm/shared/apps/WUR/ABGC/RepeatMasker/RepeatMasker-4-0-3/RepeatMasker #location of RepeatMasker executable&lt;br /&gt;
  exonerate=/cm/shared/apps/WUR/ABGC/exonerate/exonerate-2.2.0-x86_64/bin/exonerate #location of exonerate executable&lt;br /&gt;
  &lt;br /&gt;
  #-----Ab-initio Gene Prediction Algorithms&lt;br /&gt;
  snap=/cm/shared/apps/WUR/ABGC/snap/snap-2013-11-29/snap #location of snap executable&lt;br /&gt;
  gmhmme3= #location of eukaryotic genemark executable&lt;br /&gt;
  gmhmmp= #location of prokaryotic genemark executable&lt;br /&gt;
  augustus=/cm/shared/apps/WUR/ABGC/augustus/augustus.2.7/src/augustus #location of augustus executable&lt;br /&gt;
  fgenesh= #location of fgenesh executable&lt;br /&gt;
  &lt;br /&gt;
  #-----Other Algorithms&lt;br /&gt;
  probuild= #location of probuild executable (required for genemark)&lt;br /&gt;
&lt;br /&gt;
==== contents of &amp;lt;code&amp;gt;maker_bopts.ctl&amp;lt;/code&amp;gt;==== &lt;br /&gt;
  #-----BLAST and Exonerate Statistics Thresholds&lt;br /&gt;
  blast_type=ncbi+ #set to &#039;ncbi+&#039;, &#039;ncbi&#039; or &#039;wublast&#039;&lt;br /&gt;
  &lt;br /&gt;
  pcov_blastn=0.8 #Blastn Percent Coverage Threhold EST-Genome Alignments&lt;br /&gt;
  pid_blastn=0.85 #Blastn Percent Identity Threshold EST-Genome Aligments&lt;br /&gt;
  eval_blastn=1e-10 #Blastn eval cutoff&lt;br /&gt;
  bit_blastn=40 #Blastn bit cutoff&lt;br /&gt;
  depth_blastn=0 #Blastn depth cutoff (0 to disable cutoff)&lt;br /&gt;
  &lt;br /&gt;
  pcov_blastx=0.5 #Blastx Percent Coverage Threhold Protein-Genome Alignments&lt;br /&gt;
  pid_blastx=0.4 #Blastx Percent Identity Threshold Protein-Genome Aligments&lt;br /&gt;
  eval_blastx=1e-06 #Blastx eval cutoff&lt;br /&gt;
  bit_blastx=30 #Blastx bit cutoff&lt;br /&gt;
  depth_blastx=0 #Blastx depth cutoff (0 to disable cutoff)&lt;br /&gt;
  &lt;br /&gt;
  pcov_tblastx=0.8 #tBlastx Percent Coverage Threhold alt-EST-Genome Alignments&lt;br /&gt;
  pid_tblastx=0.85 #tBlastx Percent Identity Threshold alt-EST-Genome Aligments&lt;br /&gt;
  eval_tblastx=1e-10 #tBlastx eval cutoff&lt;br /&gt;
  bit_tblastx=40 #tBlastx bit cutoff&lt;br /&gt;
  depth_tblastx=0 #tBlastx depth cutoff (0 to disable cutoff)&lt;br /&gt;
  &lt;br /&gt;
  pcov_rm_blastx=0.5 #Blastx Percent Coverage Threhold For Transposable Element Masking&lt;br /&gt;
  pid_rm_blastx=0.4 #Blastx Percent Identity Threshold For Transposbale Element Masking&lt;br /&gt;
  eval_rm_blastx=1e-06 #Blastx eval cutoff for transposable element masking&lt;br /&gt;
  bit_rm_blastx=30 #Blastx bit cutoff for transposable element masking&lt;br /&gt;
  &lt;br /&gt;
  ep_score_limit=20 #Exonerate protein percent of maximal score threshold&lt;br /&gt;
  en_score_limit=20 #Exonerate nucleotide percent of maximal score threshold&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
[[Maker_2.2.8 | Maker pipeline as installed on Anunna]]&lt;br /&gt;
&lt;br /&gt;
== External links ==&lt;br /&gt;
* [http://www.yandell-lab.org/software/maker.html Maker homepage]&lt;br /&gt;
* [http://gmod.org/wiki/MAKER_Tutorial_2013 Maker tutorial]&lt;/div&gt;</summary>
		<author><name>Dawes001</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=Migration_from_ESG_HPC&amp;diff=2042</id>
		<title>Migration from ESG HPC</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=Migration_from_ESG_HPC&amp;diff=2042"/>
		<updated>2019-07-15T15:01:07Z</updated>

		<summary type="html">&lt;p&gt;Dawes001: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Folders ==&lt;br /&gt;
home folder (200GB limit):&lt;br /&gt;
  /home/WUR/&amp;lt;user&amp;gt;/&lt;br /&gt;
lustre backup folder:&lt;br /&gt;
  /lustre/backup/WUR/ESG/&amp;lt;user&amp;gt;/&lt;br /&gt;
lustre no-backup folder:&lt;br /&gt;
  /lustre/nobackup/WUR/ESG/&amp;lt;user&amp;gt;/&lt;br /&gt;
/DATA folder:&lt;br /&gt;
  /lustre/backup/WUR/ESG/data/&lt;br /&gt;
&lt;br /&gt;
== Example of a job script ==&lt;br /&gt;
  #!/bin/bash&lt;br /&gt;
  #SBATCH --comment=99999999 &lt;br /&gt;
  #SBATCH --time=1200&lt;br /&gt;
  #SBATCH --mem=2048&lt;br /&gt;
  #SBATCH --ntasks=1&lt;br /&gt;
  #SBATCH --output=output_%j.txt  &lt;br /&gt;
  #SBATCH --error=error_output_%j.txt&lt;br /&gt;
  #SBATCH --job-name=&amp;quot;test slurm&amp;quot; &lt;br /&gt;
  #SBATCH --nodes=5&lt;br /&gt;
  #SBATCH --mail-type=ALL&lt;br /&gt;
  #SBATCH --mail-user=wietse.franssen@wur.nl&lt;br /&gt;
  &lt;br /&gt;
  ./executable&lt;/div&gt;</summary>
		<author><name>Dawes001</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=Assemble_mitochondrial_genomes_from_short_read_data&amp;diff=2041</id>
		<title>Assemble mitochondrial genomes from short read data</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=Assemble_mitochondrial_genomes_from_short_read_data&amp;diff=2041"/>
		<updated>2019-07-15T15:00:46Z</updated>

		<summary type="html">&lt;p&gt;Dawes001: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;A simple procedure for assembling mitochondrial genomes from whole-genome re-sequencing data. The first step is to extract reads from the sequence library based on a closely related, fully assembled genome (e.g., for pig, the MT genome as present in the genome build, but it could also be that of a related species). The mitochondrial genome is then assembled using SOAPdenovo. The procedure requires:&lt;br /&gt;
&lt;br /&gt;
* a reference genome of a closely related population or species.&lt;br /&gt;
* a bowtie2 index (made with bowtie2-build)&lt;br /&gt;
* a blastable db of the reference mitochondrial genome&lt;br /&gt;
* a SOAPdenovo configuration file:&lt;br /&gt;
&lt;br /&gt;
soapdenovo.config&lt;br /&gt;
  [LIB]&lt;br /&gt;
  avg_ins=450&lt;br /&gt;
  reverse_seq=0&lt;br /&gt;
  asm_flags=1&lt;br /&gt;
  rank=3&lt;br /&gt;
  q1=fq1.fq&lt;br /&gt;
  q2=fq2.fq&lt;br /&gt;
Note that the avg_ins value may vary between libraries; this may affect assembly efficiency.&lt;br /&gt;
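&lt;br /&gt;
If the insert size of a library is unknown, it can be estimated from an existing alignment, for instance with Picard (a sketch; the jar path and file names are illustrative):&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
# Estimate the insert-size distribution from an existing BAM file&lt;br /&gt;
java7 -jar /cm/shared/apps/SHARED/picard-tools/picard-tools-1.109/CollectInsertSizeMetrics.jar \&lt;br /&gt;
 I=sample_aln.bam O=insert_metrics.txt H=insert_hist.pdf&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;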
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --time=1000&lt;br /&gt;
#SBATCH --mem=16000&lt;br /&gt;
#SBATCH --ntasks=8&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --constraint=4gpercpu&lt;br /&gt;
#SBATCH --output=output_%j.txt&lt;br /&gt;
#SBATCH --error=error_output_%j.txt&lt;br /&gt;
#SBATCH --job-name=assemble_mito&lt;br /&gt;
#SBATCH --mail-type=ALL&lt;br /&gt;
#SBATCH --mail-user=hendrik-jan.megens@wur.nl&lt;br /&gt;
module load bowtie/2-2.2.1 SOAPdenovo2/r240 BLAST+/2.2.28 MUMmer/3.23 &lt;br /&gt;
&lt;br /&gt;
bowtie2 --phred$2 --local -p 8 -x mt_pig.fa -1 $3 -2 $4 | head -2 &amp;gt;$1_mito_align.sam&lt;br /&gt;
bowtie2 --phred$2 --local -p 8 -x mt_pig.fa -1 $3 -2 $4 | awk &#039;$5&amp;gt;0&#039; | head -10000 &amp;gt;&amp;gt;$1_mito_align.sam&lt;br /&gt;
&lt;br /&gt;
java7 -jar /cm/shared/apps/SHARED/picard-tools/picard-tools-1.109/SamToFastq.jar I=$1_mito_align.sam F=fq1.fq F2=fq2.fq INCLUDE_NON_PF_READS=True&lt;br /&gt;
&lt;br /&gt;
SOAPdenovo-63mer all -K 63 -p 4 -s soapdenovo.config -o $1_mito_assembly.fa&lt;br /&gt;
&lt;br /&gt;
blastn -query $1_mito_assembly.fa.scafSeq -db mt_pig.fa -outfmt 6&lt;br /&gt;
&lt;br /&gt;
mummer -mum -b -c mt_pig.fa $1_mito_assembly.fa.scafSeq &amp;gt; mummer.mums&lt;br /&gt;
mummerplot -postscript -p mummer mummer.mums&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Invoke like this:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
sbatch do_mtalign_bowtie_pig.sh MA01F18 33\&lt;br /&gt;
 /lustre/nobackup/WUR/ABGC/shared/Pig/ABGSA/ABGSA0071/ABGSA0071_MA01F18_R1.PF.fastq.gz\&lt;br /&gt;
 /lustre/nobackup/WUR/ABGC/shared/Pig/ABGSA/ABGSA0071/ABGSA0071_MA01F18_R2.PF.fastq.gz&lt;/div&gt;</summary>
		<author><name>Dawes001</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=Calculate_corrected_theta_from_resequencing_data&amp;diff=2040</id>
		<title>Calculate corrected theta from resequencing data</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=Calculate_corrected_theta_from_resequencing_data&amp;diff=2040"/>
		<updated>2019-07-15T15:00:06Z</updated>

		<summary type="html">&lt;p&gt;Dawes001: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This procedure will estimate theta (nucleotide diversity) based on re-sequencing data. The method is described in [http://www.biomedcentral.com/1471-2164/14/148 Esteve-Codina et al.]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang =&#039;bash&#039;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --time=10000&lt;br /&gt;
#SBATCH --mem=4000&lt;br /&gt;
#SBATCH --ntasks=1&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --constraint=4gpercpu&lt;br /&gt;
#SBATCH --output=output_%j.txt&lt;br /&gt;
#SBATCH --error=error_output_%j.txt&lt;br /&gt;
#SBATCH --job-name=ngstheta&lt;br /&gt;
module load samtools/0.1.19&lt;br /&gt;
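# Estimate the modal read depth from the first 1M variant records (depth in column 8);&lt;br /&gt;
# the filters below derive from it: MAX = 2x the mode, MIN = mode/3 (set to 4 when below 5)&lt;br /&gt;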
VAR=`gunzip -c /lustre/nobackup/WUR/ABGC/shared/Pig/vars_hjm_newbuild10_2/vars-flt_$1-final.txt.gz  | cut -f8 | head -1000000 | sort | uniq -c | sed &#039;s/^ \+//&#039; | sed &#039;s/ \+/\t/&#039; | sort -k1 -nr | head -1 | cut -f2`&lt;br /&gt;
let MAX=2*VAR&lt;br /&gt;
echo &amp;quot;$1 max_depth is $MAX&amp;quot;&lt;br /&gt;
MIN=$(( $VAR / 3 ))&lt;br /&gt;
if [ $MIN -lt 5 ]; then MIN=4; fi&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;$1 min_depth is $MIN&amp;quot;&lt;br /&gt;
samtools mpileup -uf /lustre/nobackup/WUR/ABGC/shared/Pig/Sscrofa_build10_2/FASTA/Ssc10_2_chromosomes.fa /lustre/nobackup/WUR/ABGC/shared/Pig/BAM_files_hjm_newbuild10_2/$1_rh.bam | bcftools view -bvcg - &amp;gt; $1.mig.bcf&lt;br /&gt;
bcftools view $1.mig.bcf | vcfutils.pl varFilter -d$MIN -D$MAX &amp;gt; $1.mig.vcf&lt;br /&gt;
awk &#039;$6 &amp;gt;= 20&#039; $1.mig.vcf &amp;gt; $1.miguel.vcf&lt;br /&gt;
samtools mpileup -Bq 20 -d 50000 /lustre/nobackup/WUR/ABGC/shared/Pig/BAM_files_hjm_newbuild10_2/$1_rh.bam | perl covXwin-v3.1.pl -v $1.miguel.vcf -w 50000 -d $MIN -m $MAX -b /lustre/nobackup/WUR/ABGC/shared/Pig/BAM_files_hjm_newbuild10_2/$1_rh.bam | ./ngs_theta -d $MIN -m $MAX &amp;gt; $1.wintheta&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The script can be submitted with &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt; using the following loop, assuming that the names of the individuals are listed in a file called &amp;lt;code&amp;gt;individuals.txt&amp;lt;/code&amp;gt;.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
INDS=`cat individuals.txt`&lt;br /&gt;
for IND in $INDS; do sbatch nucdiv_pipeline.sh $IND; done&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Average values for theta were then extracted with the following R script:&lt;br /&gt;
&amp;lt;source lang = &#039;rsplus&#039;&amp;gt;&lt;br /&gt;
files=list.files(pattern=&amp;quot;wintheta&amp;quot;)&lt;br /&gt;
a &amp;lt;- data.frame(&amp;quot;file&amp;quot; = character(), &amp;quot;theta_het&amp;quot; = numeric())&lt;br /&gt;
for (file1 in files){&lt;br /&gt;
   x &amp;lt;- read.table(file1,header=T); &lt;br /&gt;
   mn=mean(x$THETA_HET[x$BP&amp;gt;20000 &amp;amp; x$CHR != &#039;chrUN_nr&#039; &amp;amp; x$CHR != &#039;Ssc10_2_X&#039;]); &lt;br /&gt;
   print(paste(file1,mn,sep=&amp;quot;  &amp;quot;));&lt;br /&gt;
   a&amp;lt;- rbind(a,data.frame(&amp;quot;file&amp;quot;=file1,&amp;quot;theta_het&amp;quot;=mn))&lt;br /&gt;
}&lt;br /&gt;
write.table(x=a,file=&amp;quot;theta_het_results.txt&amp;quot;)&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;/div&gt;</summary>
		<author><name>Dawes001</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=Calculate_corrected_theta_from_resequencing_data&amp;diff=2039</id>
		<title>Calculate corrected theta from resequencing data</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=Calculate_corrected_theta_from_resequencing_data&amp;diff=2039"/>
		<updated>2019-07-15T14:59:53Z</updated>

		<summary type="html">&lt;p&gt;Dawes001: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This procedure will estimate theta (nucleotide diversity) based on re-sequencing data. The method is described in [http://www.biomedcentral.com/1471-2164/14/148 Esteve-Codina et al.]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang =&#039;bash&#039;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --time=10000&lt;br /&gt;
#SBATCH --mem=4000&lt;br /&gt;
#SBATCH --ntasks=1&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --constraint=normalmem&lt;br /&gt;
#SBATCH --output=output_%j.txt&lt;br /&gt;
#SBATCH --error=error_output_%j.txt&lt;br /&gt;
#SBATCH --job-name=ngstheta&lt;br /&gt;
module load samtools/0.1.19&lt;br /&gt;
VAR=`gunzip -c /lustre/nobackup/WUR/ABGC/shared/Pig/vars_hjm_newbuild10_2/vars-flt_$1-final.txt.gz  | cut -f8 | head -1000000 | sort | uniq -c | sed &#039;s/^ \+//&#039; | sed &#039;s/ \+/\t/&#039; | sort -k1 -nr | head -1 | cut -f2`&lt;br /&gt;
let MAX=2*VAR&lt;br /&gt;
echo &amp;quot;$1 max_depth is $MAX&amp;quot;&lt;br /&gt;
MIN=$(( $VAR / 3 ))&lt;br /&gt;
if [ $MIN -lt 5 ]; then MIN=4; fi&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;$1 min_depth is $MIN&amp;quot;&lt;br /&gt;
samtools mpileup -uf /lustre/nobackup/WUR/ABGC/shared/Pig/Sscrofa_build10_2/FASTA/Ssc10_2_chromosomes.fa /lustre/nobackup/WUR/ABGC/shared/Pig/BAM_files_hjm_newbuild10_2/$1_rh.bam | bcftools view -bvcg - &amp;gt; $1.mig.bcf&lt;br /&gt;
bcftools view $1.mig.bcf | vcfutils.pl varFilter -d$MIN -D$MAX &amp;gt; $1.mig.vcf&lt;br /&gt;
awk &#039;$6 &amp;gt;= 20&#039; $1.mig.vcf &amp;gt; $1.miguel.vcf&lt;br /&gt;
samtools mpileup -Bq 20 -d 50000 /lustre/nobackup/WUR/ABGC/shared/Pig/BAM_files_hjm_newbuild10_2/$1_rh.bam | perl covXwin-v3.1.pl -v $1.miguel.vcf -w 50000 -d $MIN -m $MAX -b /lustre/nobackup/WUR/ABGC/shared/Pig/BAM_files_hjm_newbuild10_2/$1_rh.bam | ./ngs_theta -d $MIN -m $MAX &amp;gt; $1.wintheta&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The script can be submitted with &amp;lt;code&amp;gt;sbatch&amp;lt;/code&amp;gt; using the following loop, assuming that the names of the individuals are listed in a file called &amp;lt;code&amp;gt;individuals.txt&amp;lt;/code&amp;gt;.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
INDS=`cat individuals.txt`&lt;br /&gt;
for IND in $INDS; do sbatch nucdiv_pipeline.sh $IND; done&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Average values for theta were then extracted with the following R script:&lt;br /&gt;
&amp;lt;source lang = &#039;rsplus&#039;&amp;gt;&lt;br /&gt;
files=list.files(pattern=&amp;quot;wintheta&amp;quot;)&lt;br /&gt;
a &amp;lt;- data.frame(&amp;quot;file&amp;quot; = character(), &amp;quot;theta_het&amp;quot; = numeric())&lt;br /&gt;
for (file1 in files){&lt;br /&gt;
   x &amp;lt;- read.table(file1,header=T); &lt;br /&gt;
   mn=mean(x$THETA_HET[x$BP&amp;gt;20000 &amp;amp; x$CHR != &#039;chrUN_nr&#039; &amp;amp; x$CHR != &#039;Ssc10_2_X&#039;]); &lt;br /&gt;
   print(paste(file1,mn,sep=&amp;quot;  &amp;quot;));&lt;br /&gt;
   a&amp;lt;- rbind(a,data.frame(&amp;quot;file&amp;quot;=file1,&amp;quot;theta_het&amp;quot;=mn))&lt;br /&gt;
}&lt;br /&gt;
write.table(x=a,file=&amp;quot;theta_het_results.txt&amp;quot;)&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;/div&gt;</summary>
		<author><name>Dawes001</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=RNA-seq_analysis&amp;diff=2038</id>
		<title>RNA-seq analysis</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=RNA-seq_analysis&amp;diff=2038"/>
		<updated>2019-07-15T14:59:37Z</updated>

		<summary type="html">&lt;p&gt;Dawes001: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
=== Typical commands used for analyzing RNA-seq with Tophat (including Bowtie2 as aligner) ===&lt;br /&gt;
&lt;br /&gt;
* Examples are RNA-seq (stranded) from pig aligned against the pig reference genome (&#039;&#039;S. scrofa&#039;&#039; - 10.2)&lt;br /&gt;
&lt;br /&gt;
* Tophat, Bowtie2, Picard and GATK need to be in PATH or loaded as modules (e.g.: module load SHARED/bowtie/2-2.2.1; module load SHARED/tophat/2.0.11)&lt;br /&gt;
* Bowtie2 index of the reference genome, made with bowtie2-build (needs to be made only once; see the sketch after this list)&lt;br /&gt;
* PCR duplicates removed with Picard&lt;br /&gt;
* For allelic expression and RNA-editing analysis, re-alignment with GATK&lt;br /&gt;
&lt;br /&gt;
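The Bowtie2 index mentioned in the list can be built like this (a sketch; the FASTA file and index basename are illustrative):&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
# Build the Bowtie2 index once; alignments then reference the ssc10_2_ens basename&lt;br /&gt;
bowtie2-build ssc10_2_ens.fa ssc10_2_ens&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;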
===For expression analysis===&lt;br /&gt;
* Allowing multiple hits (up to 20)&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --time=2-12:00:00&lt;br /&gt;
#SBATCH -n 1&lt;br /&gt;
#SBATCH -c 16&lt;br /&gt;
#SBATCH --output=output_%j.txt&lt;br /&gt;
#SBATCH --error=error_output_%j.txt&lt;br /&gt;
#SBATCH --job-name=AG_tophat&lt;br /&gt;
#SBATCH --mem=20000&lt;br /&gt;
&lt;br /&gt;
#Setting java tmp for slurm&lt;br /&gt;
export _JAVA_OPTIONS=-Djava.io.tmpdir=/lustre/scratch/WUR/ABGC/madse001/tmp&lt;br /&gt;
&lt;br /&gt;
#Brain_fontalloop&lt;br /&gt;
#Tophat alignment&lt;br /&gt;
tophat -p 16 -G ../ssc10_2_ens/Sscrofa10.2.68_ens_Tophat.gtf -M --rg-id agLW_BRfl --rg-sample ag_LW_BRfl --rg-platform Illumina --keep-fasta-order --read-realign-edit-dist 0 -r 120 --library-type fr-firststrand  \&lt;br /&gt;
-o tophat2_agLW_BRfl_g20_peO_reAln_ens_RG_M --mate-std-dev 250 ../ssc10_2_ens/ssc10_2_ens agLW_BRfl_read1.trimmed.final.fastq.gz agLW_BRfl_read2.trimmed.final.fastq.gz&lt;br /&gt;
&lt;br /&gt;
#Rename Tophat alignment file&lt;br /&gt;
mv tophat2_agLW_BRfl_g20_peO_reAln_ens_RG_M/accepted_hits.bam tophat2_agLW_BRfl_g20_peO_reAln_ens_RG_M/agLW_BRfl_g20_peO_reAln_ens_RG_M.bam&lt;br /&gt;
&lt;br /&gt;
#Remove PCR duplicates with Picard MarkDuplicates&lt;br /&gt;
java -Xms16g -jar ~/bin/MarkDuplicates.jar I=tophat2_agLW_BRfl_g20_peO_reAln_ens_RG_M/agLW_BRfl_g20_peO_reAln_ens_RG_M.bam O=tophat2_agLW_BRfl_g20_peO_reAln_ens_RG_M/agLW_BRfl_g20_peO_reAln_ens_RG_M_RDp.bam \&lt;br /&gt;
M=tophat2_agLW_BRfl_g20_peO_reAln_ens_RG_M/agLW_BRfl_g20_peO_reAln_ens_RG_M.bam.out REMOVE_DUPLICATES=true&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
===For allelic expression and rna-editing analysis===&lt;br /&gt;
* Unique alignment only&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --time=2-12:00:00&lt;br /&gt;
#SBATCH -n 1&lt;br /&gt;
#SBATCH -c 16&lt;br /&gt;
#SBATCH --output=output_%j.txt&lt;br /&gt;
#SBATCH --error=error_output_%j.txt&lt;br /&gt;
#SBATCH --job-name=AG_tophat&lt;br /&gt;
#SBATCH --mem=20000&lt;br /&gt;
&lt;br /&gt;
#Setting java tmp for slurm&lt;br /&gt;
export _JAVA_OPTIONS=-Djava.io.tmpdir=/lustre/scratch/WUR/ABGC/madse001/tmp&lt;br /&gt;
&lt;br /&gt;
#Brain_fontalloop&lt;br /&gt;
#Tophat alignment&lt;br /&gt;
tophat -p 16 -g 1 -G ../ssc10_2_ens/Sscrofa10.2.68_ens_Tophat.gtf -M --rg-id agLW_BRfl --rg-sample ag_LW_BRfl --rg-platform Illumina --keep-fasta-order --read-realign-edit-dist 0 -r 120 --library-type fr-firststrand  \&lt;br /&gt;
-o tophat2_agLW_BRfl_g1_peO_reAln_ens_RG_M --mate-std-dev 250 ../ssc10_2_ens/ssc10_2_ens agLW_BRfl_read1.trimmed.final.fastq.gz agLW_BRfl_read2.trimmed.final.fastq.gz&lt;br /&gt;
&lt;br /&gt;
#Rename Tophat alignment file&lt;br /&gt;
mv tophat2_agLW_BRfl_g1_peO_reAln_ens_RG_M/accepted_hits.bam tophat2_agLW_BRfl_g1_peO_reAln_ens_RG_M/agLW_BRfl_g1_peO_reAln_ens_RG_M.bam&lt;br /&gt;
&lt;br /&gt;
#Remove PCR duplicates with Picard&lt;br /&gt;
java -Xms16g -jar ~/bin/MarkDuplicates.jar I=tophat2_agLW_BRfl_g1_peO_reAln_ens_RG_M/agLW_BRfl_g1_peO_reAln_ens_RG_M.bam O=tophat2_agLW_BRfl_g1_peO_reAln_ens_RG_M/agLW_BRfl_g1_peO_reAln_ens_RG_M_RDp.bam \&lt;br /&gt;
M=tophat2_agLW_BRfl_g1_peO_reAln_ens_RG_M/agLW_BRfl_g1_peO_reAln_ens_RG_M.bam.out REMOVE_DUPLICATES=true&lt;br /&gt;
&lt;br /&gt;
#Index BAM file for GATK analysis&lt;br /&gt;
samtools index tophat2_agLW_BRfl_g1_peO_reAln_ens_RG_M/agLW_BRfl_g1_peO_reAln_ens_RG_M_RDp.bam tophat2_agLW_BRfl_g1_peO_reAln_ens_RG_M/agLW_BRfl_g1_peO_reAln_ens_RG_M_RDp.bai&lt;br /&gt;
&lt;br /&gt;
#Re-align with GATK &lt;br /&gt;
java -Xms16g -jar ~/bin/GenomeAnalysisTK.jar -T RealignerTargetCreator -R ../ssc10_2_ens/ssc10_2_ens.fa -I tophat2_agLW_BRfl_g1_peO_reAln_ens_RG_M/agLW_BRfl_g1_peO_reAln_ens_RG_M_RDp.bam \&lt;br /&gt;
-o tophat2_agLW_BRfl_g1_peO_reAln_ens_RG_M/realigner.intervals&lt;br /&gt;
java -Xms16g -jar ~/bin/GenomeAnalysisTK.jar -T IndelRealigner -R ../ssc10_2_ens/ssc10_2_ens.fa -I tophat2_agLW_BRfl_g1_peO_reAln_ens_RG_M/agLW_BRfl_g1_peO_reAln_ens_RG_M_RDp.bam \&lt;br /&gt;
-targetIntervals tophat2_agLW_BRfl_g1_peO_reAln_ens_RG_M/realigner.intervals -o tophat2_agLW_BRfl_g1_peO_reAln_ens_RG_M/agLW_BRfl_g1_peO_reAln_ens_RG_M_RDp_reG.bam&lt;br /&gt;
&lt;br /&gt;
#Index final BAM file&lt;br /&gt;
samtools index tophat2_agLW_BRfl_g1_peO_reAln_ens_RG_M/agLW_BRfl_g1_peO_reAln_ens_RG_M_RDp_reG.bam tophat2_agLW_BRfl_g1_peO_reAln_ens_RG_M/agLW_BRfl_g1_peO_reAln_ens_RG_M_RDp_reG.bai&lt;br /&gt;
&lt;br /&gt;
#Remove tmp bam (bai) file&lt;br /&gt;
rm tophat2_agLW_BRfl_g1_peO_reAln_ens_RG_M/agLW_BRfl_g1_peO_reAln_ens_RG_M_RDp.bam&lt;br /&gt;
rm tophat2_agLW_BRfl_g1_peO_reAln_ens_RG_M/agLW_BRfl_g1_peO_reAln_ens_RG_M_RDp.bai&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
===Some notes on Tophat options:===&lt;br /&gt;
* -g (number of alignments, default 20)&lt;br /&gt;
* -M (prefilter-multihits against genome)&lt;br /&gt;
* --rg-id; --rg-sample; --keep-fasta-order (needed for many downstream analyses with tools like Picard and GATK)&lt;/div&gt;</summary>
		<author><name>Dawes001</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=Array_jobs&amp;diff=2037</id>
		<title>Array jobs</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=Array_jobs&amp;diff=2037"/>
		<updated>2019-07-15T14:58:59Z</updated>

		<summary type="html">&lt;p&gt;Dawes001: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;SLURM can simplify your efforts if you plan to submit multiple independent jobs in parallel. Rather than invoking sbatch many times, you can submit a single array job.&lt;br /&gt;
&lt;br /&gt;
Take the following example:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --output=output_%A.%a.txt&lt;br /&gt;
#SBATCH --error=error_%A.%a.txt&lt;br /&gt;
#SBATCH --time=10&lt;br /&gt;
#SBATCH --ntasks=1&lt;br /&gt;
#SBATCH --cpus-per-task=1&lt;br /&gt;
#SBATCH --mem-per-cpu=4000&lt;br /&gt;
#SBATCH --array=0-9%4&lt;br /&gt;
&lt;br /&gt;
echo $SLURM_ARRAY_TASK_ID&lt;br /&gt;
              &lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Let&#039;s break this down step by step:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --output=output_%A.%a.txt&lt;br /&gt;
#SBATCH --error=error_%A.%a.txt&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
This makes sure your job outputs to a file called output_&amp;lt;Jobnumber&amp;gt;.&amp;lt;Arrayid&amp;gt;.txt, allowing you to track which array ID returned what.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --array=0-9%4&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
This defines the array job itself: run ten jobs with array IDs 0 to 9, but allow no more than 4 to run at once. The syntax also lets you specify exactly which IDs to use, for example:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --array=3,7-11&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
will only run array tasks with IDs 3, 7, 8, 9, 10 and 11.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
echo $SLURM_ARRAY_TASK_ID&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
This will print to stdout (and get redirected to output_%A.%a.txt) the environment variable set by SLURM that indicates which Array ID this process has.&lt;br /&gt;
&lt;br /&gt;
So, once this job has run, we end up with ten files, all called output_&amp;lt;jobid&amp;gt;.&amp;lt;n&amp;gt;.txt, each containing its number n.&lt;br /&gt;
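&lt;br /&gt;
A common use of the task ID is to select one input file per task, for example by reading the corresponding line of a file list (a minimal sketch; &amp;lt;code&amp;gt;filelist.txt&amp;lt;/code&amp;gt; is a hypothetical list of input paths, one per line):&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
# Task ID N processes line N+1 of filelist.txt&lt;br /&gt;
INPUT=`sed -n &amp;quot;$((SLURM_ARRAY_TASK_ID + 1))p&amp;quot; filelist.txt`&lt;br /&gt;
echo &amp;quot;Task $SLURM_ARRAY_TASK_ID processing $INPUT&amp;quot;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;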
&lt;br /&gt;
== Two dimensional arrays? ==&lt;br /&gt;
Running an array such as the above results in a one-dimensional sequence of jobs; for example, with --array=0-9,&lt;br /&gt;
&lt;br /&gt;
SLURM_ARRAY_TASK_ID=[ 0  1  2  3  4  5  6  7  8  9 ]&lt;br /&gt;
&lt;br /&gt;
for each job. What if you need two variables to change instead of one?&lt;br /&gt;
&lt;br /&gt;
There is a simple arithmetic trick, integer division combined with the modulo operation, that solves this. Take a divisor of 10 and an example number of 93:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
A=$((93 / 10)) # A = 9&lt;br /&gt;
B=$((93 % 10)) # B = 3&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As you can see, this splits the number into two independent components (quotient and remainder), allowing a job array of 0-99 to be mapped onto two variables that traverse a 2D array. Bear in mind that the task ID always starts at 0, so if you need, say, A to be 1-5 and B to be 3-8, then:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --array=0-29   ## 5*6 entries, thus 30, including 0 this is 0-29&lt;br /&gt;
A=$((SLURM_ARRAY_TASK_ID/6+1)) # A = [0-4]+1 = [1-5] (divide by the number of B values)&lt;br /&gt;
B=$((SLURM_ARRAY_TASK_ID%6+3)) # B = [0-5]+3 = [3-8]&lt;br /&gt;
mywork $A $B&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;/div&gt;</summary>
		<author><name>Dawes001</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=Creating_sbatch_script&amp;diff=2036</id>
		<title>Creating sbatch script</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=Creating_sbatch_script&amp;diff=2036"/>
		<updated>2019-07-15T14:58:38Z</updated>

		<summary type="html">&lt;p&gt;Dawes001: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&lt;br /&gt;
== A skeleton Slurm script ==&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#-----------------------------Mail address-----------------------------&lt;br /&gt;
#SBATCH --mail-user=&lt;br /&gt;
#SBATCH --mail-type=ALL&lt;br /&gt;
#-----------------------------Output files-----------------------------&lt;br /&gt;
#SBATCH --output=output_%j.txt&lt;br /&gt;
#SBATCH --error=error_output_%j.txt&lt;br /&gt;
#-----------------------------Other information------------------------&lt;br /&gt;
#SBATCH --comment=&lt;br /&gt;
#SBATCH --qos=&lt;br /&gt;
#-----------------------------Required resources-----------------------&lt;br /&gt;
#SBATCH --time=0-0:0:0&lt;br /&gt;
#SBATCH --ntasks=&lt;br /&gt;
#SBATCH --cpus-per-task=&lt;br /&gt;
#SBATCH --mem-per-cpu=&lt;br /&gt;
&lt;br /&gt;
#-----------------------------Environment, Operations and Job steps----&lt;br /&gt;
#load modules&lt;br /&gt;
&lt;br /&gt;
#export variables&lt;br /&gt;
&lt;br /&gt;
#your job&lt;br /&gt;
&lt;br /&gt;
              &lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Explanation of used SBATCH parameters==&lt;br /&gt;
===partition for resource allocation===&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --partition=ABGC_Std&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Request a specific partition for the resource allocation. It is preferred to use your organization&#039;s partition.&lt;br /&gt;
&lt;br /&gt;
=== Adding accounting information or project number ===&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --comment=773320000&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Charge resources used by this job to the specified account. The comment is an arbitrary string and may be changed after job submission using the &amp;lt;tt&amp;gt;scontrol&amp;lt;/tt&amp;gt; command. For WUR users, a project number or KTP number is advisable.&lt;br /&gt;
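For example, to update the comment of a job that has already been submitted (the job id and comment value are illustrative):&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
# Change the accounting comment of job 123456 after submission&lt;br /&gt;
scontrol update jobid=123456 comment=773320001&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;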
&lt;br /&gt;
===time limit===&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --time=1200&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
A time limit of zero requests that no time limit be imposed. Acceptable time formats include &amp;quot;minutes&amp;quot;, &amp;quot;minutes:seconds&amp;quot;, &amp;quot;hours:minutes:seconds&amp;quot;, &amp;quot;days-hours&amp;quot;, &amp;quot;days-hours:minutes&amp;quot; and &amp;quot;days-hours:minutes:seconds&amp;quot;. So in this example the job will run for a maximum of 1200 minutes.&lt;br /&gt;
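&lt;br /&gt;
For example, the following two forms are equivalent (both request a 60-hour limit):&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --time=3600        # 3600 minutes&lt;br /&gt;
#SBATCH --time=2-12:00:00  # 2 days and 12 hours&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;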
&lt;br /&gt;
===memory limit===&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --mem=2048&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
SLURM imposes a memory limit on each job. By default, it is deliberately small: 100 MB per node. If your job uses more than that, it will fail with an &#039;Exceeded job memory limit&#039; error. To set a larger limit, add to your job submission:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --mem X&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where X is the maximum amount of memory your job will use per node, in MB. The larger your working data set, the larger this needs to be, but the smaller the number the easier it is for the scheduler to find a place to run your job. To determine an appropriate value, start relatively large (job slots on average have about 4000 MB per core, but that’s much larger than needed for most jobs) and then use sacct to look at how much your job is actually using or used:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
$ sacct -o MaxRSS -j JOBID&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
where JOBID is the one you’re interested in. The number is in KB, so divide by 1024 to get a rough idea of what to use with --mem (set it to something a little larger than that, since you’re defining a hard upper limit). If your job completed long ago, you may have to tell sacct to look further back in time by adding a start time with -S YYYY-MM-DD. Note that for parallel jobs spanning multiple nodes, this is the maximum memory used on any one node; if you’re not setting an even distribution of tasks per node (e.g. with --ntasks-per-node), the same job could have very different values when run at different times.&lt;br /&gt;
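&lt;br /&gt;
A minimal sketch combining these options (the job id and start date are placeholders):&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
# Peak memory (KB) and runtime per job step, searching from a given start date&lt;br /&gt;
sacct -S 2019-01-01 -j 123456 -o JobID,MaxRSS,Elapsed&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;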
&lt;br /&gt;
===number of tasks===&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --ntasks=1&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
sbatch does not launch tasks; it requests an allocation of resources and submits a batch script. This option advises the SLURM controller that job steps run within the allocation will launch at most this number of tasks, and to provide sufficient resources. The default is one task per node, but note that the --cpus-per-task option will change this default.&lt;br /&gt;
&lt;br /&gt;
When requesting multiple tasks, you may or may not want the job to be partitioned among multiple nodes. You can specify the minimum number of nodes using the &amp;lt;code&amp;gt;-N&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;--nodes&amp;lt;/code&amp;gt; flag. If you provide only one number, it serves as both the minimum and the maximum. For instance:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
This should force your job to be scheduled to a single node.&lt;br /&gt;
&lt;br /&gt;
Because the cluster has a hybrid configuration, i.e. normal and fat nodes, it may be prudent to schedule your job specifically for one or the other node type, depending for instance on memory requirements. This can be done by using the &amp;lt;code&amp;gt;-C&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;--constraint&amp;lt;/code&amp;gt; flag.&lt;br /&gt;
&lt;br /&gt;
===constraints: selecting by feature===&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --constraint=normalmem&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
The HPC nodes have features associated with them, such as Intel CPU&#039;s, or the amount of memory per node. If you know that your job requires a specific architecture or memory size, you can elect to constrain your job to only these features.&lt;br /&gt;
&lt;br /&gt;
The example above will result in jobs being scheduled to the regular compute nodes. By using &amp;lt;code&amp;gt;largemem&amp;lt;/code&amp;gt; as option the job will specifically be scheduled to one of the fat nodes. &lt;br /&gt;
&lt;br /&gt;
All features can be seen using:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
scontrol show nodes | grep ActiveFeatures | sort | uniq&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===requesting specific resources===&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --gres=gpu:1&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
In order to use specific hardware resources, you need to request a Generic Resource (GRES). Once you do this, one of the resources will be allocated to your job when it becomes available. In the above example, one GPU is requested.&lt;br /&gt;
&lt;br /&gt;
===output (stderr,stdout) directed to file===&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --output=output_%j.txt&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Instruct SLURM to connect the batch script&#039;s standard output directly to the file name specified in the &amp;quot;filename pattern&amp;quot;. By default both standard output and standard error are directed to a file of the name &amp;quot;slurm-%j.out&amp;quot;, where the &amp;quot;%j&amp;quot; is replaced with the job allocation number. See the --input option for filename specification options.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --error=error_output_%j.txt&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Instruct SLURM to connect the batch script&#039;s standard error directly to the file name specified in the &amp;quot;filename pattern&amp;quot;. By default both standard output and standard error are directed to a file of the name &amp;quot;slurm-%j.out&amp;quot;, where the &amp;quot;%j&amp;quot; is replaced with the job allocation number. See the --input option for filename specification options.&lt;br /&gt;
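&lt;br /&gt;
As a sketch combining patterns (many Slurm versions also support %x for the job name; check the sbatch man page for your version):&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --output=%x_%j.out&lt;br /&gt;
#SBATCH --error=%x_%j.err&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;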
&lt;br /&gt;
===adding a job name===&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --job-name=calc_pi.py&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Specify a name for the job allocation. The specified name will appear along with the job id number when querying running jobs on the system. The default is the name of the batch script, or just &amp;quot;sbatch&amp;quot; if the script is read on sbatch&#039;s standard input.&lt;br /&gt;
&lt;br /&gt;
===receiving mailed updates===&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --mail-type=ALL&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Notify user by email when certain event types occur. Valid type values are BEGIN, END, FAIL, REQUEUE, and ALL (any state change). The user to be notified is indicated with --mail-user.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --mail-user=yourname001@wur.nl&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Email address to use.&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
* [[Anunna | Anunna]]&lt;br /&gt;
* [[Using_Slurm#Batch_script | Submitting jobs to Slurm]]&lt;br /&gt;
* [[Array_jobs|Array job hints]]&lt;/div&gt;</summary>
		<author><name>Dawes001</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=Using_Slurm&amp;diff=2035</id>
		<title>Using Slurm</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=Using_Slurm&amp;diff=2035"/>
		<updated>2019-07-15T14:57:48Z</updated>

		<summary type="html">&lt;p&gt;Dawes001: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The resource allocation / scheduling software on Anunna is [http://en.wikipedia.org/wiki/Simple_Linux_Utility_for_Resource_Management SLURM]: &#039;&#039;&#039;S&#039;&#039;&#039;imple &#039;&#039;&#039;L&#039;&#039;&#039;inux &#039;&#039;&#039;U&#039;&#039;&#039;tility for &#039;&#039;&#039;R&#039;&#039;&#039;esource &#039;&#039;&#039;M&#039;&#039;&#039;anagement.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Queues and defaults ==&lt;br /&gt;
&lt;br /&gt;
=== Quality of Service ===&lt;br /&gt;
When submitting a job, you may optionally assign a different Quality of Service to it. You can do this with:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --qos=std&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
By default, jobs will use std, the standard quality.&lt;br /&gt;
&lt;br /&gt;
Optionally, you may elect to reduce the priority of your jobs to low. This comes with a limit on how long each job can run (8 hours), to prevent the cluster from being locked up entirely by low priority jobs.&lt;br /&gt;
&lt;br /&gt;
The high quality provides a higher priority to jobs (20) than std (10) or low (1). It is naturally more expensive.&lt;br /&gt;
&lt;br /&gt;
The highest priority goes to jobs in the interactive quality (100), but you may not submit many jobs, or many large jobs, in this quality. It is exclusively for jobs that run immediately and have a hands-on user behind them.&lt;br /&gt;
&lt;br /&gt;
Jobs may be restarted and rescheduled if a job with higher priority needs cluster resources, but as of right now, this is not occurring.&lt;br /&gt;
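&lt;br /&gt;
As a minimal sketch, a low-priority submission that stays within the 8-hour cap (using the qos names listed above):&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --qos=low&lt;br /&gt;
#SBATCH --time=08:00:00&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;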
&lt;br /&gt;
=== Queues ===&lt;br /&gt;
The cluster consists of multiple partitions of nodes that you can submit to. The primary one is &#039;main&#039;. There are other partitions as needed - current plans include &#039;gpu&#039;.&lt;br /&gt;
&lt;br /&gt;
You can see the partitions available with &amp;lt;code&amp;gt;sinfo&amp;lt;/code&amp;gt;:&lt;br /&gt;
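&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
sinfo&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;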
&lt;br /&gt;
=== Defaults ===&lt;br /&gt;
The default partition is &#039;main&#039;. This will work for most jobs.&lt;br /&gt;
&lt;br /&gt;
The default qos is &#039;std&#039;.&lt;br /&gt;
&lt;br /&gt;
The default cpu count is 1.&lt;br /&gt;
&lt;br /&gt;
The default run time for a job is &#039;&#039;&#039;1 hour&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
The default memory limit is &#039;&#039;&#039;100MB per node&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Submitting jobs: sbatch ==&lt;br /&gt;
&lt;br /&gt;
=== Example ===&lt;br /&gt;
Consider this simple python3 script that calculates Pi to 1 million digits using the Bailey-Borwein-Plouffe formula:&lt;br /&gt;
&amp;lt;source lang=&#039;python&#039;&amp;gt;&lt;br /&gt;
from decimal import Decimal, getcontext&lt;br /&gt;
D=Decimal&lt;br /&gt;
digits=1000000&lt;br /&gt;
getcontext().prec=digits+10  # working precision plus a few guard digits&lt;br /&gt;
# BBP formula: each term adds about log10(16) ~ 1.2 decimal digits&lt;br /&gt;
p=sum(D(1)/16**k*(D(4)/(8*k+1)-D(2)/(8*k+4)-D(1)/(8*k+5)-D(1)/(8*k+6)) for k in range(int(digits/1.2)+1))&lt;br /&gt;
print(str(p)[:digits+2])  # the leading 3. plus the first million digits&lt;br /&gt;
&amp;lt;/source&amp;gt; &lt;br /&gt;
&lt;br /&gt;
=== Loading modules ===&lt;br /&gt;
In order for this script to run, the first thing that is needed is that Python 3, which is not the default Python version on the cluster, is loaded into your environment. The availability of (different versions of) software can be checked with the following command:&lt;br /&gt;
  module avail&lt;br /&gt;
&lt;br /&gt;
In the list you should see that python3 is indeed available; it can then be loaded with the following command:&lt;br /&gt;
  module load python/3.3.3&lt;br /&gt;
&lt;br /&gt;
=== Batch script ===&lt;br /&gt;
[[Creating_sbatch_script | Main Article: Creating a sbatch script]]&lt;br /&gt;
&lt;br /&gt;
The following shell/slurm script can then be used to schedule the job using the sbatch command:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --comment=773320000&lt;br /&gt;
#SBATCH --time=1200&lt;br /&gt;
#SBATCH --mem=2048&lt;br /&gt;
#SBATCH --ntasks=1&lt;br /&gt;
#SBATCH --output=output_%j.txt&lt;br /&gt;
#SBATCH --error=error_output_%j.txt&lt;br /&gt;
#SBATCH --job-name=calc_pi.py&lt;br /&gt;
#SBATCH --mail-type=ALL&lt;br /&gt;
#SBATCH --mail-user=email@org.nl&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
time python3 calc_pi.py&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Submitting ===&lt;br /&gt;
The script, assuming it was named &#039;run_calc_pi.sh&#039;, can then be submitted using the following command:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
sbatch run_calc_pi.sh&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Submitting multiple jobs (simple) ===&lt;br /&gt;
Assuming there are 10 job scripts, named runscript_1.sh through runscript_10.sh, all of them can be submitted using the following line of shell code:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;for i in `seq 1 10`; do echo $i; sbatch runscript_$i.sh;done&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Submitting multiple jobs (complex) ===&lt;br /&gt;
Let&#039;s say you have three job scripts that depend on each other:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;job_1.sh #A simple initialisation script&amp;lt;/source&amp;gt;&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;job_2.sh #An array task&amp;lt;/source&amp;gt;&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;job_3.sh #Some finishing script, single run, after everything previous has finished&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can create a script to simultaneously submit each job with a dependency on each other:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;#!/bin/bash&lt;br /&gt;
JOB1=$(sbatch job_1.sh| rev | cut -d &#039; &#039; -f 1 | rev) #Get me the last space-separated element&lt;br /&gt;
&lt;br /&gt;
if ! [ &amp;quot;z$JOB1&amp;quot; == &amp;quot;z&amp;quot; ] ; then&lt;br /&gt;
  echo &amp;quot;First job submitted as jobid $JOB1&amp;quot;&lt;br /&gt;
  JOB2=$(sbatch --dependency=afterany:$JOB1 job_2.sh | rev | cut -d &#039; &#039; -f 1 | rev)&lt;br /&gt;
&lt;br /&gt;
  if ! [ &amp;quot;z$JOB2&amp;quot; == &amp;quot;z&amp;quot; ] ; then&lt;br /&gt;
    echo &amp;quot;Second job submitted as jobid $JOB2, following $JOB1&amp;quot;&lt;br /&gt;
    JOB3=$(sbatch --dependency=afterany:$JOB2 job_3.sh | rev | cut -d &#039; &#039; -f 1 | rev)&lt;br /&gt;
&lt;br /&gt;
    if ! [ &amp;quot;z$JOB3&amp;quot; == &amp;quot;z&amp;quot; ] ; then&lt;br /&gt;
      echo &amp;quot;Third job submitted as jobid $JOB3, following after every element of $JOB2&amp;quot;&lt;br /&gt;
    fi&lt;br /&gt;
  fi&lt;br /&gt;
fi&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This will ensure that each subsequent job starts only after the previous one has finished in any state (even if it failed).&lt;br /&gt;
&lt;br /&gt;
Please see [https://slurm.schedmd.com/sbatch.html#OPT_dependency the sbatch documentation] for other options available to you. Note that aftercorr makes a subsequent array job&#039;s elements start after the correspondingly numbered elements of the previous array job have completed.&lt;br /&gt;
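&lt;br /&gt;
For instance, a sketch reusing $JOB2 from the script above, where job_4.sh is a hypothetical follow-up array job whose elements each start once the matching element of $JOB2 has finished:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
sbatch --dependency=aftercorr:$JOB2 job_4.sh&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;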
&lt;br /&gt;
=== Submitting array jobs ===&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --array=0-10%4&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
SLURM allows you to submit multiple jobs using the same template. Further information about this can be found [[Array_jobs|here]].&lt;br /&gt;
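&lt;br /&gt;
As a minimal sketch, each array element can pick its own input via the SLURM_ARRAY_TASK_ID environment variable; 0-10%4 means elements 0 through 10 with at most four running at once (the input files and the process command are placeholders):&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --array=0-10%4&lt;br /&gt;
./process input_${SLURM_ARRAY_TASK_ID}.txt&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;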
&lt;br /&gt;
=== Using /tmp ===&lt;br /&gt;
Attached to each node there is a local disk of ~300G that can be used to temporarily stage some of your workload. This is free to use, but please remember to clean up your data after usage.&lt;br /&gt;
&lt;br /&gt;
In order to be sure that you&#039;re able to use space in /tmp, you can add&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --tmp=&amp;lt;required size&amp;gt;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
to your sbatch script. This will prevent your job from being run on nodes where there is no free space, or where the space is already claimed by another job.&lt;br /&gt;
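&lt;br /&gt;
For example, to ask for 100 GB of local scratch space (plain numbers are interpreted as MB; K, M, G and T suffixes are accepted):&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --tmp=100G&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;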
&lt;br /&gt;
=== Using GPU ===&lt;br /&gt;
There are two GPU nodes. In order to run a job that uses a GPU on one of these nodes, you can add &lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --reservation=&#039;GPU&#039;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
to your sbatch script. Without this parameter, your job won&#039;t run on one of these nodes.&lt;br /&gt;
&lt;br /&gt;
THIS IS PRONE TO CHANGE SHORTLY! [[User:Dawes001|Dawes001]] ([[User talk:Dawes001|talk]]) 14:57, 15 July 2019 (UTC)&lt;br /&gt;
&lt;br /&gt;
== Monitoring submitted jobs ==&lt;br /&gt;
Once a job is submitted, the status can be monitored using the &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; command. The &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; command has a number of parameters for monitoring specific properties of the jobs such as time limit.&lt;br /&gt;
&lt;br /&gt;
=== Generic monitoring of all running jobs ===&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
  squeue&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You should then get a list of jobs that are running at that time on the cluster. For the example on how to submit using the &#039;sbatch&#039; command, it may look like this:&lt;br /&gt;
    JOBID PARTITION     NAME     USER  ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
   3396      ABGC BOV-WUR- megen002   R      27:26      1 node004&lt;br /&gt;
   3397      ABGC BOV-WUR- megen002   R      27:26      1 node005&lt;br /&gt;
   3398      ABGC BOV-WUR- megen002   R      27:26      1 node006&lt;br /&gt;
   3399      ABGC BOV-WUR- megen002   R      27:26      1 node007&lt;br /&gt;
   3400      ABGC BOV-WUR- megen002   R      27:26      1 node008&lt;br /&gt;
   3401      ABGC BOV-WUR- megen002   R      27:26      1 node009&lt;br /&gt;
   3385  research BOV-WUR- megen002   R      44:38      1 node049&lt;br /&gt;
   3386  research BOV-WUR- megen002   R      44:38      1 node050&lt;br /&gt;
   3387  research BOV-WUR- megen002   R      44:38      1 node051&lt;br /&gt;
   3388  research BOV-WUR- megen002   R      44:38      1 node052&lt;br /&gt;
   3389  research BOV-WUR- megen002   R      44:38      1 node053&lt;br /&gt;
   3390  research BOV-WUR- megen002   R      44:38      1 node054&lt;br /&gt;
   3391  research BOV-WUR- megen002   R      44:38      3 node[049-051]&lt;br /&gt;
   3392  research BOV-WUR- megen002   R      44:38      3 node[052-054]&lt;br /&gt;
   3393  research BOV-WUR- megen002   R      44:38      1 node001&lt;br /&gt;
   3394  research BOV-WUR- megen002   R      44:38      1 node002&lt;br /&gt;
   3395  research BOV-WUR- megen002   R      44:38      1 node003&lt;br /&gt;
&lt;br /&gt;
=== Monitoring time limit set for a specific job ===&lt;br /&gt;
The default time limit is set at one hour, so estimated run times need to be specified when submitting jobs. The time limit set for a certain job can be inspected with the &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; command.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
squeue -l -j 3532&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Information similar to the following should appear:&lt;br /&gt;
  Fri Nov 29 15:41:00 2013&lt;br /&gt;
   JOBID PARTITION     NAME     USER    STATE       TIME TIMELIMIT  NODES NODELIST(REASON)&lt;br /&gt;
   3532      ABGC BOV-WUR- megen002  RUNNING    2:47:03 3-08:00:00      1 node054&lt;br /&gt;
&lt;br /&gt;
=== Query a specific active job: scontrol ===&lt;br /&gt;
Show all the details of a currently active job, so not a completed job.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
login ~]$ scontrol show jobid 4241&lt;br /&gt;
JobId=4241 Name=WB20F06&lt;br /&gt;
   UserId=megen002(16795409) GroupId=domain users(16777729)&lt;br /&gt;
   Priority=1 Account=(null) QOS=normal&lt;br /&gt;
   JobState=RUNNING Reason=None Dependency=(null)&lt;br /&gt;
   Requeue=1 Restarts=0 BatchFlag=1 ExitCode=0:0&lt;br /&gt;
   RunTime=02:55:25 TimeLimit=3-08:00:00 TimeMin=N/A&lt;br /&gt;
   SubmitTime=2013-12-09T13:37:29 EligibleTime=2013-12-09T13:37:29&lt;br /&gt;
   StartTime=2013-12-09T13:37:29 EndTime=2013-12-12T21:37:29&lt;br /&gt;
   PreemptTime=None SuspendTime=None SecsPreSuspend=0&lt;br /&gt;
   Partition=research AllocNode:Sid=login0:21799&lt;br /&gt;
   ReqNodeList=(null) ExcNodeList=(null)&lt;br /&gt;
   NodeList=node023&lt;br /&gt;
   BatchHost=node023&lt;br /&gt;
   NumNodes=1 NumCPUs=4 CPUs/Task=1 ReqS:C:T=*:*:*&lt;br /&gt;
   MinCPUsNode=1 MinMemoryNode=0 MinTmpDiskNode=0&lt;br /&gt;
   Features=(null) Gres=(null) Reservation=(null)&lt;br /&gt;
   Shared=OK Contiguous=0 Licenses=(null) Network=(null)&lt;br /&gt;
   Command=/lustre/scratch/WUR/ABGC/...&lt;br /&gt;
   WorkDir=/lustre/scratch/WUR/ABGC/...&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Check on a pending job ===&lt;br /&gt;
A submitted job could end up in a pending state when there are not enough resources available for this job.&lt;br /&gt;
In this example I submit a job, check the status and, after finding out it is &#039;&#039;&#039;pending&#039;&#039;&#039;, check when it will probably start.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
[@login jobs]$ sbatch hpl_student.job&lt;br /&gt;
 Submitted batch job 740338&lt;br /&gt;
&lt;br /&gt;
[@login jobs]$ squeue -l -j 740338&lt;br /&gt;
 Fri Feb 21 15:32:31 2014&lt;br /&gt;
  JOBID PARTITION     NAME     USER    STATE       TIME TIMELIMIT  NODES NODELIST(REASON)&lt;br /&gt;
 740338 ABGC_Stud HPLstude bohme999  PENDING       0:00 1-00:00:00      1 (ReqNodeNotAvail)&lt;br /&gt;
&lt;br /&gt;
[@login jobs]$ squeue --start -j 740338&lt;br /&gt;
  JOBID PARTITION     NAME     USER  ST           START_TIME  NODES NODELIST(REASON)&lt;br /&gt;
 740338 ABGC_Stud HPLstude bohme999  PD  2014-02-22T15:31:48      1 (ReqNodeNotAvail)&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
So it seems this job will probably start the next day, but that&#039;s no guarantee it will indeed start.&lt;br /&gt;
&lt;br /&gt;
== Removing jobs from a list: scancel ==&lt;br /&gt;
If for some reason you want to delete a job that is either in the queue or already running, you can remove it using the &#039;scancel&#039; command. The &#039;scancel&#039; command takes the jobid as a parameter. For the example above, this would be done using the following code:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
scancel 3401&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
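&lt;br /&gt;
To cancel all of your own jobs at once, scancel can filter by user:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
scancel -u $USER&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;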
&lt;br /&gt;
== Allocating resources interactively: sinteractive ==&lt;br /&gt;
sinteractive is a tiny wrapper around srun to create interactive jobs quickly and easily. It allows you to get a shell on one of the nodes, with limits similar to those of a normal job. To use it, simply run:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
sinteractive -c &amp;lt;num_cpus&amp;gt; --mem &amp;lt;amount_mem&amp;gt; --time &amp;lt;minutes&amp;gt; -p &amp;lt;partition&amp;gt;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
You will then be presented with a new shell prompt on one of the compute nodes (run &#039;hostname&#039; to see which!). From here, you can test out code in an interactive fashion as needs be.&lt;br /&gt;
&lt;br /&gt;
Be advised though - not filling in the above fields will get you a shell with 1 CPU and 100MB of RAM for 1 hour. This is useful for quick testing, however.&lt;br /&gt;
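&lt;br /&gt;
For example, a sketch requesting 2 CPUs, 4 GB of RAM and two hours in the main partition:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
sinteractive -c 2 --mem 4096 --time 120 -p main&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;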
&lt;br /&gt;
=== sinteractive source ===&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
# -I60: give up if no allocation within 60 seconds; one node, one task, interactive bash on a pseudo-terminal&lt;br /&gt;
srun &amp;quot;$@&amp;quot; -I60 -N 1 -n 1 --pty bash -i&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== interactive Slurm - using salloc ===&lt;br /&gt;
If you don&#039;t want your shell to be moved to a compute node, but instead want a new shell inside an allocation, do:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
salloc -p ABGC_Low $SHELL&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Now your shell will stay on the login node, but you can do:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
srun &amp;lt;command&amp;gt; &amp;amp;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
to submit tasks to this allocation!&lt;br /&gt;
&lt;br /&gt;
Be aware that the default time limit of salloc is 1 hour. If you intend to run jobs for longer than this, you need to adjust the settings accordingly. See: https://computing.llnl.gov/linux/slurm/salloc.html&lt;br /&gt;
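&lt;br /&gt;
For example, a sketch of a four-hour allocation (reusing the partition from above):&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
salloc -p ABGC_Low --time=04:00:00 $SHELL&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;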
&lt;br /&gt;
== Get overview of past and current jobs: sacct ==&lt;br /&gt;
To do some accounting on past and present jobs, and to see whether they ran to completion, you can do:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
sacct&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
This should provide information similar to the following:&lt;br /&gt;
&lt;br /&gt;
         JobID    JobName  Partition    Account  AllocCPUS      State ExitCode &lt;br /&gt;
  ------------ ---------- ---------- ---------- ---------- ---------- -------- &lt;br /&gt;
  3385         BOV-WUR-58   research                    12  COMPLETED      0:0 &lt;br /&gt;
  3385.batch        batch                                1  COMPLETED      0:0 &lt;br /&gt;
  3386         BOV-WUR-59   research                    12 CANCELLED+      0:0 &lt;br /&gt;
  3386.batch        batch                                1  CANCELLED     0:15 &lt;br /&gt;
  3528         BOV-WUR-59       ABGC                    16    RUNNING      0:0 &lt;br /&gt;
  3529         BOV-WUR-60       ABGC                    16    RUNNING      0:0&lt;br /&gt;
&lt;br /&gt;
Or in more detail for a specific job:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
sacct --format=jobid,jobname,comment,partition,ntasks,alloccpus,elapsed,state,exitcode -j 4220&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
This should provide information about job id 4220:&lt;br /&gt;
&lt;br /&gt;
       JobID    JobName    Comment   Partition   NTasks  AllocCPUS    Elapsed      State ExitCode &lt;br /&gt;
  ------------ ---------- ---------- ---------- -------- ---------- ---------- ---------- -------- &lt;br /&gt;
  4220         PreProces+              research                   3   00:30:52  COMPLETED      0:0 &lt;br /&gt;
  4220.batch        batch                              1          1   00:30:52  COMPLETED      0:0&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Job Status Codes&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Typically your job will be in either the Running or PenDing state. However, here is a breakdown of all the states that your job could be in.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!Code!!State!!Description&lt;br /&gt;
|-&lt;br /&gt;
|CA	||CANCELLED||	Job was explicitly cancelled by the user or system administrator. The job may or may not have been initiated.&lt;br /&gt;
|-&lt;br /&gt;
|CD||	COMPLETED||	Job has terminated all processes on all nodes.&lt;br /&gt;
|-&lt;br /&gt;
|CF||	CONFIGURING||	Job has been allocated resources, but are waiting for them to become ready for use (e.g. booting).&lt;br /&gt;
|-&lt;br /&gt;
|CG||	COMPLETING||	Job is in the process of completing. Some processes on some nodes may still be active.&lt;br /&gt;
|-&lt;br /&gt;
|F||	FAILED||	Job terminated with non-zero exit code or other failure condition.&lt;br /&gt;
|-&lt;br /&gt;
|NF||	NODE_FAIL||	Job terminated due to failure of one or more allocated nodes.&lt;br /&gt;
|-&lt;br /&gt;
|PD||	PENDING||	Job is awaiting resource allocation.&lt;br /&gt;
|-&lt;br /&gt;
|R||	RUNNING||	Job currently has an allocation.&lt;br /&gt;
|-&lt;br /&gt;
|S||	SUSPENDED||	Job has an allocation, but execution has been suspended.&lt;br /&gt;
|-&lt;br /&gt;
|TO||	TIMEOUT||	Job terminated upon reaching its time limit.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Running MPI jobs on Anunna ==&lt;br /&gt;
&lt;br /&gt;
[[MPI_on_B4F_cluster | Main article: MPI on Anunna]]&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
* [[Tariffs | Costs associated with resource usage]]&lt;br /&gt;
* [[B4F_cluster | Anunna]]&lt;br /&gt;
* [[BCM_on_B4F_cluster | BCM on Anunna]]&lt;br /&gt;
* [[SLURM_Compare | SLURM compared to other common schedulers]]&lt;br /&gt;
* [[Setting_up_Python_virtualenv | Setting up and using a virtual environment for Python3 ]]&lt;br /&gt;
&lt;br /&gt;
== External links ==&lt;br /&gt;
* [http://slurm.schedmd.com Slurm official documentation]&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Simple_Linux_Utility_for_Resource_Management Slurm on Wikipedia]&lt;br /&gt;
* [http://www.youtube.com/watch?v=axWffyrk3aY Slurm Tutorial on Youtube]&lt;/div&gt;</summary>
		<author><name>Dawes001</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=Courses&amp;diff=2033</id>
		<title>Courses</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=Courses&amp;diff=2033"/>
		<updated>2019-06-27T08:29:50Z</updated>

		<summary type="html">&lt;p&gt;Dawes001: /* HPC CUDA/AI Course - 2019-06-21 */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Upcoming: HPC Basic Course - 2019-06-28==&lt;br /&gt;
&lt;br /&gt;
A course for beginners will be organised on the 28th of June, aiming to help absolute beginners to begin to use the main job scheduler, SLURM. You can register [https://oneschool.wur.nl/Lists/Cursus/DispForm.aspx?ID=100 here].&lt;br /&gt;
&lt;br /&gt;
== Upcoming Linux Basic Course - 2019-06-27 ==&lt;br /&gt;
&lt;br /&gt;
A course for beginners will be organised on the 27th of June, to help beginner Linux users gain some skills in using Linux. You can register for this course [https://www.wur.nl/en/activity/Linux-basic-course-on-13-June-2019.htm here]&lt;br /&gt;
&lt;br /&gt;
== HPC CUDA/AI Course - 2019-06-21 ==&lt;br /&gt;
&lt;br /&gt;
A course for users interested in deep learning and neural networks, combined with some low-level manipulation of graphics cards, was given by Dell on the 21st of June.&lt;br /&gt;
&lt;br /&gt;
[[File:WUR_CUDA_210619.pdf|WUR CUDA Course]]&lt;br /&gt;
[[File:WUR_AI_101_210619.pdf|WUR AI Course 101]]&lt;br /&gt;
[[File:WUR_AI_201_210619.pdf|WUR AI Course 201]]&lt;br /&gt;
&lt;br /&gt;
[[File:WUR_CUDA_2_210619.pdf|WUR CUDA Course]]&lt;br /&gt;
[[File:WUR_Deep_Learning_Frameworks_210619.pdf|WUR Deep Learning Frameworks Primer]]&lt;br /&gt;
[[File:WUR_Deep_Learning_Lab_210619.pdf|WUR Deep Learning Lab]]&lt;br /&gt;
&lt;br /&gt;
== HPC Advanced Course - 2019-05-28 ==&lt;br /&gt;
&lt;br /&gt;
A course for experienced users was organised on the 28th of May, aiming to brush up users on techniques for submitting unusual jobs, and help provide some more helpful hints and techniques.&lt;br /&gt;
&lt;br /&gt;
[[File:HPC_advanced_course_20190506.pdf|Advanced Course 1]]&lt;br /&gt;
[[File:HPC_advanced_slides_20190528.pdf|Advanced Course 2]]&lt;br /&gt;
&lt;br /&gt;
== HPC Basic Course - 2019-05-07 ==&lt;br /&gt;
&lt;br /&gt;
A course for beginners was organised on the 7th of May, aiming to help absolute beginners to begin to use the main job scheduler, SLURM.&lt;br /&gt;
&lt;br /&gt;
[[File:HPC basic course 20190506.pdf|Basic Course]]&lt;br /&gt;
&lt;br /&gt;
== Linux Basic Course - 2019-04-16 ==&lt;br /&gt;
&lt;br /&gt;
A course for beginners was organised on the 16th of April, to help beginner Linux users gain some skills in using Linux.&lt;br /&gt;
&lt;br /&gt;
== HPC Advanced Course - 2018-10-16 ==&lt;br /&gt;
&lt;br /&gt;
A course for experienced users was organised on the 16th of October, aiming to brush up users on techniques for submitting unusual jobs, and help provide some more helpful hints and techniques.&lt;br /&gt;
&lt;br /&gt;
[[File:HPC_Advanced_Slides_20181016.pdf|Advanced Course (Gwen)]]&lt;br /&gt;
&lt;br /&gt;
[[File:HPC_advanced_course_20181008.pdf|Advanced Course (Jeremie)]]&lt;br /&gt;
&lt;br /&gt;
== HPC Basic Course - 2018-10-11 ==&lt;br /&gt;
&lt;br /&gt;
A course for beginners was organised on the 11th of October, aiming to help absolute beginners to begin to use the main job scheduler, SLURM.&lt;br /&gt;
&lt;br /&gt;
[[File:HPC_basic_course_20181008.pdf|Basic Course]]&lt;br /&gt;
&lt;br /&gt;
== Basic Linux Course - 2018-10-02 ==&lt;br /&gt;
&lt;br /&gt;
A course on basic Linux usage was organised on the 2nd of October, to help beginner Linux users gain some skills in using Linux.&lt;br /&gt;
&lt;br /&gt;
[https://etherpad.lug.wur.nl/p/UpkF2KXDVh]&lt;br /&gt;
&lt;br /&gt;
== HPC Advanced Course - 2018-05-18 ==&lt;br /&gt;
&lt;br /&gt;
A course for experienced users was organised on the 18th of May, aiming to brush up users on techniques for submitting unusual jobs, and help provide some more helpful hints and techniques.&lt;br /&gt;
&lt;br /&gt;
[[File:HPC_Advanced_20180518-GD.pdf|Advanced Course (Gwen)]]&lt;br /&gt;
&lt;br /&gt;
== HPC Basic Course - 2018-05-17 ==&lt;br /&gt;
&lt;br /&gt;
A course for beginners was organised on the 17th of May, aiming to help absolute beginners to begin to use the main job scheduler, SLURM.&lt;br /&gt;
&lt;br /&gt;
== Basic Linux Course - 2018-04-19 ==&lt;br /&gt;
&lt;br /&gt;
A course on basic Linux usage was organised on the 19th of April, to help beginner Linux users gain some skills in using Linux.&lt;br /&gt;
&lt;br /&gt;
== HPC Advanced Course - 2017-11-09 ==&lt;br /&gt;
&lt;br /&gt;
A course for experienced users was organised on the 9th of November, aiming to brush up users on techniques for submitting unusual jobs, and help provide some more helpful hints and techniques.&lt;br /&gt;
&lt;br /&gt;
[[File:HPC_Advanced_course_2017-11-08-JV.pdf|Advanced Course (Jeremie)]]&lt;br /&gt;
&lt;br /&gt;
[[File:HPC_Advanced_course_2017-11-08-GD.pdf|Advanced Course (Gwen)]]&lt;br /&gt;
&lt;br /&gt;
[[File:Checkpointing_2017-11-08.pdf|Checkpointing]]&lt;br /&gt;
&lt;br /&gt;
== HPC Basic Course - 2017-10-30 ==&lt;br /&gt;
&lt;br /&gt;
A course for beginners was organised on the 30th of October, aiming to help absolute beginners to enhance their ability to use the main job scheduler, SLURM.&lt;br /&gt;
&lt;br /&gt;
The slides for this course can be found here:&lt;br /&gt;
&lt;br /&gt;
[[File:HPC_basic_course_20171025.pdf | Basic introduction to Linux]]&lt;br /&gt;
&lt;br /&gt;
== HPC Teaching - 2017-06-07 ==&lt;br /&gt;
&lt;br /&gt;
A course was organised on the 7th of June, aiming to help absolute beginners (and moderately experienced users) to enhance their ability to use the main job scheduler, SLURM.&lt;br /&gt;
&lt;br /&gt;
The slides for this course can be found here:&lt;br /&gt;
&lt;br /&gt;
[[File:Connecting_with_Secure_Shell_to_the_HPC_20170606.pdf | Basic introduction to Linux]]&lt;br /&gt;
&lt;br /&gt;
[[File:Submitting_and_monitoring_jobs_on_the_HPC_20170602.pdf | Submitting and Monitoring Jobs]]&lt;br /&gt;
&lt;br /&gt;
== Old Courses ==&lt;br /&gt;
* [http://www.basgen.nl/sdac/ Sequence Data Analysis Course (Dec. 2012)]&lt;/div&gt;</summary>
		<author><name>Dawes001</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=File:WUR_Deep_Learning_Lab_210619.pdf&amp;diff=2032</id>
		<title>File:WUR Deep Learning Lab 210619.pdf</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=File:WUR_Deep_Learning_Lab_210619.pdf&amp;diff=2032"/>
		<updated>2019-06-27T08:28:39Z</updated>

		<summary type="html">&lt;p&gt;Dawes001: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Dawes001</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=File:WUR_Deep_Learning_Frameworks_210619.pdf&amp;diff=2031</id>
		<title>File:WUR Deep Learning Frameworks 210619.pdf</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=File:WUR_Deep_Learning_Frameworks_210619.pdf&amp;diff=2031"/>
		<updated>2019-06-27T08:28:23Z</updated>

		<summary type="html">&lt;p&gt;Dawes001: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Dawes001</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=File:WUR_CUDA_2_210619.pdf&amp;diff=2030</id>
		<title>File:WUR CUDA 2 210619.pdf</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=File:WUR_CUDA_2_210619.pdf&amp;diff=2030"/>
		<updated>2019-06-27T08:27:57Z</updated>

		<summary type="html">&lt;p&gt;Dawes001: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Dawes001</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=Courses&amp;diff=2029</id>
		<title>Courses</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=Courses&amp;diff=2029"/>
		<updated>2019-06-24T16:04:26Z</updated>

		<summary type="html">&lt;p&gt;Dawes001: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Upcoming: HPC Basic Course - 2019-06-28==&lt;br /&gt;
&lt;br /&gt;
A course for beginners will be organised on the 28th of June, aiming to help absolute beginners to begin to use the main job scheduler, SLURM. You can register [https://oneschool.wur.nl/Lists/Cursus/DispForm.aspx?ID=100 here].&lt;br /&gt;
&lt;br /&gt;
== Upcoming Linux Basic Course - 2019-06-27 ==&lt;br /&gt;
&lt;br /&gt;
A course for beginners will be organised on the 27th of June, to help beginner Linux users gain some skills in using Linux. You can register for this course [https://www.wur.nl/en/activity/Linux-basic-course-on-13-June-2019.htm here]&lt;br /&gt;
&lt;br /&gt;
== HPC CUDA/AI Course - 2019-06-21 ==&lt;br /&gt;
&lt;br /&gt;
A course for users interested in deep learning and neural networks, combined with some low-level manipulation of graphics cards, was given by Dell on the 21st of June.&lt;br /&gt;
&lt;br /&gt;
[[File:WUR_CUDA_210619.pdf|WUR CUDA Course]]&lt;br /&gt;
[[File:WUR_AI_101_210619.pdf|WUR AI Course 101]]&lt;br /&gt;
[[File:WUR_AI_201_210619.pdf|WUR AI Course 201]]&lt;br /&gt;
&lt;br /&gt;
== HPC Advanced Course - 2019-05-28 ==&lt;br /&gt;
&lt;br /&gt;
A course for experienced users was organised on the 28th of May, aiming to brush up users on techniques for submitting unusual jobs, and help provide some more helpful hints and techniques.&lt;br /&gt;
&lt;br /&gt;
[[File:HPC_advanced_course_20190506.pdf|Advanced Course 1]]&lt;br /&gt;
[[File:HPC_advanced_slides_20190528.pdf|Advanced Course 2]]&lt;br /&gt;
&lt;br /&gt;
== HPC Basic Course - 2019-05-07 ==&lt;br /&gt;
&lt;br /&gt;
A course for beginners was organised on the 7th of May, aiming to help absolute beginners to begin to use the main job scheduler, SLURM.&lt;br /&gt;
&lt;br /&gt;
[[File:HPC basic course 20190506.pdf|Basic Course]]&lt;br /&gt;
&lt;br /&gt;
== Linux Basic Course - 2019-04-16 ==&lt;br /&gt;
&lt;br /&gt;
A course for beginners was organised on the 16th of April, to help beginner Linux users gain some skills in using Linux.&lt;br /&gt;
&lt;br /&gt;
== HPC Advanced Course - 2018-10-16 ==&lt;br /&gt;
&lt;br /&gt;
A course for experienced users was organised on the 16th of October, aiming to brush up users on techniques for submitting unusual jobs, and help provide some more helpful hints and techniques.&lt;br /&gt;
&lt;br /&gt;
[[File:HPC_Advanced_Slides_20181016.pdf|Advanced Course (Gwen)]]&lt;br /&gt;
&lt;br /&gt;
[[File:HPC_advanced_course_20181008.pdf|Advanced Course (Jeremie)]]&lt;br /&gt;
&lt;br /&gt;
== HPC Basic Course - 2018-10-11 ==&lt;br /&gt;
&lt;br /&gt;
A course for beginners was organised on the 11th of October, aiming to help absolute beginners to begin to use the main job scheduler, SLURM.&lt;br /&gt;
&lt;br /&gt;
[[File:HPC_basic_course_20181008.pdf|Basic Course]]&lt;br /&gt;
&lt;br /&gt;
== Basic Linux Course - 2018-10-02 ==&lt;br /&gt;
&lt;br /&gt;
A course on basic Linux usage was organised on the 2nd of October, to help beginner Linux users gain some skills in using Linux.&lt;br /&gt;
&lt;br /&gt;
[https://etherpad.lug.wur.nl/p/UpkF2KXDVh]&lt;br /&gt;
&lt;br /&gt;
== HPC Advanced Course - 2018-05-18 ==&lt;br /&gt;
&lt;br /&gt;
A course for experienced users was organised on the 18th of May, aiming to brush up users on techniques for submitting unusual jobs, and help provide some more helpful hints and techniques.&lt;br /&gt;
&lt;br /&gt;
[[File:HPC_Advanced_20180518-GD.pdf|Advanced Course (Gwen)]]&lt;br /&gt;
&lt;br /&gt;
== HPC Basic Course - 2018-05-17 ==&lt;br /&gt;
&lt;br /&gt;
A course for beginners was organised on the 17th of May, aiming to help absolute beginners to begin to use the main job scheduler, SLURM.&lt;br /&gt;
&lt;br /&gt;
== Basic Linux Course - 2018-04-19 ==&lt;br /&gt;
&lt;br /&gt;
A course on basic Linux usage was organised on the 19th of April, to help beginner Linux users gain some skills in using Linux.&lt;br /&gt;
&lt;br /&gt;
== HPC Advanced Course - 2017-11-09 ==&lt;br /&gt;
&lt;br /&gt;
A course for experienced users was organised on the 9th of November, aiming to brush up users on techniques for submitting unusual jobs, and help provide some more helpful hints and techniques.&lt;br /&gt;
&lt;br /&gt;
[[File:HPC_Advanced_course_2017-11-08-JV.pdf|Advanced Course (Jeremie)]]&lt;br /&gt;
&lt;br /&gt;
[[File:HPC_Advanced_course_2017-11-08-GD.pdf|Advanced Course (Gwen)]]&lt;br /&gt;
&lt;br /&gt;
[[File:Checkpointing_2017-11-08.pdf|Checkpointing]]&lt;br /&gt;
&lt;br /&gt;
== HPC Basic Course - 2017-10-30 ==&lt;br /&gt;
&lt;br /&gt;
A course for beginners was organised on the 30th of October, aiming to help absolute beginners to enhance their ability to use the main job scheduler, SLURM.&lt;br /&gt;
&lt;br /&gt;
The slides for this course can be found here:&lt;br /&gt;
&lt;br /&gt;
[[File:HPC_basic_course_20171025.pdf | Basic introduction to Linux]]&lt;br /&gt;
&lt;br /&gt;
== HPC Teaching - 2017-06-07 ==&lt;br /&gt;
&lt;br /&gt;
A course was organised on the 7th of June, aiming to help absolute beginners (and moderately experienced users) to enhance their ability to use the main job scheduler, SLURM.&lt;br /&gt;
&lt;br /&gt;
The slides for this course can be found here:&lt;br /&gt;
&lt;br /&gt;
[[File:Connecting_with_Secure_Shell_to_the_HPC_20170606.pdf | Basic introduction to Linux]]&lt;br /&gt;
&lt;br /&gt;
[[File:Submitting_and_monitoring_jobs_on_the_HPC_20170602.pdf | Submitting and Monitoring Jobs]]&lt;br /&gt;
&lt;br /&gt;
== Old Courses ==&lt;br /&gt;
* [http://www.basgen.nl/sdac/ Sequence Data Analysis Course (Dec. 2012)]&lt;/div&gt;</summary>
		<author><name>Dawes001</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=File:WUR_CUDA_210619.pdf&amp;diff=2028</id>
		<title>File:WUR CUDA 210619.pdf</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=File:WUR_CUDA_210619.pdf&amp;diff=2028"/>
		<updated>2019-06-24T16:01:56Z</updated>

		<summary type="html">&lt;p&gt;Dawes001: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Dawes001</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=File:WUR_AI_101_210619.pdf&amp;diff=2027</id>
		<title>File:WUR AI 101 210619.pdf</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=File:WUR_AI_101_210619.pdf&amp;diff=2027"/>
		<updated>2019-06-24T16:01:22Z</updated>

		<summary type="html">&lt;p&gt;Dawes001: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Dawes001</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=File:WUR_AI_201_210619.pdf&amp;diff=2026</id>
		<title>File:WUR AI 201 210619.pdf</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=File:WUR_AI_201_210619.pdf&amp;diff=2026"/>
		<updated>2019-06-24T16:00:59Z</updated>

		<summary type="html">&lt;p&gt;Dawes001: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Dawes001</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=File:Anunna_Flyer_2019.svg&amp;diff=2013</id>
		<title>File:Anunna Flyer 2019.svg</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=File:Anunna_Flyer_2019.svg&amp;diff=2013"/>
		<updated>2019-05-06T08:48:54Z</updated>

		<summary type="html">&lt;p&gt;Dawes001: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Dawes001</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=Tariffs&amp;diff=2005</id>
		<title>Tariffs</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=Tariffs&amp;diff=2005"/>
		<updated>2019-04-23T08:30:08Z</updated>

		<summary type="html">&lt;p&gt;Dawes001: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Computing: Calculations (cores)==&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Queue&lt;br /&gt;
!CPU core hour&lt;br /&gt;
!GB memory hour&lt;br /&gt;
|-&lt;br /&gt;
|Standard queue&lt;br /&gt;
|€ 0.0150&lt;br /&gt;
|€ 0.0015&lt;br /&gt;
|-&lt;br /&gt;
|High priority queue&lt;br /&gt;
|€ 0.0200&lt;br /&gt;
|€ 0.0020&lt;br /&gt;
|-&lt;br /&gt;
|Low priority queue&lt;br /&gt;
|€ 0.0100&lt;br /&gt;
|€ 0.0010&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Computing: GPU Use==&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Tariff per device per hour (gpu/hour)&lt;br /&gt;
|-&lt;br /&gt;
|€ 0.3000&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Storage ==&lt;br /&gt;
Tariffs per year per TB&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Lustre Nobackup&lt;br /&gt;
!Lustre Backup&lt;br /&gt;
!Home-dir&lt;br /&gt;
!Archive&lt;br /&gt;
|-&lt;br /&gt;
|€ 150&lt;br /&gt;
|€ 200&lt;br /&gt;
|€ 200&lt;br /&gt;
|€ 100&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Reservations ==&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Tariff per node per day (node/day)&lt;br /&gt;
|-&lt;br /&gt;
|€ 50&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Notes==&lt;br /&gt;
&lt;br /&gt;
If you are a member of a group with a commitment, then these costs get deducted from that commitment. Typically we are fairly lax with enforcing limits - only once you get to around 150% of your commitment will we consider taking action (mainly coming to discuss things).&lt;br /&gt;
&lt;br /&gt;
== Example ==&lt;br /&gt;
&lt;br /&gt;
You are running a job that needs 4 cores, 32G of RAM and runs for 90 minutes in the Std partition. To run this, you over-request resources slightly, and execute it as a job that requests 4 CPUs, 40G of RAM and a time limit of 3 hours. The job terminates early, after the actual 90 minutes (1.5 hours), so you are billed for 1.5 hours of the requested resources. Thus, your costs are:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
4 * 0.015 * 1.5 = 0.09 EUR for the CPU&lt;br /&gt;
&lt;br /&gt;
40 * 0.0015 * 1.5 = 0.09 EUR for the memory&lt;br /&gt;
&lt;br /&gt;
Total: 0.18 EUR&lt;/div&gt;</summary>
		<author><name>Dawes001</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=Tariffs&amp;diff=2004</id>
		<title>Tariffs</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=Tariffs&amp;diff=2004"/>
		<updated>2019-04-23T08:25:47Z</updated>

		<summary type="html">&lt;p&gt;Dawes001: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Computing: Calculations (cores)==&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Queue&lt;br /&gt;
!CPU core hour&lt;br /&gt;
!GB memory hour&lt;br /&gt;
|-&lt;br /&gt;
|Standard queue&lt;br /&gt;
|€ 0.0150&lt;br /&gt;
|€ 0.0015&lt;br /&gt;
|-&lt;br /&gt;
|High priority queue&lt;br /&gt;
|€ 0.0200&lt;br /&gt;
|€ 0.0020&lt;br /&gt;
|-&lt;br /&gt;
|Low priority queue&lt;br /&gt;
|€ 0.0100&lt;br /&gt;
|€ 0.0010&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Computing: GPU Use==&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Tariff per device per hour (gpu/hour)&lt;br /&gt;
|-&lt;br /&gt;
|€ 0.3000&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Storage ==&lt;br /&gt;
Tariffs per year per TB&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Lustre Nobackup&lt;br /&gt;
!Lustre Backup&lt;br /&gt;
!Home-dir&lt;br /&gt;
!Archive&lt;br /&gt;
|-&lt;br /&gt;
|€ 150&lt;br /&gt;
|€ 200&lt;br /&gt;
|€ 200&lt;br /&gt;
|€ 100&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Reservations ==&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Tariff per node per day (node/day)&lt;br /&gt;
|-&lt;br /&gt;
|€ 50&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
If you are a member of a group with a commitment, then these costs get deducted from that commitment. Typically we are fairly lax with enforcing limits - only once you get to around 150% of your commitment will we consider taking action (mainly coming to discuss things).&lt;/div&gt;</summary>
		<author><name>Dawes001</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=Using_Slurm&amp;diff=2003</id>
		<title>Using Slurm</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=Using_Slurm&amp;diff=2003"/>
		<updated>2019-04-04T08:44:14Z</updated>

		<summary type="html">&lt;p&gt;Dawes001: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The resource allocation / scheduling software on Anunna is [http://en.wikipedia.org/wiki/Simple_Linux_Utility_for_Resource_Management SLURM]: &#039;&#039;&#039;S&#039;&#039;&#039;imple &#039;&#039;&#039;L&#039;&#039;&#039;inux &#039;&#039;&#039;U&#039;&#039;&#039;tility for &#039;&#039;&#039;R&#039;&#039;&#039;esource &#039;&#039;&#039;M&#039;&#039;&#039;anagement.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Queues and defaults ==&lt;br /&gt;
&lt;br /&gt;
=== Queues ===&lt;br /&gt;
Every organization has 3 queues (in Slurm called partitions): a high, a standard and a low priority queue.&amp;lt;br&amp;gt;&lt;br /&gt;
The High queue provides the highest priority to jobs (20), then the standard queue (10), then the low priority queue (0).&amp;lt;br&amp;gt;&lt;br /&gt;
Low queue jobs will be resubmitted if a job with higher priority needs cluster resources and those resources are occupied by Low queue jobs.&lt;br /&gt;
To find out which queues your account has been authorized for, type sinfo:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
PARTITION       AVAIL  TIMELIMIT  NODES  STATE NODELIST&lt;br /&gt;
ABGC_High      up   infinite     12  down* node[043-048,055-060]&lt;br /&gt;
ABGC_High      up   infinite      6    mix fat[001-002],node[002-005]&lt;br /&gt;
ABGC_High      up   infinite     44   idle node[001,006-042,049-054]&lt;br /&gt;
ABGC_Std       up   infinite     12  down* node[043-048,055-060]&lt;br /&gt;
ABGC_Std       up   infinite      6    mix fat[001-002],node[002-005]&lt;br /&gt;
ABGC_Std       up   infinite     44   idle node[001,006-042,049-054]&lt;br /&gt;
ABGC_Low       up   infinite     12  down* node[043-048,055-060]&lt;br /&gt;
ABGC_Low       up   infinite      6    mix fat[001-002],node[002-005]&lt;br /&gt;
ABGC_Low       up   infinite     44   idle node[001,006-042,049-054]&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Defaults ===&lt;br /&gt;
There is no default queue, so you need to specify which queue to use when submitting a job.&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;The default run time for a job is 1 hour!&#039;&#039;&#039; &amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Default memory limit is 100MB per node!&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Submitting jobs: sbatch ==&lt;br /&gt;
&lt;br /&gt;
=== Example ===&lt;br /&gt;
Consider this simple python3 script that calculates Pi to 1 million digits using the Bailey-Borwein-Plouffe formula:&lt;br /&gt;
&amp;lt;source lang=&#039;python&#039;&amp;gt;&lt;br /&gt;
from decimal import Decimal, getcontext&lt;br /&gt;
D=Decimal&lt;br /&gt;
digits=1000000&lt;br /&gt;
getcontext().prec=digits+10  # working precision plus a few guard digits&lt;br /&gt;
# BBP formula: each term adds about log10(16) ~ 1.2 decimal digits&lt;br /&gt;
p=sum(D(1)/16**k*(D(4)/(8*k+1)-D(2)/(8*k+4)-D(1)/(8*k+5)-D(1)/(8*k+6)) for k in range(int(digits/1.2)+1))&lt;br /&gt;
print(str(p)[:digits+2])  # the leading 3. plus the first million digits&lt;br /&gt;
&amp;lt;/source&amp;gt; &lt;br /&gt;
&lt;br /&gt;
=== Loading modules ===&lt;br /&gt;
In order for this script to run, the first thing that is needed is that Python 3, which is not the default Python version on the cluster, is loaded into your environment. The availability of (different versions of) software can be checked with the following command:&lt;br /&gt;
  module avail&lt;br /&gt;
&lt;br /&gt;
In the list you should see that python3 is indeed available; it can then be loaded with the following command:&lt;br /&gt;
  module load python/3.3.3&lt;br /&gt;
&lt;br /&gt;
=== Batch script ===&lt;br /&gt;
[[Creating_sbatch_script | Main Article: Creating a sbatch script]]&lt;br /&gt;
&lt;br /&gt;
The following shell/slurm script can then be used to schedule the job using the sbatch command:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --comment=773320000&lt;br /&gt;
#SBATCH --time=1200&lt;br /&gt;
#SBATCH --mem=2048&lt;br /&gt;
#SBATCH --ntasks=1&lt;br /&gt;
#SBATCH --output=output_%j.txt&lt;br /&gt;
#SBATCH --error=error_output_%j.txt&lt;br /&gt;
#SBATCH --job-name=calc_pi.py&lt;br /&gt;
#SBATCH --partition=ABGC_Std&lt;br /&gt;
#SBATCH --mail-type=ALL&lt;br /&gt;
#SBATCH --mail-user=email@org.nl&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
time python3 calc_pi.py&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Submitting ===&lt;br /&gt;
The script, assuming it was named &#039;run_calc_pi.sh&#039;, can then be submitted using the following command:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
sbatch run_calc_pi.sh&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Submitting multiple jobs (simple) ===&lt;br /&gt;
Assuming there are 10 job scripts, named runscript_1.sh through runscript_10.sh, all of them can be submitted using the following line of shell code:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;for i in `seq 1 10`; do echo $i; sbatch runscript_$i.sh;done&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Submitting multiple jobs (complex) ===&lt;br /&gt;
Let&#039;s say you have three job scripts that depend on each other:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;job_1.sh #A simple initialisation script&amp;lt;/source&amp;gt;&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;job_2.sh #An array task&amp;lt;/source&amp;gt;&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;job_3.sh #Some finishing script, single run, after everything previous has finished&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can create a script to simultaneously submit each job with a dependency on each other:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;#!/bin/bash&lt;br /&gt;
JOB1=$(sbatch job_1.sh| rev | cut -d &#039; &#039; -f 1 | rev) #Get me the last space-separated element&lt;br /&gt;
&lt;br /&gt;
if ! [ &amp;quot;z$JOB1&amp;quot; == &amp;quot;z&amp;quot; ] ; then&lt;br /&gt;
  echo &amp;quot;First job submitted as jobid $JOB1&amp;quot;&lt;br /&gt;
  JOB2=$(sbatch --dependency=afterany:$JOB1 job_2.sh | rev | cut -d &#039; &#039; -f 1 | rev)&lt;br /&gt;
&lt;br /&gt;
  if ! [ &amp;quot;z$JOB2&amp;quot; == &amp;quot;z&amp;quot; ] ; then&lt;br /&gt;
    echo &amp;quot;Second job submitted as jobid $JOB2, following $JOB1&amp;quot;&lt;br /&gt;
    JOB3=$(sbatch --dependency=afterany:$JOB2 job_3.sh | rev | cut -d &#039; &#039; -f 1 | rev)&lt;br /&gt;
&lt;br /&gt;
    if ! [ &amp;quot;z$JOB3&amp;quot; == &amp;quot;z&amp;quot; ] ; then&lt;br /&gt;
      echo &amp;quot;Third job submitted as jobid $JOB3, following after every element of $JOB2&amp;quot;&lt;br /&gt;
    fi&lt;br /&gt;
  fi&lt;br /&gt;
fi&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This will ensure that each subsequent job starts only after the previous one has finished in any state (even if it failed).&lt;br /&gt;
&lt;br /&gt;
Please see [https://slurm.schedmd.com/sbatch.html#OPT_dependency the sbatch documentation] for other options available to you. Note that aftercorr makes a subsequent array job&#039;s elements start after the correspondingly numbered elements of the previous array job have completed.&lt;br /&gt;
&lt;br /&gt;
=== Submitting array jobs ===&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --array=0-10%4&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
SLURM allows you to submit multiple jobs using the same template. Further information about this can be found [[Array_jobs|here]].&lt;br /&gt;
&lt;br /&gt;
=== Using /tmp ===&lt;br /&gt;
Attached to each node there is a local disk of ~300G that can be used to temporarily stage some of your workload. This is free to use, but please remember to clean up your data after usage.&lt;br /&gt;
&lt;br /&gt;
In order to be sure that you&#039;re able to use space in /tmp, you can add&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --tmp=&amp;lt;required size&amp;gt;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
to your sbatch script. This will prevent your job from being run on nodes where there is no free space, or where the space is already claimed by another job.&lt;br /&gt;
&lt;br /&gt;
== Monitoring submitted jobs ==&lt;br /&gt;
Once a job is submitted, the status can be monitored using the &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; command. The &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; command has a number of parameters for monitoring specific properties of the jobs such as time limit.&lt;br /&gt;
&lt;br /&gt;
=== Generic monitoring of all running jobs ===&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
  squeue&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You should then get a list of jobs that are running at that time on the cluster. For the example on how to submit using the &#039;sbatch&#039; command, it may look like this:&lt;br /&gt;
    JOBID PARTITION     NAME     USER  ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
   3396      ABGC BOV-WUR- megen002   R      27:26      1 node004&lt;br /&gt;
   3397      ABGC BOV-WUR- megen002   R      27:26      1 node005&lt;br /&gt;
   3398      ABGC BOV-WUR- megen002   R      27:26      1 node006&lt;br /&gt;
   3399      ABGC BOV-WUR- megen002   R      27:26      1 node007&lt;br /&gt;
   3400      ABGC BOV-WUR- megen002   R      27:26      1 node008&lt;br /&gt;
   3401      ABGC BOV-WUR- megen002   R      27:26      1 node009&lt;br /&gt;
   3385  research BOV-WUR- megen002   R      44:38      1 node049&lt;br /&gt;
   3386  research BOV-WUR- megen002   R      44:38      1 node050&lt;br /&gt;
   3387  research BOV-WUR- megen002   R      44:38      1 node051&lt;br /&gt;
   3388  research BOV-WUR- megen002   R      44:38      1 node052&lt;br /&gt;
   3389  research BOV-WUR- megen002   R      44:38      1 node053&lt;br /&gt;
   3390  research BOV-WUR- megen002   R      44:38      1 node054&lt;br /&gt;
   3391  research BOV-WUR- megen002   R      44:38      3 node[049-051]&lt;br /&gt;
   3392  research BOV-WUR- megen002   R      44:38      3 node[052-054]&lt;br /&gt;
   3393  research BOV-WUR- megen002   R      44:38      1 node001&lt;br /&gt;
   3394  research BOV-WUR- megen002   R      44:38      1 node002&lt;br /&gt;
   3395  research BOV-WUR- megen002   R      44:38      1 node003&lt;br /&gt;
&lt;br /&gt;
=== Monitoring time limit set for a specific job ===&lt;br /&gt;
The default time limit is set at one hour, so estimated run times need to be specified when submitting jobs. The time limit set for a certain job can be inspected with the &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; command.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
squeue -l -j 3532&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Information similar to the following should appear:&lt;br /&gt;
  Fri Nov 29 15:41:00 2013&lt;br /&gt;
   JOBID PARTITION     NAME     USER    STATE       TIME TIMELIMIT  NODES NODELIST(REASON)&lt;br /&gt;
   3532      ABGC BOV-WUR- megen002  RUNNING    2:47:03 3-08:00:00      1 node054&lt;br /&gt;
&lt;br /&gt;
=== Query a specific active job: scontrol ===&lt;br /&gt;
Show all the details of a currently active job, so not a completed job.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
login ~]$ scontrol show jobid 4241&lt;br /&gt;
JobId=4241 Name=WB20F06&lt;br /&gt;
   UserId=megen002(16795409) GroupId=domain users(16777729)&lt;br /&gt;
   Priority=1 Account=(null) QOS=normal&lt;br /&gt;
   JobState=RUNNING Reason=None Dependency=(null)&lt;br /&gt;
   Requeue=1 Restarts=0 BatchFlag=1 ExitCode=0:0&lt;br /&gt;
   RunTime=02:55:25 TimeLimit=3-08:00:00 TimeMin=N/A&lt;br /&gt;
   SubmitTime=2013-12-09T13:37:29 EligibleTime=2013-12-09T13:37:29&lt;br /&gt;
   StartTime=2013-12-09T13:37:29 EndTime=2013-12-12T21:37:29&lt;br /&gt;
   PreemptTime=None SuspendTime=None SecsPreSuspend=0&lt;br /&gt;
   Partition=research AllocNode:Sid=login0:21799&lt;br /&gt;
   ReqNodeList=(null) ExcNodeList=(null)&lt;br /&gt;
   NodeList=node023&lt;br /&gt;
   BatchHost=node023&lt;br /&gt;
   NumNodes=1 NumCPUs=4 CPUs/Task=1 ReqS:C:T=*:*:*&lt;br /&gt;
   MinCPUsNode=1 MinMemoryNode=0 MinTmpDiskNode=0&lt;br /&gt;
   Features=(null) Gres=(null) Reservation=(null)&lt;br /&gt;
   Shared=OK Contiguous=0 Licenses=(null) Network=(null)&lt;br /&gt;
   Command=/lustre/scratch/WUR/ABGC/...&lt;br /&gt;
   WorkDir=/lustre/scratch/WUR/ABGC/...&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Check on a pending job ===&lt;br /&gt;
A submitted job could end up in a pending state when there are not enough resources available for this job.&lt;br /&gt;
In this example I submit a job, check the status and, after finding out it is &#039;&#039;&#039;pending&#039;&#039;&#039;, check when it will probably start.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
[@login jobs]$ sbatch hpl_student.job&lt;br /&gt;
 Submitted batch job 740338&lt;br /&gt;
&lt;br /&gt;
[@login jobs]$ squeue -l -j 740338&lt;br /&gt;
 Fri Feb 21 15:32:31 2014&lt;br /&gt;
  JOBID PARTITION     NAME     USER    STATE       TIME TIMELIMIT  NODES NODELIST(REASON)&lt;br /&gt;
 740338 ABGC_Stud HPLstude bohme999  PENDING       0:00 1-00:00:00      1 (ReqNodeNotAvail)&lt;br /&gt;
&lt;br /&gt;
[@login jobs]$ squeue --start -j 740338&lt;br /&gt;
  JOBID PARTITION     NAME     USER  ST           START_TIME  NODES NODELIST(REASON)&lt;br /&gt;
 740338 ABGC_Stud HPLstude bohme999  PD  2014-02-22T15:31:48      1 (ReqNodeNotAvail)&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
So it seems this job will probably start the next day, but that is no guarantee it will actually start then.&lt;br /&gt;
&lt;br /&gt;
== Removing jobs from a list: scancel ==&lt;br /&gt;
If for some reason you want to delete a job that is either in the queue or already running, you can remove it using the &#039;scancel&#039; command. The &#039;scancel&#039; command takes the jobid as a parameter. For the example above, this would be done using the following code:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
scancel 3401&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
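&#039;scancel&#039; can also match jobs by user or state instead of a single jobid. For example, to cancel all of your own pending jobs at once:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
scancel --user=$USER --state=PENDING&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;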
&lt;br /&gt;
== Allocating resources interactively: sinteractive ==&lt;br /&gt;
sinteractive is a tiny wrapper around srun to create interactive jobs quickly and easily. It gives you a shell on one of the nodes, with the same kinds of limits you would set for a normal job. To use it, simply run:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
sinteractive -c &amp;lt;num_cpus&amp;gt; --mem &amp;lt;amount_mem&amp;gt; --time &amp;lt;minutes&amp;gt; -p &amp;lt;partition&amp;gt;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
You will then be presented with a new shell prompt on one of the compute nodes (run &#039;hostname&#039; to see which!). From here, you can test out code interactively as needed.&lt;br /&gt;
&lt;br /&gt;
Be advised, though, that omitting the above options will get you a shell with 1 CPU and 100MB of RAM for 1 hour; this is still useful for quick testing.&lt;br /&gt;
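&lt;br /&gt;
For example, to request 2 CPUs, 4000 MB of memory, and two hours (120 minutes) on the &#039;research&#039; partition (illustrative values; pick a partition you have access to):&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
sinteractive -c 2 --mem 4000 --time 120 -p research&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;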
&lt;br /&gt;
=== sinteractive source ===&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
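# Pass all user-supplied options straight through to srun.&lt;br /&gt;
# -I60: give up if resources are not available within 60 seconds.&lt;br /&gt;
# -N 1 -n 1: one task on one node; --pty bash -i: attach an interactive shell.&lt;br /&gt;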
srun &amp;quot;$@&amp;quot; -I60 -N 1 -n 1 --pty bash -i&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== interactive Slurm - using salloc ===&lt;br /&gt;
If you don&#039;t want your shell to be moved to a compute node, but instead want a new allocation controlled from the login node, do:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
salloc -p ABGC_Low $SHELL&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Now your shell will stay on the login node, but you can do:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
srun &amp;lt;command&amp;gt; &amp;amp;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
to submit tasks to this new allocation!&lt;br /&gt;
&lt;br /&gt;
Be aware that the default time limit of salloc is 1 hour. If you intend to run jobs for longer than this, you need to request more time explicitly. See: https://computing.llnl.gov/linux/slurm/salloc.html&lt;br /&gt;
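For example, to request an eight-hour allocation up front:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
salloc -p ABGC_Low --time=8:00:00 $SHELL&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;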
&lt;br /&gt;
== Get overview of past and current jobs: sacct ==&lt;br /&gt;
To do some accounting on past and present jobs, and to see whether they ran to completion, you can do:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
sacct&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
This should provide information similar to the following:&lt;br /&gt;
&lt;br /&gt;
         JobID    JobName  Partition    Account  AllocCPUS      State ExitCode &lt;br /&gt;
  ------------ ---------- ---------- ---------- ---------- ---------- -------- &lt;br /&gt;
  3385         BOV-WUR-58   research                    12  COMPLETED      0:0 &lt;br /&gt;
  3385.batch        batch                                1  COMPLETED      0:0 &lt;br /&gt;
  3386         BOV-WUR-59   research                    12 CANCELLED+      0:0 &lt;br /&gt;
  3386.batch        batch                                1  CANCELLED     0:15 &lt;br /&gt;
  3528         BOV-WUR-59       ABGC                    16    RUNNING      0:0 &lt;br /&gt;
  3529         BOV-WUR-60       ABGC                    16    RUNNING      0:0&lt;br /&gt;
&lt;br /&gt;
Or in more detail for a specific job:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
sacct --format=jobid,jobname,comment,partition,ntasks,alloccpus,elapsed,state,exitcode -j 4220&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
This should provide information about job id 4220:&lt;br /&gt;
&lt;br /&gt;
       JobID    JobName    Comment   Partition   NTasks  AllocCPUS    Elapsed      State ExitCode &lt;br /&gt;
  ------------ ---------- ---------- ---------- -------- ---------- ---------- ---------- -------- &lt;br /&gt;
  4220         PreProces+              research                   3   00:30:52  COMPLETED      0:0 &lt;br /&gt;
  4220.batch        batch                              1          1   00:30:52  COMPLETED      0:0&lt;br /&gt;
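&lt;br /&gt;
By default &#039;sacct&#039; only reports jobs since the start of the current day; to look further back, give it a time window (the dates here are just an example):&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
sacct -S 2013-11-01 -E 2013-11-30&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;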
&lt;br /&gt;
&#039;&#039;&#039;Job Status Codes&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Typically your job will be in either the RUNNING or the PENDING state. Here is a breakdown of all the states your job can be in.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!Code!!State!!Description&lt;br /&gt;
|-&lt;br /&gt;
|CA||CANCELLED||Job was explicitly cancelled by the user or system administrator. The job may or may not have been initiated.&lt;br /&gt;
|-&lt;br /&gt;
|CD||COMPLETED||Job has terminated all processes on all nodes.&lt;br /&gt;
|-&lt;br /&gt;
|CF||CONFIGURING||Job has been allocated resources, but is waiting for them to become ready for use (e.g. booting).&lt;br /&gt;
|-&lt;br /&gt;
|CG||COMPLETING||Job is in the process of completing. Some processes on some nodes may still be active.&lt;br /&gt;
|-&lt;br /&gt;
|F||FAILED||Job terminated with a non-zero exit code or other failure condition.&lt;br /&gt;
|-&lt;br /&gt;
|NF||NODE_FAIL||Job terminated due to failure of one or more allocated nodes.&lt;br /&gt;
|-&lt;br /&gt;
|PD||PENDING||Job is awaiting resource allocation.&lt;br /&gt;
|-&lt;br /&gt;
|R||RUNNING||Job currently has an allocation.&lt;br /&gt;
|-&lt;br /&gt;
|S||SUSPENDED||Job has an allocation, but execution has been suspended.&lt;br /&gt;
|-&lt;br /&gt;
|TO||TIMEOUT||Job terminated upon reaching its time limit.&lt;br /&gt;
|}&lt;br /&gt;
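&lt;br /&gt;
These codes can also be used to filter &#039;squeue&#039; output; for example, to list only your own pending jobs:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
squeue -u $USER -t PD&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;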
&lt;br /&gt;
== Running MPI jobs on Anunna ==&lt;br /&gt;
&lt;br /&gt;
[[MPI_on_B4F_cluster | Main article: MPI on Anunna]]&lt;br /&gt;
See the main article for details on building and submitting MPI jobs on Anunna.&lt;br /&gt;
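In the meantime, a minimal sbatch script for an MPI job might look like the sketch below (&#039;./my_mpi_program&#039; is a hypothetical binary; the partition and any required module loads depend on your setup):&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
# illustrative resource requests: 16 MPI tasks for one hour&lt;br /&gt;
#SBATCH --job-name=mpi_test&lt;br /&gt;
#SBATCH -n 16&lt;br /&gt;
#SBATCH -p research&lt;br /&gt;
#SBATCH --time=1:00:00&lt;br /&gt;
&lt;br /&gt;
# srun launches one copy of the program per allocated task&lt;br /&gt;
srun ./my_mpi_program&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;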
&lt;br /&gt;
== Understanding which resources are available to you: sinfo ==&lt;br /&gt;
By using the &#039;sinfo&#039; command you can retrieve information on which &#039;Partitions&#039; are available to you. A &#039;Partition&#039; in SLURM is similar to a &#039;queue&#039; in the Sun Grid Engine (&#039;qsub&#039;). The different Partitions grant different levels of resource allocation, and not all defined Partitions are available to any given person: e.g. Master&#039;s students will only have the &#039;student&#039; Partition available, while researchers at the ABGC have the &#039;student&#039;, &#039;research&#039;, and &#039;ABGC&#039; Partitions. The higher the level of resource allocation, though, the higher the cost per compute-hour. The default Partition is &#039;student&#039;. A full list of Partitions can be found on the Bright Cluster Manager webpage.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
sinfo&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
  PARTITION AVAIL  TIMELIMIT  NODES  STATE NODELIST&lt;br /&gt;
  student*     up   infinite     12  down* node[043-048,055-060]&lt;br /&gt;
  student*     up   infinite     50   idle fat[001-002],node[001-042,049-054]&lt;br /&gt;
  research     up   infinite     12  down* node[043-048,055-060]&lt;br /&gt;
  research     up   infinite     50   idle fat[001-002],node[001-042,049-054]&lt;br /&gt;
  ABGC         up   infinite     12  down* node[043-048,055-060]&lt;br /&gt;
  ABGC         up   infinite     50   idle fat[001-002],node[001-042,049-054]&lt;br /&gt;
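&lt;br /&gt;
A condensed one-line-per-partition view is also available:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
sinfo -s&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;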
&lt;br /&gt;
== See also ==&lt;br /&gt;
* [[Tariffs | Costs associated with resource usage]]&lt;br /&gt;
* [[B4F_cluster | Anunna]]&lt;br /&gt;
* [[BCM_on_B4F_cluster | BCM on Anunna]]&lt;br /&gt;
* [[SLURM_Compare | SLURM compared to other common schedulers]]&lt;br /&gt;
* [[Setting_up_Python_virtualenv | Setting up and using a virtual environment for Python3 ]]&lt;br /&gt;
&lt;br /&gt;
== External links ==&lt;br /&gt;
* [http://slurm.schedmd.com Slurm official documentation]&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Simple_Linux_Utility_for_Resource_Management Slurm on Wikipedia]&lt;br /&gt;
* [http://www.youtube.com/watch?v=axWffyrk3aY Slurm Tutorial on Youtube]&lt;/div&gt;</summary>
		<author><name>Dawes001</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=Filesystems&amp;diff=2002</id>
		<title>Filesystems</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=Filesystems&amp;diff=2002"/>
		<updated>2019-04-04T08:43:24Z</updated>

		<summary type="html">&lt;p&gt;Dawes001: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Anunna currently has multiple filesystem mounts that are available cluster-wide:&lt;br /&gt;
&lt;br /&gt;
== Global ==&lt;br /&gt;
* /home - This mount uses NFS to mount the home directories directly from nfs01. Each user has a 200GB quota on this filesystem; it is regularly backed up to tape and can reliably be restored from up to a week&#039;s history.&lt;br /&gt;
&lt;br /&gt;
* /cm/shared - This mount provides a consistent set of binaries for the entire cluster.&lt;br /&gt;
&lt;br /&gt;
* /lustre - This large mount uses the Lustre filesystem to provide files from multiple redundant servers. Access is provided per group, thus:&lt;br /&gt;
 /lustre/[level]/[partner]/[unit]&lt;br /&gt;
e.g.&lt;br /&gt;
 /lustre/backup/WUR/ABGC/&lt;br /&gt;
It consists of three major parts (and some minor ones):&lt;br /&gt;
* /lustre/backup - In case of disaster, this data is stored a second time on a separate machine. Whilst this backup is purely for complete catastrophe (such as an immense filesystem error, or multiple component failure), it can potentially be used to revert mistakes if you are very quick about reporting them. There is, however, no guarantee of this service.&lt;br /&gt;
* /lustre/nobackup - This is the &#039;normal&#039; filesystem for Lustre: no backups, just data stored on the filesystem. Since no backup is kept, data here costs less than under /lustre/backup, but in case of disaster it cannot be recovered.&lt;br /&gt;
* /lustre/scratch - Files here may be removed after some time if the filesystem gets too full (typically after 30 days). You should tidy up this data yourself once your work is complete.&lt;br /&gt;
* /lustre/shared - Same as /lustre/backup, except publicly available. This is where truly shared data lives that isn&#039;t assigned to a specific group.&lt;br /&gt;
&lt;br /&gt;
=== Private shared directories ===&lt;br /&gt;
If you are working with a group of users on a similar project, you might consider making a [[Shared_folders|Shared directory]] to coordinate. Information on how to do so is in the linked article.&lt;br /&gt;
&lt;br /&gt;
== Local ==&lt;br /&gt;
Specific to certain machines are some other filesystems that are available to you:&lt;br /&gt;
* /archive - an archive mount only accessible from the login nodes. Files here are sent to the Isilon for deeper storage. The cost of storing data here is much lower than on the Lustre, but it cannot be used for compute work. This location is only available to WUR users. Files can be reverted via snapshot, and there is a separate backup, though this only runs at fortnightly (14-day) intervals.&lt;br /&gt;
&lt;br /&gt;
* /tmp - On each worker node there is a /tmp mount that can be used for temporary local caching. Be advised that you should clean this up, lest your files become a hindrance to other users. You can request a node with free space in your sbatch script like so:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --tmp=&amp;lt;required space&amp;gt;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
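For example, to ask for a node with at least 10GB free in /tmp (Slurm accepts size suffixes such as K, M, G and T; plain numbers are megabytes):&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --tmp=10G&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;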
&lt;br /&gt;
&lt;br /&gt;
* /dev/shm - On each worker node you can also use /dev/shm, a virtual filesystem held directly in memory, for extremely fast data access. Be advised that data stored here counts against the memory used by your job, but it is the fastest filesystem available if you need it.&lt;br /&gt;
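&lt;br /&gt;
A minimal sketch of the usual pattern (file names are hypothetical; remember to remove your files so the memory is released):&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
# stage the input into memory-backed storage&lt;br /&gt;
cp mydata.dat /dev/shm/&lt;br /&gt;
# run the tool against the in-memory copy&lt;br /&gt;
mytool /dev/shm/mydata.dat&lt;br /&gt;
# clean up so the memory is freed for other jobs&lt;br /&gt;
rm /dev/shm/mydata.dat&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;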
&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
* [[Tariffs | Costs associated with resource usage]]&lt;br /&gt;
&lt;br /&gt;
== External links ==&lt;br /&gt;
* [http://wiki.lustre.org/index.php/Main_Page Lustre website]&lt;/div&gt;</summary>
		<author><name>Dawes001</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=Main_Page&amp;diff=2001</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=Main_Page&amp;diff=2001"/>
		<updated>2019-04-04T08:42:41Z</updated>

		<summary type="html">&lt;p&gt;Dawes001: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Anunna is a [http://en.wikipedia.org/wiki/High-performance_computing High Performance Computer] (HPC) infrastructure hosted by [http://www.wageningenur.nl/nl/activiteit/Opening-High-Performance-Computing-cluster-HPC.htm Wageningen University &amp;amp; Research Centre]. It is open for use for all WUR research groups as well as other organizations, including companies, that have collaborative projects with WUR. &lt;br /&gt;
&lt;br /&gt;
= Using Anunna =&lt;br /&gt;
== Gaining access to Anunna==&lt;br /&gt;
Access to the cluster and file transfer are traditionally done via [http://en.wikipedia.org/wiki/Secure_Shell SSH and SFTP].&lt;br /&gt;
* [[log_in_to_B4F_cluster | Logging into cluster using ssh and file transfer]]&lt;br /&gt;
* [[Services | Alternative access methods, and extra features and services on Anunna]]&lt;br /&gt;
* [[Filesystems | Accessible storage methods on Anunna]]&lt;br /&gt;
* [[Tariffs | Costs associated with resource usage]]&lt;br /&gt;
&lt;br /&gt;
== Access Policy ==&lt;br /&gt;
[[Access_Policy | Main Article: Access Policy]]&lt;br /&gt;
&lt;br /&gt;
Access needs to be granted actively (by creation of an account on the cluster by FB-IT). Use of resources is limited by the scheduler. Depending on availability of queues (&#039;partitions&#039;) granted to a user, priority to the system&#039;s resources is regulated. Note that the use of Anunna is not free of charge. List price of CPU time and storage, and possible discounts on that list price for your organisation, can be retrieved from CAT-AGRO or FB-ICT.&lt;br /&gt;
&lt;br /&gt;
= Events =&lt;br /&gt;
* [[Courses]] that have happened and are happening&lt;br /&gt;
* [[Downtime]] that will affect all users&lt;br /&gt;
* [[Meetings]] that may affect the policies of Anunna&lt;br /&gt;
&lt;br /&gt;
= Other Software =&lt;br /&gt;
&lt;br /&gt;
== Cluster Management Software and Scheduler ==&lt;br /&gt;
Anunna uses Bright Cluster Manager software for overall cluster management, and Slurm as job scheduler.&lt;br /&gt;
* [[BCM_on_B4F_cluster | Monitor cluster status with BCM]]&lt;br /&gt;
* [[Using_Slurm | Submit jobs with Slurm]]&lt;br /&gt;
* [[node_usage_graph | Be aware of how much work the cluster is under right now with &#039;node_usage_graph&#039;]]&lt;br /&gt;
* [[SLURM_Compare | Rosetta Stone of Workload Managers]]&lt;br /&gt;
&lt;br /&gt;
== Installation of software by users ==&lt;br /&gt;
&lt;br /&gt;
* [[Domain_specific_software_on_B4Fcluster_installation_by_users | Installing domain specific software: installation by users]]&lt;br /&gt;
* [[Setting local variables]]&lt;br /&gt;
* [[Installing_R_packages_locally | Installing R packages locally]]&lt;br /&gt;
* [[Setting_up_Python_virtualenv | Setting up and using a virtual environment for Python3 ]]&lt;br /&gt;
* [[Virtual_environment_Python_3.4_or_higher | Setting up and using a virtual environment for Python3.4 or higher ]]&lt;br /&gt;
* [[Installing WRF and WPS]]&lt;br /&gt;
&lt;br /&gt;
== Installed software ==&lt;br /&gt;
&lt;br /&gt;
* [[Globally_installed_software | Globally installed software]]&lt;br /&gt;
* [[ABGC_modules | ABGC specific modules]]&lt;br /&gt;
&lt;br /&gt;
= Useful Notes = &lt;br /&gt;
&lt;br /&gt;
== Being in control of Environment parameters ==&lt;br /&gt;
&lt;br /&gt;
* [[Using_environment_modules | Using environment modules]]&lt;br /&gt;
* [[Setting local variables]]&lt;br /&gt;
* [[Setting_TMPDIR | Set a custom temporary directory location]]&lt;br /&gt;
* [[Installing_R_packages_locally | Installing R packages locally]]&lt;br /&gt;
* [[Setting_up_Python_virtualenv | Setting up and using a virtual environment for Python3 ]]&lt;br /&gt;
&lt;br /&gt;
== Controlling costs ==&lt;br /&gt;
&lt;br /&gt;
* [[SACCT | using SACCT to see your costs]]&lt;br /&gt;
* [[get_my_bill | using the &amp;quot;get_my_bill&amp;quot; script to estimate costs]]&lt;br /&gt;
&lt;br /&gt;
== Management ==&lt;br /&gt;
Project Leader of Anunna is Stephen Janssen (Wageningen UR, FB-IT, Service Management). [[User:lith010 | Jan van Lith (Wageningen UR, FB-IT, Infrastructure)]], [[User:dawes001 | Gwen Dawes (Wageningen UR, FB-IT, Infrastructure)]] and [[User:vaend001 | Catharina Vaendel (Wageningen UR, FB-IT, Infrastructure)]] are responsible for [[Maintenance_and_Management | Maintenance and Management]].&lt;br /&gt;
&lt;br /&gt;
* [[Roadmap | Ambitions regarding innovation, support and administration of Anunna ]]&lt;br /&gt;
&lt;br /&gt;
= Miscellaneous =&lt;br /&gt;
* [[Mailinglist | Electronic mail discussion lists]]&lt;br /&gt;
* [[History_of_the_Cluster | Historical information on the startup of Anunna]]&lt;br /&gt;
* [[Bioinformatics_tips_tricks_workflows | Bioinformatics tips, tricks, and workflows]]&lt;br /&gt;
* [[Parallel_R_code_on_SLURM | Running parallel R code on SLURM]]&lt;br /&gt;
* [[Convert_between_MediaWiki_and_other_formats | Convert between MediaWiki format and other formats]]&lt;br /&gt;
* [[Manual GitLab | GitLab: Create projects and add scripts]]&lt;br /&gt;
* [[Monitoring_executions | Monitoring job execution]]&lt;br /&gt;
* [[Shared_folders | Working with shared folders in the Lustre file system]]&lt;br /&gt;
&lt;br /&gt;
= See also =&lt;br /&gt;
* [[Maintenance_and_Management | Maintenance and Management]]&lt;br /&gt;
* [[BCData | BCData]]&lt;br /&gt;
* [[Mailinglist | Electronic mail discussion lists]]&lt;br /&gt;
* [[About_ABGC | About ABGC]]&lt;br /&gt;
* [[Computer_cluster | High Performance Computing @ABGC]]&lt;br /&gt;
* [[Lustre_PFS_layout | Lustre Parallel File System layout]]&lt;br /&gt;
&lt;br /&gt;
= External links =&lt;br /&gt;
{| width=&amp;quot;90%&amp;quot;&lt;br /&gt;
|- valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| width=&amp;quot;30%&amp;quot; |&lt;br /&gt;
* [http://www.breed4food.com/en/show/Breed4Food-initiative-reinforces-the-Netherlands-position-as-an-innovative-country-in-animal-breeding-and-genomics.htm Breed4Food programme]&lt;br /&gt;
* [http://www.wageningenur.nl/en/Expertise-Services/Facilities/CATAgroFood-3/CATAgroFood-3/Our-facilities/Show/High-Performance-Computing-Cluster-HPC.htm CATAgroFood offers an HPC facility]&lt;br /&gt;
* [http://www.cobb-vantress.com Cobb-Vantress homepage]&lt;br /&gt;
&lt;br /&gt;
| width=&amp;quot;30%&amp;quot; |&lt;br /&gt;
* [https://www.crv4all.nl CRV homepage]&lt;br /&gt;
* [http://www.hendrix-genetics.com Hendrix Genetics homepage]&lt;br /&gt;
* [http://www.topigs.com TOPIGS homepage]&lt;br /&gt;
| width=&amp;quot;30%&amp;quot; |&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Scientific_Linux Scientific Linux]&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Help:Cheatsheet Help with editing Wiki pages]&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Dawes001</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=Tariffs&amp;diff=2000</id>
		<title>Tariffs</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=Tariffs&amp;diff=2000"/>
		<updated>2019-04-04T08:41:47Z</updated>

		<summary type="html">&lt;p&gt;Dawes001: Created page with &amp;quot;== Computing: Calculations (cores)== {| class=&amp;quot;wikitable&amp;quot; !Queue !CPU core hour !GB memory hour |- |Standard queue |€ 0.0150 |€ 0.0015 |- |High priority queue |€ 0.0200...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Computing: Calculations (cores)==&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Queue&lt;br /&gt;
!CPU core hour&lt;br /&gt;
!GB memory hour&lt;br /&gt;
|-&lt;br /&gt;
|Standard queue&lt;br /&gt;
|€ 0.0150&lt;br /&gt;
|€ 0.0015&lt;br /&gt;
|-&lt;br /&gt;
|High priority queue&lt;br /&gt;
|€ 0.0200&lt;br /&gt;
|€ 0.0020&lt;br /&gt;
|-&lt;br /&gt;
|Low priority queue&lt;br /&gt;
|€ 0.0100&lt;br /&gt;
|€ 0.0010&lt;br /&gt;
|}&lt;br /&gt;
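&lt;br /&gt;
As an illustrative calculation (not an official quote), and assuming core hours and memory hours are billed together as the table suggests: a job on the standard queue using 4 cores and 8 GB of memory for 10 hours would cost 4 × 10 × € 0.0150 + 8 × 10 × € 0.0015 = € 0.60 + € 0.12 = € 0.72.&lt;br /&gt;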
&lt;br /&gt;
== Computing: GPU Use==&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Tariff per device per hour (gpu/hour)&lt;br /&gt;
|-&lt;br /&gt;
|€ 0.3000&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Storage ==&lt;br /&gt;
Tariffs per year per TB&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Lustre Nobackup&lt;br /&gt;
!Lustre Backup&lt;br /&gt;
!Home-dir&lt;br /&gt;
!Archive&lt;br /&gt;
|-&lt;br /&gt;
|€ 150&lt;br /&gt;
|€ 200&lt;br /&gt;
|€ 200&lt;br /&gt;
|€ 100&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Dawes001</name></author>
	</entry>
</feed>