<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://wiki.anunna.wur.nl/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Megen002</id>
	<title>HPCwiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://wiki.anunna.wur.nl/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Megen002"/>
	<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php/Special:Contributions/Megen002"/>
	<updated>2026-04-18T04:17:18Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.43.1</generator>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=Main_Page&amp;diff=2071</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=Main_Page&amp;diff=2071"/>
		<updated>2020-03-02T16:51:32Z</updated>

		<summary type="html">&lt;p&gt;Megen002: /* Using Anunna */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Anunna is a [http://en.wikipedia.org/wiki/High-performance_computing High Performance Computing] (HPC) infrastructure hosted by [http://www.wageningenur.nl/nl/activiteit/Opening-High-Performance-Computing-cluster-HPC.htm Wageningen University &amp;amp; Research Centre]. It is open for use by all WUR research groups as well as other organizations, including companies, that have collaborative projects with WUR.&lt;br /&gt;
&lt;br /&gt;
= Using Anunna =&lt;br /&gt;
* [[Tariffs | Costs associated with resource usage]]&lt;br /&gt;
&lt;br /&gt;
== Gaining access to Anunna==&lt;br /&gt;
Access to the cluster and file transfer are traditionally done via [http://en.wikipedia.org/wiki/Secure_Shell SSH and SFTP]; a minimal example follows the links below.&lt;br /&gt;
* [[log_in_to_B4F_cluster | Logging into cluster using ssh and file transfer]]&lt;br /&gt;
* [[Services | Alternative access methods, and extra features and services on Anunna]]&lt;br /&gt;
* [[Filesystems | Accessible storage methods on Anunna]]&lt;br /&gt;
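&lt;br /&gt;
A minimal sketch of both access methods from a Linux or macOS terminal (the host name below is only a placeholder; use the address given on the [[log_in_to_B4F_cluster | login page]]):&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
# interactive shell on the cluster (replace the host with the real login node)&lt;br /&gt;
ssh your_wur_username@login.example.wur.nl&lt;br /&gt;
&lt;br /&gt;
# copy a script to your home directory over the same SSH connection&lt;br /&gt;
scp ./myscript.sh your_wur_username@login.example.wur.nl:~/&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;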
&lt;br /&gt;
== Access Policy ==&lt;br /&gt;
[[Access_Policy | Main Article: Access Policy]]&lt;br /&gt;
&lt;br /&gt;
Access must be granted explicitly (FB-IT creates an account on the cluster for you). Use of resources is regulated by the scheduler: priority to the system&#039;s resources depends on which queues (&#039;partitions&#039;) have been granted to a user. Note that the use of Anunna is not free of charge. The list price of CPU time and storage, and possible discounts on that list price for your organisation, can be obtained from CAT-AGRO or FB-ICT.&lt;br /&gt;
&lt;br /&gt;
= Events =&lt;br /&gt;
* [[Courses]] that have happened and are happening&lt;br /&gt;
* [[Downtime]] that will affect all users&lt;br /&gt;
* [[Meetings]] that may affect the policies of Anunna&lt;br /&gt;
&lt;br /&gt;
= Other Software =&lt;br /&gt;
&lt;br /&gt;
== Cluster Management Software and Scheduler ==&lt;br /&gt;
Anunna uses Bright Cluster Manager software for overall cluster management, and Slurm as job scheduler.&lt;br /&gt;
* [[BCM_on_B4F_cluster | Monitor cluster status with BCM]]&lt;br /&gt;
* [[Using_Slurm | Submit jobs with Slurm]]&lt;br /&gt;
* [[node_usage_graph | See the cluster&#039;s current workload with &#039;node_usage_graph&#039;]]&lt;br /&gt;
* [[SLURM_Compare | Rosetta Stone of Workload Managers]]&lt;br /&gt;
&lt;br /&gt;
== Installation of software by users ==&lt;br /&gt;
&lt;br /&gt;
* [[Domain_specific_software_on_B4Fcluster_installation_by_users | Installing domain specific software: installation by users]]&lt;br /&gt;
* [[Setting local variables]]&lt;br /&gt;
* [[Installing_R_packages_locally | Installing R packages locally]]&lt;br /&gt;
* [[Setting_up_Python_virtualenv | Setting up and using a virtual environment for Python3 ]]&lt;br /&gt;
* [[Virtual_environment_Python_3.4_or_higher | Setting up and using a virtual environment for Python3.4 or higher ]]&lt;br /&gt;
* [[Installing WRF and WPS]]&lt;br /&gt;
&lt;br /&gt;
== Installed software ==&lt;br /&gt;
&lt;br /&gt;
* [[Globally_installed_software | Globally installed software]]&lt;br /&gt;
* [[ABGC_modules | ABGC specific modules]]&lt;br /&gt;
&lt;br /&gt;
= Useful Notes = &lt;br /&gt;
&lt;br /&gt;
== Being in control of Environment parameters ==&lt;br /&gt;
&lt;br /&gt;
* [[Using_environment_modules | Using environment modules]]&lt;br /&gt;
* [[Setting local variables]]&lt;br /&gt;
* [[Setting_TMPDIR | Set a custom temporary directory location]]&lt;br /&gt;
* [[Installing_R_packages_locally | Installing R packages locally]]&lt;br /&gt;
* [[Setting_up_Python_virtualenv | Setting up and using a virtual environment for Python3 ]]&lt;br /&gt;
&lt;br /&gt;
== Controlling costs ==&lt;br /&gt;
&lt;br /&gt;
* [[SACCT | using SACCT to see your costs]]&lt;br /&gt;
* [[get_my_bill | using the &amp;quot;get_my_bill&amp;quot; script to estimate costs]]&lt;br /&gt;
&lt;br /&gt;
== Management ==&lt;br /&gt;
The Project Leader of Anunna is Stephen Janssen (Wageningen UR, FB-IT, Service Management). [[User:dawes001 | Gwen Dawes (Wageningen UR, FB-IT, Infrastructure)]] and [[User:bexke002 | Stefan Bexkens (Wageningen UR, FB-IT, Infrastructure)]] are responsible for [[Maintenance_and_Management | Maintenance and Management]] of the cluster.&lt;br /&gt;
&lt;br /&gt;
* [[Roadmap | Ambitions regarding innovation, support and administration of Anunna ]]&lt;br /&gt;
&lt;br /&gt;
= Miscellaneous =&lt;br /&gt;
* [[Mailinglist | Electronic mail discussion lists]]&lt;br /&gt;
* [[History_of_the_Cluster | Historical information on the startup of Anunna]]&lt;br /&gt;
* [[Bioinformatics_tips_tricks_workflows | Bioinformatics tips, tricks, and workflows]]&lt;br /&gt;
* [[Parallel_R_code_on_SLURM | Running parallel R code on SLURM]]&lt;br /&gt;
* [[Convert_between_MediaWiki_and_other_formats | Convert between MediaWiki format and other formats]]&lt;br /&gt;
* [[Manual GitLab | GitLab: Create projects and add scripts]]&lt;br /&gt;
* [[Monitoring_executions | Monitoring job execution]]&lt;br /&gt;
* [[Shared_folders | Working with shared folders in the Lustre file system]]&lt;br /&gt;
&lt;br /&gt;
= See also =&lt;br /&gt;
* [[Maintenance_and_Management | Maintenance and Management]]&lt;br /&gt;
* [[BCData | BCData]]&lt;br /&gt;
* [[Mailinglist | Electronic mail discussion lists]]&lt;br /&gt;
* [[About_ABGC | About ABGC]]&lt;br /&gt;
* [[Computer_cluster | High Performance Computing @ABGC]]&lt;br /&gt;
* [[Lustre_PFS_layout | Lustre Parallel File System layout]]&lt;br /&gt;
&lt;br /&gt;
= External links =&lt;br /&gt;
{| width=&amp;quot;90%&amp;quot;&lt;br /&gt;
|- valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| width=&amp;quot;30%&amp;quot; |&lt;br /&gt;
* [http://www.breed4food.com/en/show/Breed4Food-initiative-reinforces-the-Netherlands-position-as-an-innovative-country-in-animal-breeding-and-genomics.htm Breed4Food programme]&lt;br /&gt;
* [http://www.wageningenur.nl/en/Expertise-Services/Facilities/CATAgroFood-3/CATAgroFood-3/Our-facilities/Show/High-Performance-Computing-Cluster-HPC.htm CATAgroFood offers an HPC facility]&lt;br /&gt;
* [http://www.cobb-vantress.com Cobb-Vantress homepage]&lt;br /&gt;
&lt;br /&gt;
| width=&amp;quot;30%&amp;quot; |&lt;br /&gt;
* [https://www.crv4all.nl CRV homepage]&lt;br /&gt;
* [http://www.hendrix-genetics.com Hendrix Genetics homepage]&lt;br /&gt;
* [http://www.topigs.com TOPIGS homepage]&lt;br /&gt;
| width=&amp;quot;30%&amp;quot; |&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Scientific_Linux Scientific Linux]&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Help:Cheatsheet Help with editing Wiki pages]&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Megen002</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=Main_Page&amp;diff=2070</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=Main_Page&amp;diff=2070"/>
		<updated>2020-03-02T16:50:52Z</updated>

		<summary type="html">&lt;p&gt;Megen002: /* Using Anunna */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Anunna is a [http://en.wikipedia.org/wiki/High-performance_computing High Performance Computing] (HPC) infrastructure hosted by [http://www.wageningenur.nl/nl/activiteit/Opening-High-Performance-Computing-cluster-HPC.htm Wageningen University &amp;amp; Research Centre]. It is open for use by all WUR research groups as well as other organizations, including companies, that have collaborative projects with WUR.&lt;br /&gt;
&lt;br /&gt;
= Using Anunna =&lt;br /&gt;
* [[Tariffs | Costs associated with resource usage]]&lt;br /&gt;
&lt;br /&gt;
== Gaining access to Anunna==&lt;br /&gt;
Access to the cluster and file transfer are traditionally done via [http://en.wikipedia.org/wiki/Secure_Shell SSH and SFTP].&lt;br /&gt;
* [[log_in_to_B4F_cluster | Logging into cluster using ssh and file transfer]]&lt;br /&gt;
* [[Services | Alternative access methods, and extra features and services on Anunna]]&lt;br /&gt;
* [[Filesystems | Accessible storage methods on Anunna]]&lt;br /&gt;
&lt;br /&gt;
== Access Policy ==&lt;br /&gt;
[[Access_Policy | Main Article: Access Policy]]&lt;br /&gt;
&lt;br /&gt;
Access must be granted explicitly (FB-IT creates an account on the cluster for you). Use of resources is regulated by the scheduler: priority to the system&#039;s resources depends on which queues (&#039;partitions&#039;) have been granted to a user. Note that the use of Anunna is not free of charge. The list price of CPU time and storage, and possible discounts on that list price for your organisation, can be obtained from CAT-AGRO or FB-ICT.&lt;br /&gt;
&lt;br /&gt;
== testing ==&lt;br /&gt;
* [[New_page | my shiny new test page]]&lt;br /&gt;
&lt;br /&gt;
= Events =&lt;br /&gt;
* [[Courses]] that have happened and are happening&lt;br /&gt;
* [[Downtime]] that will affect all users&lt;br /&gt;
* [[Meetings]] that may affect the policies of Anunna&lt;br /&gt;
&lt;br /&gt;
= Other Software =&lt;br /&gt;
&lt;br /&gt;
== Cluster Management Software and Scheduler ==&lt;br /&gt;
Anunna uses Bright Cluster Manager software for overall cluster management, and Slurm as job scheduler.&lt;br /&gt;
* [[BCM_on_B4F_cluster | Monitor cluster status with BCM]]&lt;br /&gt;
* [[Using_Slurm | Submit jobs with Slurm]]&lt;br /&gt;
* [[node_usage_graph | See the cluster&#039;s current workload with &#039;node_usage_graph&#039;]]&lt;br /&gt;
* [[SLURM_Compare | Rosetta Stone of Workload Managers]]&lt;br /&gt;
&lt;br /&gt;
== Installation of software by users ==&lt;br /&gt;
&lt;br /&gt;
* [[Domain_specific_software_on_B4Fcluster_installation_by_users | Installing domain specific software: installation by users]]&lt;br /&gt;
* [[Setting local variables]]&lt;br /&gt;
* [[Installing_R_packages_locally | Installing R packages locally]]&lt;br /&gt;
* [[Setting_up_Python_virtualenv | Setting up and using a virtual environment for Python3 ]]&lt;br /&gt;
* [[Virtual_environment_Python_3.4_or_higher | Setting up and using a virtual environment for Python3.4 or higher ]]&lt;br /&gt;
* [[Installing WRF and WPS]]&lt;br /&gt;
&lt;br /&gt;
== Installed software ==&lt;br /&gt;
&lt;br /&gt;
* [[Globally_installed_software | Globally installed software]]&lt;br /&gt;
* [[ABGC_modules | ABGC specific modules]]&lt;br /&gt;
&lt;br /&gt;
= Useful Notes = &lt;br /&gt;
&lt;br /&gt;
== Being in control of Environment parameters ==&lt;br /&gt;
&lt;br /&gt;
* [[Using_environment_modules | Using environment modules]]&lt;br /&gt;
* [[Setting local variables]]&lt;br /&gt;
* [[Setting_TMPDIR | Set a custom temporary directory location]]&lt;br /&gt;
* [[Installing_R_packages_locally | Installing R packages locally]]&lt;br /&gt;
* [[Setting_up_Python_virtualenv | Setting up and using a virtual environment for Python3 ]]&lt;br /&gt;
&lt;br /&gt;
== Controlling costs ==&lt;br /&gt;
&lt;br /&gt;
* [[SACCT | using SACCT to see your costs]]&lt;br /&gt;
* [[get_my_bill | using the &amp;quot;get_my_bill&amp;quot; script to estimate costs]]&lt;br /&gt;
&lt;br /&gt;
== Management ==&lt;br /&gt;
The Project Leader of Anunna is Stephen Janssen (Wageningen UR, FB-IT, Service Management). [[User:dawes001 | Gwen Dawes (Wageningen UR, FB-IT, Infrastructure)]] and [[User:bexke002 | Stefan Bexkens (Wageningen UR, FB-IT, Infrastructure)]] are responsible for [[Maintenance_and_Management | Maintenance and Management]] of the cluster.&lt;br /&gt;
&lt;br /&gt;
* [[Roadmap | Ambitions regarding innovation, support and administration of Anunna ]]&lt;br /&gt;
&lt;br /&gt;
= Miscellaneous =&lt;br /&gt;
* [[Mailinglist | Electronic mail discussion lists]]&lt;br /&gt;
* [[History_of_the_Cluster | Historical information on the startup of Anunna]]&lt;br /&gt;
* [[Bioinformatics_tips_tricks_workflows | Bioinformatics tips, tricks, and workflows]]&lt;br /&gt;
* [[Parallel_R_code_on_SLURM | Running parallel R code on SLURM]]&lt;br /&gt;
* [[Convert_between_MediaWiki_and_other_formats | Convert between MediaWiki format and other formats]]&lt;br /&gt;
* [[Manual GitLab | GitLab: Create projects and add scripts]]&lt;br /&gt;
* [[Monitoring_executions | Monitoring job execution]]&lt;br /&gt;
* [[Shared_folders | Working with shared folders in the Lustre file system]]&lt;br /&gt;
&lt;br /&gt;
= See also =&lt;br /&gt;
* [[Maintenance_and_Management | Maintenance and Management]]&lt;br /&gt;
* [[BCData | BCData]]&lt;br /&gt;
* [[Mailinglist | Electronic mail discussion lists]]&lt;br /&gt;
* [[About_ABGC | About ABGC]]&lt;br /&gt;
* [[Computer_cluster | High Performance Computing @ABGC]]&lt;br /&gt;
* [[Lustre_PFS_layout | Lustre Parallel File System layout]]&lt;br /&gt;
&lt;br /&gt;
= External links =&lt;br /&gt;
{| width=&amp;quot;90%&amp;quot;&lt;br /&gt;
|- valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| width=&amp;quot;30%&amp;quot; |&lt;br /&gt;
* [http://www.breed4food.com/en/show/Breed4Food-initiative-reinforces-the-Netherlands-position-as-an-innovative-country-in-animal-breeding-and-genomics.htm Breed4Food programme]&lt;br /&gt;
* [http://www.wageningenur.nl/en/Expertise-Services/Facilities/CATAgroFood-3/CATAgroFood-3/Our-facilities/Show/High-Performance-Computing-Cluster-HPC.htm CATAgroFood offers an HPC facility]&lt;br /&gt;
* [http://www.cobb-vantress.com Cobb-Vantress homepage]&lt;br /&gt;
&lt;br /&gt;
| width=&amp;quot;30%&amp;quot; |&lt;br /&gt;
* [https://www.crv4all.nl CRV homepage]&lt;br /&gt;
* [http://www.hendrix-genetics.com Hendrix Genetics homepage]&lt;br /&gt;
* [http://www.topigs.com TOPIGS homepage]&lt;br /&gt;
| width=&amp;quot;30%&amp;quot; |&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Scientific_Linux Scientific Linux]&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Help:Cheatsheet Help with editing Wiki pages]&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Megen002</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=New_page&amp;diff=2069</id>
		<title>New page</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=New_page&amp;diff=2069"/>
		<updated>2020-03-02T16:49:20Z</updated>

		<summary type="html">&lt;p&gt;Megen002: Created page with &amp;quot;Now here is some stuff&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Now here is some stuff&lt;/div&gt;</summary>
		<author><name>Megen002</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=Creating_sbatch_script&amp;diff=1779</id>
		<title>Creating sbatch script</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=Creating_sbatch_script&amp;diff=1779"/>
		<updated>2017-06-07T14:26:35Z</updated>

		<summary type="html">&lt;p&gt;Megen002: /* receiving mailed updates */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&lt;br /&gt;
== A skeleton Slurm script ==&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#-----------------------------Mail address-----------------------------&lt;br /&gt;
#SBATCH --mail-user=&lt;br /&gt;
#SBATCH --mail-type=ALL&lt;br /&gt;
#-----------------------------Output files-----------------------------&lt;br /&gt;
#SBATCH --output=output_%j.txt&lt;br /&gt;
#SBATCH --error=error_output_%j.txt&lt;br /&gt;
#-----------------------------Other information------------------------&lt;br /&gt;
#SBATCH --comment=&lt;br /&gt;
##SBATCH --account=&lt;br /&gt;
#-----------------------------Required resources-----------------------&lt;br /&gt;
#SBATCH --partition=&lt;br /&gt;
#SBATCH --time=0-0:0:0&lt;br /&gt;
#SBATCH --ntasks=&lt;br /&gt;
#SBATCH --cpus-per-task=&lt;br /&gt;
#SBATCH --mem-per-cpu=&lt;br /&gt;
&lt;br /&gt;
#-----------------------------Environment, Operations and Job steps----&lt;br /&gt;
#load modules&lt;br /&gt;
&lt;br /&gt;
#export variables&lt;br /&gt;
&lt;br /&gt;
#your job&lt;br /&gt;
&lt;br /&gt;
              &lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Explanation of used SBATCH parameters==&lt;br /&gt;
=== Adding accounting information or project number ===&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --account=773320000&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Charge resources used by this job to the specified account. The account is an arbitrary string. The account name may be changed after job submission using the &amp;lt;tt&amp;gt;scontrol&amp;lt;/tt&amp;gt; command. For WUR users, a project number or KTP number is advisable.&lt;br /&gt;
&lt;br /&gt;
===time limit===&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --time=1200&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
A time limit of zero requests that no time limit be imposed. Acceptable time formats include &amp;quot;minutes&amp;quot;, &amp;quot;minutes:seconds&amp;quot;, &amp;quot;hours:minutes:seconds&amp;quot;, &amp;quot;days-hours&amp;quot;, &amp;quot;days-hours:minutes&amp;quot; and &amp;quot;days-hours:minutes:seconds&amp;quot;. So in this example the job will run for a maximum of 1200 minutes.&lt;br /&gt;
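For example, either of the following illustrative lines (use only one &amp;lt;code&amp;gt;--time&amp;lt;/code&amp;gt; directive per script) requests two hours or two and a half days respectively:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --time=2:00:00     # hours:minutes:seconds&lt;br /&gt;
#SBATCH --time=2-12:00     # days-hours:minutes&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;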
&lt;br /&gt;
===memory limit===&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --mem=2048&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
SLURM imposes a memory limit on each job. By default, it is deliberately small: 100 MB per node. If your job uses more than that, you’ll get an error saying your job exceeded the job memory limit. To set a larger limit, add to your job submission:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --mem X&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where X is the maximum amount of memory your job will use per node, in MB. The larger your working data set, the larger this needs to be, but the smaller the number the easier it is for the scheduler to find a place to run your job. To determine an appropriate value, start relatively large (job slots on average have about 4000 MB per core, but that’s much larger than needed for most jobs) and then use sacct to look at how much your job is actually using or used:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
$ sacct -o MaxRSS -j JOBID&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
where JOBID is the one you’re interested in. The number is in KB, so divide by 1024 to get a rough idea of what to use with --mem (set it to something a little larger than that, since you’re defining a hard upper limit). If your job completed long in the past, you may have to tell sacct to look further back in time by adding a start time with -S YYYY-MM-DD. Note that for parallel jobs spanning multiple nodes, this is the maximum memory used on any one node; if you’re not setting an even distribution of tasks per node (e.g. with --ntasks-per-node), the same job could have very different values when run at different times.&lt;br /&gt;
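As a rough sketch (the job id and the reported value are hypothetical examples), that workflow could look like this:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
# peak memory of an earlier job, looking back far enough in the accounting data&lt;br /&gt;
$ sacct -o MaxRSS -j 123456 -S 2017-01-01&lt;br /&gt;
# suppose it reports 1536000K: 1536000 / 1024 = 1500 MB,&lt;br /&gt;
# so request a little more than that next time&lt;br /&gt;
#SBATCH --mem=1600&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;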
&lt;br /&gt;
===number of tasks===&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --ntasks=1&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
sbatch does not launch tasks; it requests an allocation of resources and submits a batch script. This option advises the SLURM controller that job steps run within the allocation will launch at most that number of tasks, and to provide sufficient resources for them. The default is one task per node, but note that the --cpus-per-task option will change this default.&lt;br /&gt;
&lt;br /&gt;
When requesting multiple tasks, you may or may not want the job to be partitioned among multiple nodes. You can specify the number of nodes using the &amp;lt;code&amp;gt;-N&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;--nodes&amp;lt;/code&amp;gt; flag. If you provide only one number, it is used as both the minimum and the maximum. For instance:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
This should force your job to be scheduled to a single node.&lt;br /&gt;
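For instance, a hypothetical request that packs eight single-CPU tasks onto one node (assuming the node type has at least eight cores) could combine these options:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --ntasks=8&lt;br /&gt;
#SBATCH --cpus-per-task=1&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;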
&lt;br /&gt;
Because the cluster has a hybrid configuration, i.e. normal and fat nodes, it may be prudent to schedule your job specifically for one or the other node type, depending for instance on memory requirements. This can be done by using the &amp;lt;code&amp;gt;-C&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;--constraint&amp;lt;/code&amp;gt; flag.&lt;br /&gt;
&lt;br /&gt;
===constraints: selecting the normal or fat nodes===&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --constraint=normalmem&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
The example above will result in jobs being scheduled to the regular compute nodes. By using &amp;lt;code&amp;gt;largemem&amp;lt;/code&amp;gt; instead, the job will be scheduled specifically to one of the fat nodes.&lt;br /&gt;
&lt;br /&gt;
===output (stderr,stdout) directed to file===&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --output=output_%j.txt&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Instruct SLURM to connect the batch script&#039;s standard output directly to the file name specified in the &amp;quot;filename pattern&amp;quot;. By default both standard output and standard error are directed to a file of the name &amp;quot;slurm-%j.out&amp;quot;, where the &amp;quot;%j&amp;quot; is replaced with the job allocation number. See the --input option for filename specification options.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --error=error_output_%j.txt&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Instruct SLURM to connect the batch script&#039;s standard error directly to the file name specified in the &amp;quot;filename pattern&amp;quot;. By default both standard output and standard error are directed to a file of the name &amp;quot;slurm-%j.out&amp;quot;, where the &amp;quot;%j&amp;quot; is replaced with the job allocation number. See the --input option for filename specification options.&lt;br /&gt;
&lt;br /&gt;
===adding a job name===&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --job-name=calc_pi.py&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Specify a name for the job allocation. The specified name will appear along with the job id number when querying running jobs on the system. The default is the name of the batch script, or just &amp;quot;sbatch&amp;quot; if the script is read on sbatch&#039;s standard input.&lt;br /&gt;
&lt;br /&gt;
===partition for resource allocation===&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --partition=ABGC_Std&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Request a specific partition for the resource allocation. It is preferred to use your organization&#039;s partition.&lt;br /&gt;
&lt;br /&gt;
===receiving mailed updates===&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --mail-type=ALL&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Notify user by email when certain event types occur. Valid type values are BEGIN, END, FAIL, REQUEUE, and ALL (any state change). The user to be notified is indicated with --mail-user.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --mail-user=yourname001@wur.nl&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Email address to use.&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
* [[B4F_cluster | HPC Agrogenomics Cluster]]&lt;br /&gt;
* [[SLURM_on_B4F_cluster#Batch_script | Redirect to SLURM on B4F cluster]]&lt;/div&gt;</summary>
		<author><name>Megen002</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=Creating_sbatch_script&amp;diff=1778</id>
		<title>Creating sbatch script</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=Creating_sbatch_script&amp;diff=1778"/>
		<updated>2017-06-07T14:25:52Z</updated>

		<summary type="html">&lt;p&gt;Megen002: /* time limit */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&lt;br /&gt;
== A skeleton Slurm script ==&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#-----------------------------Mail address-----------------------------&lt;br /&gt;
#SBATCH --mail-user=&lt;br /&gt;
#SBATCH --mail-type=ALL&lt;br /&gt;
#-----------------------------Output files-----------------------------&lt;br /&gt;
#SBATCH --output=output_%j.txt&lt;br /&gt;
#SBATCH --error=error_output_%j.txt&lt;br /&gt;
#-----------------------------Other information------------------------&lt;br /&gt;
#SBATCH --comment=&lt;br /&gt;
##SBATCH --account=&lt;br /&gt;
#-----------------------------Required resources-----------------------&lt;br /&gt;
#SBATCH --partition=&lt;br /&gt;
#SBATCH --time=0-0:0:0&lt;br /&gt;
#SBATCH --ntasks=&lt;br /&gt;
#SBATCH --cpus-per-task=&lt;br /&gt;
#SBATCH --mem-per-cpu=&lt;br /&gt;
&lt;br /&gt;
#-----------------------------Environment, Operations and Job steps----&lt;br /&gt;
#load modules&lt;br /&gt;
&lt;br /&gt;
#export variables&lt;br /&gt;
&lt;br /&gt;
#your job&lt;br /&gt;
&lt;br /&gt;
              &lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Explanation of used SBATCH parameters==&lt;br /&gt;
=== Adding accounting information or project number ===&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --account=773320000&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Charge resources used by this job to the specified account. The account is an arbitrary string. The account name may be changed after job submission using the &amp;lt;tt&amp;gt;scontrol&amp;lt;/tt&amp;gt; command. For WUR users, a project number or KTP number is advisable.&lt;br /&gt;
&lt;br /&gt;
===time limit===&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --time=1200&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
A time limit of zero requests that no time limit be imposed. Acceptable time formats include &amp;quot;minutes&amp;quot;, &amp;quot;minutes:seconds&amp;quot;, &amp;quot;hours:minutes:seconds&amp;quot;, &amp;quot;days-hours&amp;quot;, &amp;quot;days-hours:minutes&amp;quot; and &amp;quot;days-hours:minutes:seconds&amp;quot;. So in this example the job will run for a maximum of 1200 minutes.&lt;br /&gt;
&lt;br /&gt;
===memory limit===&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --mem=2048&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
SLURM imposes a memory limit on each job. By default, it is deliberately small: 100 MB per node. If your job uses more than that, you’ll get an error saying your job exceeded the job memory limit. To set a larger limit, add to your job submission:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --mem X&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where X is the maximum amount of memory your job will use per node, in MB. The larger your working data set, the larger this needs to be, but the smaller the number the easier it is for the scheduler to find a place to run your job. To determine an appropriate value, start relatively large (job slots on average have about 4000 MB per core, but that’s much larger than needed for most jobs) and then use sacct to look at how much your job is actually using or used:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
$ sacct -o MaxRSS -j JOBID&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
where JOBID is the one you’re interested in. The number is in KB, so divide by 1024 to get a rough idea of what to use with --mem (set it to something a little larger than that, since you’re defining a hard upper limit). If your job completed long in the past, you may have to tell sacct to look further back in time by adding a start time with -S YYYY-MM-DD. Note that for parallel jobs spanning multiple nodes, this is the maximum memory used on any one node; if you’re not setting an even distribution of tasks per node (e.g. with --ntasks-per-node), the same job could have very different values when run at different times.&lt;br /&gt;
&lt;br /&gt;
===number of tasks===&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --ntasks=1&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
sbatch does not launch tasks; it requests an allocation of resources and submits a batch script. This option advises the SLURM controller that job steps run within the allocation will launch at most that number of tasks, and to provide sufficient resources for them. The default is one task per node, but note that the --cpus-per-task option will change this default.&lt;br /&gt;
&lt;br /&gt;
When requesting multiple tasks, you may or may not want the job to be partitioned among multiple nodes. You can specify the number of nodes using the &amp;lt;code&amp;gt;-N&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;--nodes&amp;lt;/code&amp;gt; flag. If you provide only one number, it is used as both the minimum and the maximum. For instance:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
This should force your job to be scheduled to a single node.&lt;br /&gt;
&lt;br /&gt;
Because the cluster has a hybrid configuration, i.e. normal and fat nodes, it may be prudent to schedule your job specifically for one or the other node type, depending for instance on memory requirements. This can be done by using the &amp;lt;code&amp;gt;-C&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;--constraint&amp;lt;/code&amp;gt; flag.&lt;br /&gt;
&lt;br /&gt;
===constraints: selecting the normal or fat nodes===&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --constraint=normalmem&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
The example above will result in jobs being scheduled to the regular compute nodes. By using &amp;lt;code&amp;gt;largemem&amp;lt;/code&amp;gt; instead, the job will be scheduled specifically to one of the fat nodes.&lt;br /&gt;
&lt;br /&gt;
===output (stderr,stdout) directed to file===&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --output=output_%j.txt&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Instruct SLURM to connect the batch script&#039;s standard output directly to the file name specified in the &amp;quot;filename pattern&amp;quot;. By default both standard output and standard error are directed to a file of the name &amp;quot;slurm-%j.out&amp;quot;, where the &amp;quot;%j&amp;quot; is replaced with the job allocation number. See the --input option for filename specification options.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --error=error_output_%j.txt&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Instruct SLURM to connect the batch script&#039;s standard error directly to the file name specified in the &amp;quot;filename pattern&amp;quot;. By default both standard output and standard error are directed to a file of the name &amp;quot;slurm-%j.out&amp;quot;, where the &amp;quot;%j&amp;quot; is replaced with the job allocation number. See the --input option for filename specification options.&lt;br /&gt;
&lt;br /&gt;
===adding a job name===&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --job-name=calc_pi.py&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Specify a name for the job allocation. The specified name will appear along with the job id number when querying running jobs on the system. The default is the name of the batch script, or just &amp;quot;sbatch&amp;quot; if the script is read on sbatch&#039;s standard input.&lt;br /&gt;
&lt;br /&gt;
===partition for resource allocation===&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --partition=ABGC_Std&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Request a specific partition for the resource allocation. It is preferred to use your organization&#039;s partition.&lt;br /&gt;
&lt;br /&gt;
===receiving mailed updates===&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --mail-type=ALL&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Notify user by email when certain event types occur. Valid type values are BEGIN, END, FAIL, REQUEUE, and ALL (any state change). The user to be notified is indicated with --mail-user.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --mail-user=email@org.nl&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Email address to use.&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
* [[B4F_cluster | HPC Agrogenomics Cluster]]&lt;br /&gt;
* [[SLURM_on_B4F_cluster#Batch_script | Redirect to SLURM on B4F cluster]]&lt;/div&gt;</summary>
		<author><name>Megen002</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=Creating_sbatch_script&amp;diff=1777</id>
		<title>Creating sbatch script</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=Creating_sbatch_script&amp;diff=1777"/>
		<updated>2017-06-07T14:24:34Z</updated>

		<summary type="html">&lt;p&gt;Megen002: /* output (stderr,stdout) directed to file */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&lt;br /&gt;
== A skeleton Slurm script ==&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#-----------------------------Mail address-----------------------------&lt;br /&gt;
#SBATCH --mail-user=&lt;br /&gt;
#SBATCH --mail-type=ALL&lt;br /&gt;
#-----------------------------Output files-----------------------------&lt;br /&gt;
#SBATCH --output=output_%j.txt&lt;br /&gt;
#SBATCH --error=error_output_%j.txt&lt;br /&gt;
#-----------------------------Other information------------------------&lt;br /&gt;
#SBATCH --comment=&lt;br /&gt;
##SBATCH --account=&lt;br /&gt;
#-----------------------------Required resources-----------------------&lt;br /&gt;
#SBATCH --partition=&lt;br /&gt;
#SBATCH --time=0-0:0:0&lt;br /&gt;
#SBATCH --ntasks=&lt;br /&gt;
#SBATCH --cpus-per-task=&lt;br /&gt;
#SBATCH --mem-per-cpu=&lt;br /&gt;
&lt;br /&gt;
#-----------------------------Environment, Operations and Job steps----&lt;br /&gt;
#load modules&lt;br /&gt;
&lt;br /&gt;
#export variables&lt;br /&gt;
&lt;br /&gt;
#your job&lt;br /&gt;
&lt;br /&gt;
              &lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Explanation of used SBATCH parameters==&lt;br /&gt;
=== Adding accounting information or project number ===&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --account=773320000&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Charge resources used by this job to the specified account. The account is an arbitrary string. The account name may be changed after job submission using the &amp;lt;tt&amp;gt;scontrol&amp;lt;/tt&amp;gt; command. For WUR users, a project number or KTP number is advisable.&lt;br /&gt;
&lt;br /&gt;
===time limit===&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --time=1200&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
A time limit of zero requests that no time limit be imposed. Acceptable time formats include &amp;quot;minutes&amp;quot;, &amp;quot;minutes:seconds&amp;quot;, &amp;quot;hours:minutes:seconds&amp;quot;, &amp;quot;days-hours&amp;quot;, &amp;quot;days-hours:minutes&amp;quot; and &amp;quot;days-hours:minutes:seconds&amp;quot;. So in this example the job will run for a maximum of 1200 minutes.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===memory limit===&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --mem=2048&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
SLURM imposes a memory limit on each job. By default, it is deliberately small: 100 MB per node. If your job uses more than that, you’ll get an error saying your job exceeded the job memory limit. To set a larger limit, add to your job submission:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --mem X&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where X is the maximum amount of memory your job will use per node, in MB. The larger your working data set, the larger this needs to be, but the smaller the number the easier it is for the scheduler to find a place to run your job. To determine an appropriate value, start relatively large (job slots on average have about 4000 MB per core, but that’s much larger than needed for most jobs) and then use sacct to look at how much your job is actually using or used:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
$ sacct -o MaxRSS -j JOBID&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
where JOBID is the one you’re interested in. The number is in KB, so divide by 1024 to get a rough idea of what to use with --mem (set it to something a little larger than that, since you’re defining a hard upper limit). If your job completed long in the past, you may have to tell sacct to look further back in time by adding a start time with -S YYYY-MM-DD. Note that for parallel jobs spanning multiple nodes, this is the maximum memory used on any one node; if you’re not setting an even distribution of tasks per node (e.g. with --ntasks-per-node), the same job could have very different values when run at different times.&lt;br /&gt;
&lt;br /&gt;
===number of tasks===&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --ntasks=1&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
sbatch does not launch tasks; it requests an allocation of resources and submits a batch script. This option advises the SLURM controller that job steps run within the allocation will launch at most that number of tasks, and to provide sufficient resources for them. The default is one task per node, but note that the --cpus-per-task option will change this default.&lt;br /&gt;
&lt;br /&gt;
When requesting multiple tasks, you may or may not want the job to be partitioned among multiple nodes. You can specify the number of nodes using the &amp;lt;code&amp;gt;-N&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;--nodes&amp;lt;/code&amp;gt; flag. If you provide only one number, it is used as both the minimum and the maximum. For instance:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
This should force your job to be scheduled to a single node.&lt;br /&gt;
&lt;br /&gt;
Because the cluster has a hybrid configuration, i.e. normal and fat nodes, it may be prudent to schedule your job specifically for one or the other node type, depending for instance on memory requirements. This can be done by using the &amp;lt;code&amp;gt;-C&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;--constraint&amp;lt;/code&amp;gt; flag.&lt;br /&gt;
&lt;br /&gt;
===constraints: selecting the normal or fat nodes===&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --constraint=normalmem&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
The example above will result in jobs being scheduled to the regular compute nodes. By using &amp;lt;code&amp;gt;largemem&amp;lt;/code&amp;gt; instead, the job will be scheduled specifically to one of the fat nodes.&lt;br /&gt;
&lt;br /&gt;
===output (stderr,stdout) directed to file===&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --output=output_%j.txt&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Instruct SLURM to connect the batch script&#039;s standard output directly to the file name specified in the &amp;quot;filename pattern&amp;quot;. By default both standard output and standard error are directed to a file of the name &amp;quot;slurm-%j.out&amp;quot;, where the &amp;quot;%j&amp;quot; is replaced with the job allocation number. See the --input option for filename specification options.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --error=error_output_%j.txt&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Instruct SLURM to connect the batch script&#039;s standard error directly to the file name specified in the &amp;quot;filename pattern&amp;quot;. By default both standard output and standard error are directed to a file of the name &amp;quot;slurm-%j.out&amp;quot;, where the &amp;quot;%j&amp;quot; is replaced with the job allocation number. See the --input option for filename specification options.&lt;br /&gt;
&lt;br /&gt;
===adding a job name===&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --job-name=calc_pi.py&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Specify a name for the job allocation. The specified name will appear along with the job id number when querying running jobs on the system. The default is the name of the batch script, or just &amp;quot;sbatch&amp;quot; if the script is read on sbatch&#039;s standard input.&lt;br /&gt;
&lt;br /&gt;
===partition for resource allocation===&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --partition=ABGC_Std&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Request a specific partition for the resource allocation. It is preferred to use your organization&#039;s partition.&lt;br /&gt;
&lt;br /&gt;
===receiving mailed updates===&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --mail-type=ALL&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Notify user by email when certain event types occur. Valid type values are BEGIN, END, FAIL, REQUEUE, and ALL (any state change). The user to be notified is indicated with --mail-user.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --mail-user=email@org.nl&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Email address to use.&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
* [[B4F_cluster | HPC Agrogenomics Cluster]]&lt;br /&gt;
* [[SLURM_on_B4F_cluster#Batch_script | Redirect to SLURM on B4F cluster]]&lt;/div&gt;</summary>
		<author><name>Megen002</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=Creating_sbatch_script&amp;diff=1776</id>
		<title>Creating sbatch script</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=Creating_sbatch_script&amp;diff=1776"/>
		<updated>2017-06-07T14:22:39Z</updated>

		<summary type="html">&lt;p&gt;Megen002: /* Explanation of used SBATCH parameters */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&lt;br /&gt;
== A skeleton Slurm script ==&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#-----------------------------Mail address-----------------------------&lt;br /&gt;
#SBATCH --mail-user=&lt;br /&gt;
#SBATCH --mail-type=ALL&lt;br /&gt;
#-----------------------------Output files-----------------------------&lt;br /&gt;
#SBATCH --output=output_%j.txt&lt;br /&gt;
#SBATCH --error=error_output_%j.txt&lt;br /&gt;
#-----------------------------Other information------------------------&lt;br /&gt;
#SBATCH --comment=&lt;br /&gt;
##SBATCH --account=&lt;br /&gt;
#-----------------------------Required resources-----------------------&lt;br /&gt;
#SBATCH --partition=&lt;br /&gt;
#SBATCH --time=0-0:0:0&lt;br /&gt;
#SBATCH --ntasks=&lt;br /&gt;
#SBATCH --cpus-per-task=&lt;br /&gt;
#SBATCH --mem-per-cpu=&lt;br /&gt;
&lt;br /&gt;
#-----------------------------Environment, Operations and Job steps----&lt;br /&gt;
#load modules&lt;br /&gt;
&lt;br /&gt;
#export variables&lt;br /&gt;
&lt;br /&gt;
#your job&lt;br /&gt;
&lt;br /&gt;
              &lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Explanation of used SBATCH parameters==&lt;br /&gt;
=== Adding accounting information or project number ===&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --account=773320000&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Charge resources used by this job to the specified account. The account is an arbitrary string. The account name may be changed after job submission using the &amp;lt;tt&amp;gt;scontrol&amp;lt;/tt&amp;gt; command. For WUR users, a project number or KTP number is advisable.&lt;br /&gt;
&lt;br /&gt;
===time limit===&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --time=1200&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
A time limit of zero requests that no time limit be imposed. Acceptable time formats include &amp;quot;minutes&amp;quot;, &amp;quot;minutes:seconds&amp;quot;, &amp;quot;hours:minutes:seconds&amp;quot;, &amp;quot;days-hours&amp;quot;, &amp;quot;days-hours:minutes&amp;quot; and &amp;quot;days-hours:minutes:seconds&amp;quot;. So in this example the job will run for a maximum of 1200 minutes.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===memory limit===&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --mem=2048&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
SLURM imposes a memory limit on each job. By default, it is deliberately small: 100 MB per node. If your job uses more than that, you’ll get an error saying your job exceeded the job memory limit. To set a larger limit, add to your job submission:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --mem X&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where X is the maximum amount of memory your job will use per node, in MB. The larger your working data set, the larger this needs to be, but the smaller the number the easier it is for the scheduler to find a place to run your job. To determine an appropriate value, start relatively large (job slots on average have about 4000 MB per core, but that’s much larger than needed for most jobs) and then use sacct to look at how much your job is actually using or used:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
$ sacct -o MaxRSS -j JOBID&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
where JOBID is the one you’re interested in. The number is in KB, so divide by 1024 to get a rough idea of what to use with --mem (set it to something a little larger than that, since you’re defining a hard upper limit). If your job completed long in the past, you may have to tell sacct to look further back in time by adding a start time with -S YYYY-MM-DD. Note that for parallel jobs spanning multiple nodes, this is the maximum memory used on any one node; if you’re not setting an even distribution of tasks per node (e.g. with --ntasks-per-node), the same job could have very different values when run at different times.&lt;br /&gt;
&lt;br /&gt;
===number of tasks===&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --ntasks=1&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
sbatch does not launch tasks; it requests an allocation of resources and submits a batch script. This option advises the SLURM controller that job steps run within the allocation will launch at most that number of tasks, and to provide sufficient resources for them. The default is one task per node, but note that the --cpus-per-task option will change this default.&lt;br /&gt;
&lt;br /&gt;
When requesting multiple tasks, you may or may not want the job to be partitioned among multiple nodes. You can specify the number of nodes using the &amp;lt;code&amp;gt;-N&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;--nodes&amp;lt;/code&amp;gt; flag. If you provide only one number, it is used as both the minimum and the maximum. For instance:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
This should force your job to be scheduled to a single node.&lt;br /&gt;
&lt;br /&gt;
Because the cluster has a hybrid configuration, i.e. normal and fat nodes, it may be prudent to schedule your job specifically for one or the other node type, depending for instance on memory requirements. This can be done by using the &amp;lt;code&amp;gt;-C&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;--constraint&amp;lt;/code&amp;gt; flag.&lt;br /&gt;
&lt;br /&gt;
===constraints: selecting the normal or fat nodes===&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --constraint=normalmem&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
The example above will result in jobs being scheduled to the regular compute nodes. By using &amp;lt;code&amp;gt;largemem&amp;lt;/code&amp;gt; instead, the job will be scheduled specifically to one of the fat nodes.&lt;br /&gt;
&lt;br /&gt;
===output (stderr,stdout) directed to file===&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --output=output_%j.txt&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Instruct SLURM to connect the batch script&#039;s standard output directly to the file name specified in the &amp;quot;filename pattern&amp;quot;. By default both standard output and standard error are directed to a file of the name &amp;quot;slurm-%j.out&amp;quot;, where the &amp;quot;%j&amp;quot; is replaced with the job allocation number. See the --input option for filename specification options.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --error=error_output_%j.txt&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Instruct SLURM to connect the batch script&#039;s standard error directly to the file name specified in the &amp;quot;filename pattern&amp;quot;. By default both standard output and standard error are directed to a file of the name &amp;quot;slurm-%j.out&amp;quot;, where the &amp;quot;%j&amp;quot; is replaced with the job allocation number. See the --input option for filename specification options.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===adding a job name===&lt;br /&gt;
#SBATCH --job-name=calc_pi.py&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Specify a name for the job allocation. The specified name will appear along with the job id number when querying running jobs on the system. The default is the name of the batch script, or just &amp;quot;sbatch&amp;quot; if the script is read on sbatch&#039;s standard input.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===partition for resource allocation===&lt;br /&gt;
#SBATCH --partition=ABGC_Std&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Request a specific partition for the resource allocation. It is preferred to use your organization&#039;s partition.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===receiving mailed updates===&lt;br /&gt;
#SBATCH --mail-type=ALL&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Notify user by email when certain event types occur. Valid type values are BEGIN, END, FAIL, REQUEUE, and ALL (any state change). The user to be notified is indicated with --mail-user.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --mail-user=email@org.nl&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Email address to use.&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
* [[B4F_cluster | HPC Agrogenomics Cluster]]&lt;br /&gt;
* [[SLURM_on_B4F_cluster#Batch_script | Redirect to SLURM on B4F cluster]]&lt;/div&gt;</summary>
		<author><name>Megen002</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=Creating_sbatch_script&amp;diff=1775</id>
		<title>Creating sbatch script</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=Creating_sbatch_script&amp;diff=1775"/>
		<updated>2017-06-07T14:17:44Z</updated>

		<summary type="html">&lt;p&gt;Megen002: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&lt;br /&gt;
== A skeleton Slurm script ==&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#-----------------------------Mail address-----------------------------&lt;br /&gt;
#SBATCH --mail-user=&lt;br /&gt;
#SBATCH --mail-type=ALL&lt;br /&gt;
#-----------------------------Output files-----------------------------&lt;br /&gt;
#SBATCH --output=output_%j.txt&lt;br /&gt;
#SBATCH --error=error_output_%j.txt&lt;br /&gt;
#-----------------------------Other information------------------------&lt;br /&gt;
#SBATCH --comment=&lt;br /&gt;
##SBATCH --account=&lt;br /&gt;
#-----------------------------Required resources-----------------------&lt;br /&gt;
#SBATCH --partition=&lt;br /&gt;
#SBATCH --time=0-0:0:0&lt;br /&gt;
#SBATCH --ntasks=&lt;br /&gt;
#SBATCH --cpus-per-task=&lt;br /&gt;
#SBATCH --mem-per-cpu=&lt;br /&gt;
&lt;br /&gt;
#-----------------------------Environment, Operations and Job steps----&lt;br /&gt;
#load modules&lt;br /&gt;
&lt;br /&gt;
#export variables&lt;br /&gt;
&lt;br /&gt;
#your job&lt;br /&gt;
&lt;br /&gt;
              &lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Explanation of used SBATCH parameters==&lt;br /&gt;
=== Adding accounting information or project number ===&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --account=773320000&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Charge resources used by this job to the specified account. The account is an arbitrary string. The account name may be changed after job submission using the &amp;lt;tt&amp;gt;scontrol&amp;lt;/tt&amp;gt; command. For WUR users, a project number or KTP number is advisable.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --time=1200&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
A time limit of zero requests that no time limit be imposed. Acceptable time formats include &amp;quot;minutes&amp;quot;, &amp;quot;minutes:seconds&amp;quot;, &amp;quot;hours:minutes:seconds&amp;quot;, &amp;quot;days-hours&amp;quot;, &amp;quot;days-hours:minutes&amp;quot; and &amp;quot;days-hours:minutes:seconds&amp;quot;. So in this example the job will run for a maximum of 1200 minutes.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --mem=2048&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
SLURM imposes a memory limit on each job. By default, it is deliberately small: 100 MB per node. If your job uses more than that, you’ll get an error saying your job exceeded the job memory limit. To set a larger limit, add to your job submission:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --mem X&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where X is the maximum amount of memory your job will use per node, in MB. The larger your working data set, the larger this needs to be, but the smaller the number the easier it is for the scheduler to find a place to run your job. To determine an appropriate value, start relatively large (job slots on average have about 4000 MB per core, but that’s much larger than needed for most jobs) and then use sacct to look at how much your job is actually using or used:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
$ sacct -o MaxRSS -j JOBID&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
where JOBID is the one you’re interested in. The number is in KB, so divide by 1024 to get a rough idea of what to use with --mem (set it to something a little larger than that, since you’re defining a hard upper limit). If your job completed long in the past, you may have to tell sacct to look further back in time by adding a start time with -S YYYY-MM-DD. Note that for parallel jobs spanning multiple nodes, this is the maximum memory used on any one node; if you’re not setting an even distribution of tasks per node (e.g. with --ntasks-per-node), the same job could have very different values when run at different times.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --ntasks=1&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
sbatch does not launch tasks, it requests an allocation of resources and submits a batch script. This option advises the SLURM controller that job steps run within the allocation will launch a maximum of number tasks and to provide for sufficient resources. The default is one task per node, but note that the --cpus-per-task option will change this default.&lt;br /&gt;
&lt;br /&gt;
When requesting multiple tasks, you may or may not want the job to be spread over multiple nodes. You can specify the number of nodes using the &amp;lt;code&amp;gt;-N&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;--nodes&amp;lt;/code&amp;gt; flag. If you provide only one number, it is used as both the minimum and the maximum. For instance:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
This should force your job to be scheduled to a single node.&lt;br /&gt;
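As a further illustration (the values are hypothetical), a job running four tasks spread over exactly two nodes could request:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --ntasks=4&lt;br /&gt;
#SBATCH --nodes=2&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;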
&lt;br /&gt;
Because the cluster has a hybrid configuration, i.e. normal and fat nodes, it may be prudent to schedule your job specifically for one or the other node type, depending, for instance, on memory requirements. This can be done by using the &amp;lt;code&amp;gt;-C&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;--constraint&amp;lt;/code&amp;gt; flag.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --constraint=normalmem&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
The example above will result in jobs being scheduled to the regular compute nodes. By using &amp;lt;code&amp;gt;largemem&amp;lt;/code&amp;gt; as option the job will specifically be scheduled to one of the fat nodes. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --output=output_%j.txt&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Instruct SLURM to connect the batch script&#039;s standard output directly to the file name specified in the &amp;quot;filename pattern&amp;quot;. By default both standard output and standard error are directed to a file of the name &amp;quot;slurm-%j.out&amp;quot;, where the &amp;quot;%j&amp;quot; is replaced with the job allocation number. See the --input option for filename specification options.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --error=error_output_%j.txt&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Instruct SLURM to connect the batch script&#039;s standard error directly to the file name specified in the &amp;quot;filename pattern&amp;quot;. By default both standard output and standard error are directed to a file of the name &amp;quot;slurm-%j.out&amp;quot;, where the &amp;quot;%j&amp;quot; is replaced with the job allocation number. See the --input option for filename specification options.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --job-name=calc_pi.py&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Specify a name for the job allocation. The specified name will appear along with the job id number when querying running jobs on the system. The default is the name of the batch script, or just &amp;quot;sbatch&amp;quot; if the script is read on sbatch&#039;s standard input.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --partition=ABGC_Std&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Request a specific partition for the resource allocation. It is preferred to use your organization&#039;s partition.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --mail-type=ALL&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Notify user by email when certain event types occur. Valid type values are BEGIN, END, FAIL, REQUEUE, and ALL (any state change). The user to be notified is indicated with --mail-user.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --mail-user=email@org.nl&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Email address to use.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
* [[B4F_cluster | HPC Agrogenomics Cluster]]&lt;br /&gt;
* [[SLURM_on_B4F_cluster#Batch_script | Redirect to SLURM on B4F cluster]]&lt;/div&gt;</summary>
		<author><name>Megen002</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=Using_Slurm&amp;diff=1774</id>
		<title>Using Slurm</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=Using_Slurm&amp;diff=1774"/>
		<updated>2017-06-07T14:12:21Z</updated>

		<summary type="html">&lt;p&gt;Megen002: /* Batch script */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The resource allocation / scheduling software on the B4F Cluster is [http://en.wikipedia.org/wiki/Simple_Linux_Utility_for_Resource_Management SLURM]: &#039;&#039;&#039;S&#039;&#039;&#039;imple &#039;&#039;&#039;L&#039;&#039;&#039;inux &#039;&#039;&#039;U&#039;&#039;&#039;tility for &#039;&#039;&#039;R&#039;&#039;&#039;esource &#039;&#039;&#039;M&#039;&#039;&#039;anagement.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Queues and defaults ==&lt;br /&gt;
&lt;br /&gt;
=== Queues ===&lt;br /&gt;
Every organization has 3 queues (in Slurm called partitions): a high, a standard and a low priority queue.&amp;lt;br&amp;gt;&lt;br /&gt;
The High queue gives jobs the highest priority (20), followed by the standard queue (10) and the Low queue (0).&amp;lt;br&amp;gt;&lt;br /&gt;
Low queue jobs will be resubmitted if a job with higher priority needs cluster resources that are occupied by Low queue jobs.&lt;br /&gt;
To find out which queues your account has been authorized for, type sinfo:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
PARTITION       AVAIL  TIMELIMIT  NODES  STATE NODELIST&lt;br /&gt;
ABGC_High      up   infinite     12  down* node[043-048,055-060]&lt;br /&gt;
ABGC_High      up   infinite      6    mix fat[001-002],node[002-005]&lt;br /&gt;
ABGC_High      up   infinite     44   idle node[001,006-042,049-054]&lt;br /&gt;
ABGC_Std       up   infinite     12  down* node[043-048,055-060]&lt;br /&gt;
ABGC_Std       up   infinite      6    mix fat[001-002],node[002-005]&lt;br /&gt;
ABGC_Std       up   infinite     44   idle node[001,006-042,049-054]&lt;br /&gt;
ABGC_Low       up   infinite     12  down* node[043-048,055-060]&lt;br /&gt;
ABGC_Low       up   infinite      6    mix fat[001-002],node[002-005]&lt;br /&gt;
ABGC_Low       up   infinite     44   idle node[001,006-042,049-054]&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Defaults ===&lt;br /&gt;
There is no default queue, so you need to specify which queue to use when submitting a job.&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;The default run time for a job is 1 hour!&#039;&#039;&#039; &amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Default memory limit is 100MB per node!&#039;&#039;&#039;&lt;br /&gt;
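For example, the queue, run time and memory can all be set explicitly in the job script header (the values below are placeholders):&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --partition=ABGC_Std&lt;br /&gt;
#SBATCH --time=4:00:00&lt;br /&gt;
#SBATCH --mem=2048&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;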
&lt;br /&gt;
== Submitting jobs: sbatch ==&lt;br /&gt;
&lt;br /&gt;
=== Example ===&lt;br /&gt;
Consider this simple Python 3 script that calculates digits of Pi:&lt;br /&gt;
&amp;lt;source lang=&#039;python&#039;&amp;gt;&lt;br /&gt;
from decimal import *&lt;br /&gt;
D=Decimal&lt;br /&gt;
getcontext().prec=10000000&lt;br /&gt;
p=sum(D(1)/16**k*(D(4)/(8*k+1)-D(2)/(8*k+4)-D(1)/(8*k+5)-D(1)/(8*k+6))for k in range(411))&lt;br /&gt;
print(str(p)[:10000002])&lt;br /&gt;
&amp;lt;/source&amp;gt; &lt;br /&gt;
&lt;br /&gt;
=== Loading modules ===&lt;br /&gt;
In order for this script to run, Python 3, which is not the default Python version on the cluster, first needs to be loaded into your environment. Availability of (different versions of) software can be checked with the following command:&lt;br /&gt;
  module avail&lt;br /&gt;
&lt;br /&gt;
The list should show that python3 is indeed available; it can then be loaded with the following command:&lt;br /&gt;
  module load python/3.3.3&lt;br /&gt;
&lt;br /&gt;
=== Batch script ===&lt;br /&gt;
[[Creating_sbatch_script | Main Article: Creating a batch script]]&lt;br /&gt;
&lt;br /&gt;
The following shell/slurm script can then be used to schedule the job using the sbatch command:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --account=773320000&lt;br /&gt;
#SBATCH --time=1200&lt;br /&gt;
#SBATCH --mem=2048&lt;br /&gt;
#SBATCH --ntasks=1&lt;br /&gt;
#SBATCH --output=output_%j.txt&lt;br /&gt;
#SBATCH --error=error_output_%j.txt&lt;br /&gt;
#SBATCH --job-name=calc_pi.py&lt;br /&gt;
#SBATCH --partition=ABGC_Std&lt;br /&gt;
#SBATCH --mail-type=ALL&lt;br /&gt;
#SBATCH --mail-user=email@org.nl&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
time python3 calc_pi.py&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Submitting ===&lt;br /&gt;
The script, assuming it was named &#039;run_calc_pi.sh&#039;, can then be submitted using the following command:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
sbatch run_calc_pi.sh&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Submitting multiple jobs ===&lt;br /&gt;
Assuming there are 10 job scripts, named runscript_1.sh through runscript_10.sh, all of them can be submitted using the following line of shell code:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;for i in `seq 1 10`; do echo $i; sbatch runscript_$i.sh;done&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
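If the ten scripts differ only in an index and the installed Slurm version supports job arrays, an alternative is to submit a single script once as a job array; a minimal sketch (script name and arguments are hypothetical):&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --array=1-10&lt;br /&gt;
# each array task can read its own index from $SLURM_ARRAY_TASK_ID&lt;br /&gt;
./runscript.sh $SLURM_ARRAY_TASK_ID&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;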
&lt;br /&gt;
== Monitoring submitted jobs ==&lt;br /&gt;
Once a job is submitted, the status can be monitored using the &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; command. The &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; command has a number of parameters for monitoring specific properties of the jobs such as time limit.&lt;br /&gt;
&lt;br /&gt;
=== Generic monitoring of all running jobs ===&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
  squeue&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You should then get a list of jobs that are running on the cluster at that time. For the &#039;sbatch&#039; submission example above, it may look like this:&lt;br /&gt;
    JOBID PARTITION     NAME     USER  ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
   3396      ABGC BOV-WUR- megen002   R      27:26      1 node004&lt;br /&gt;
   3397      ABGC BOV-WUR- megen002   R      27:26      1 node005&lt;br /&gt;
   3398      ABGC BOV-WUR- megen002   R      27:26      1 node006&lt;br /&gt;
   3399      ABGC BOV-WUR- megen002   R      27:26      1 node007&lt;br /&gt;
   3400      ABGC BOV-WUR- megen002   R      27:26      1 node008&lt;br /&gt;
   3401      ABGC BOV-WUR- megen002   R      27:26      1 node009&lt;br /&gt;
   3385  research BOV-WUR- megen002   R      44:38      1 node049&lt;br /&gt;
   3386  research BOV-WUR- megen002   R      44:38      1 node050&lt;br /&gt;
   3387  research BOV-WUR- megen002   R      44:38      1 node051&lt;br /&gt;
   3388  research BOV-WUR- megen002   R      44:38      1 node052&lt;br /&gt;
   3389  research BOV-WUR- megen002   R      44:38      1 node053&lt;br /&gt;
   3390  research BOV-WUR- megen002   R      44:38      1 node054&lt;br /&gt;
   3391  research BOV-WUR- megen002   R      44:38      3 node[049-051]&lt;br /&gt;
   3392  research BOV-WUR- megen002   R      44:38      3 node[052-054]&lt;br /&gt;
   3393  research BOV-WUR- megen002   R      44:38      1 node001&lt;br /&gt;
   3394  research BOV-WUR- megen002   R      44:38      1 node002&lt;br /&gt;
   3395  research BOV-WUR- megen002   R      44:38      1 node003&lt;br /&gt;
&lt;br /&gt;
=== Monitoring time limit set for a specific job ===&lt;br /&gt;
The default time limit is set at one hour. Estimated run times need to be specified when submitting jobs. The time limit set for a certain job can be inspected with the &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; command.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
squeue -l -j 3532&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Information similar to the following should appear:&lt;br /&gt;
  Fri Nov 29 15:41:00 2013&lt;br /&gt;
   JOBID PARTITION     NAME     USER    STATE       TIME TIMELIMIT  NODES NODELIST(REASON)&lt;br /&gt;
   3532      ABGC BOV-WUR- megen002  RUNNING    2:47:03 3-08:00:00      1 node054&lt;br /&gt;
&lt;br /&gt;
=== Query a specific active job: scontrol ===&lt;br /&gt;
Show all the details of a currently active job, so not a completed job.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
nfs01 ~]$ scontrol show jobid 4241&lt;br /&gt;
JobId=4241 Name=WB20F06&lt;br /&gt;
   UserId=megen002(16795409) GroupId=domain users(16777729)&lt;br /&gt;
   Priority=1 Account=(null) QOS=normal&lt;br /&gt;
   JobState=RUNNING Reason=None Dependency=(null)&lt;br /&gt;
   Requeue=1 Restarts=0 BatchFlag=1 ExitCode=0:0&lt;br /&gt;
   RunTime=02:55:25 TimeLimit=3-08:00:00 TimeMin=N/A&lt;br /&gt;
   SubmitTime=2013-12-09T13:37:29 EligibleTime=2013-12-09T13:37:29&lt;br /&gt;
   StartTime=2013-12-09T13:37:29 EndTime=2013-12-12T21:37:29&lt;br /&gt;
   PreemptTime=None SuspendTime=None SecsPreSuspend=0&lt;br /&gt;
   Partition=research AllocNode:Sid=nfs01:21799&lt;br /&gt;
   ReqNodeList=(null) ExcNodeList=(null)&lt;br /&gt;
   NodeList=node023&lt;br /&gt;
   BatchHost=node023&lt;br /&gt;
   NumNodes=1 NumCPUs=4 CPUs/Task=1 ReqS:C:T=*:*:*&lt;br /&gt;
   MinCPUsNode=1 MinMemoryNode=0 MinTmpDiskNode=0&lt;br /&gt;
   Features=(null) Gres=(null) Reservation=(null)&lt;br /&gt;
   Shared=OK Contiguous=0 Licenses=(null) Network=(null)&lt;br /&gt;
   Command=/lustre/scratch/WUR/ABGC/...&lt;br /&gt;
   WorkDir=/lustre/scratch/WUR/ABGC/...&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Check on a pending job ===&lt;br /&gt;
A submitted job can end up in a pending state when there are not enough resources available for it.&lt;br /&gt;
In this example I submit a job, check the status and, after finding out that it is &#039;&#039;&#039;pending&#039;&#039;&#039;, check when it will probably start.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
[@nfs01 jobs]$ sbatch hpl_student.job&lt;br /&gt;
 Submitted batch job 740338&lt;br /&gt;
&lt;br /&gt;
[@nfs01 jobs]$ squeue -l -j 740338&lt;br /&gt;
 Fri Feb 21 15:32:31 2014&lt;br /&gt;
  JOBID PARTITION     NAME     USER    STATE       TIME TIMELIMIT  NODES NODELIST(REASON)&lt;br /&gt;
 740338 ABGC_Stud HPLstude bohme999  PENDING       0:00 1-00:00:00      1 (ReqNodeNotAvail)&lt;br /&gt;
&lt;br /&gt;
[@nfs01 jobs]$ squeue --start -j 740338&lt;br /&gt;
  JOBID PARTITION     NAME     USER  ST           START_TIME  NODES NODELIST(REASON)&lt;br /&gt;
 740338 ABGC_Stud HPLstude bohme999  PD  2014-02-22T15:31:48      1 (ReqNodeNotAvail)&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
So it seems this job will probably start the next day, but that is no guarantee that it will actually start then.&lt;br /&gt;
&lt;br /&gt;
== Removing jobs from a list: scancel ==&lt;br /&gt;
If for some reason you want to delete a job that is either in the queue or already running, you can remove it using the &#039;scancel&#039; command. The &#039;scancel&#039; command takes the jobid as a parameter. For the example above, this would be done using the following code:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
scancel 3401&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
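Assuming scancel behaves here as in standard Slurm, all of your own queued and running jobs can be cancelled at once with the user filter:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
scancel -u $USER&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;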
&lt;br /&gt;
== Allocating resources interactively: salloc ==&lt;br /&gt;
It&#039;s possible to set up an interactive session using salloc. Run salloc as follows:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
salloc -p &amp;lt;partition, say, ABGC_Low&amp;gt;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
And because of the magic of SallocDefaultCommand, you will immediately be transported to a new prompt.&lt;br /&gt;
&lt;br /&gt;
Here, run &#039;hostname&#039; to see which node your shell has been transported to.&lt;br /&gt;
&lt;br /&gt;
If you don&#039;t want your shell to be transported but want a new remote shell, do:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
salloc -p ABGC_Low $SHELL&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Now your shell will stay on nfs01, but you can do:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
srun &amp;lt;command&amp;gt; &amp;amp;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
To submit tasks to the new allocation!&lt;br /&gt;
&lt;br /&gt;
Be aware that the default time limit of salloc is 1 hour. If you intend to run jobs for longer than this, you need to request a longer time limit. See: https://computing.llnl.gov/linux/slurm/salloc.html&lt;br /&gt;
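For example, an interactive allocation of eight hours in the ABGC_Low partition could be requested like this:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
salloc -p ABGC_Low -t 8:00:00 $SHELL&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;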
&lt;br /&gt;
== Get overview of past and current jobs: sacct ==&lt;br /&gt;
To do some accounting on past and present jobs, and to see whether they ran to completion, you can do:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
sacct&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
This should provide information similar to the following:&lt;br /&gt;
&lt;br /&gt;
         JobID    JobName  Partition    Account  AllocCPUS      State ExitCode &lt;br /&gt;
  ------------ ---------- ---------- ---------- ---------- ---------- -------- &lt;br /&gt;
  3385         BOV-WUR-58   research                    12  COMPLETED      0:0 &lt;br /&gt;
  3385.batch        batch                                1  COMPLETED      0:0 &lt;br /&gt;
  3386         BOV-WUR-59   research                    12 CANCELLED+      0:0 &lt;br /&gt;
  3386.batch        batch                                1  CANCELLED     0:15 &lt;br /&gt;
  3528         BOV-WUR-59       ABGC                    16    RUNNING      0:0 &lt;br /&gt;
  3529         BOV-WUR-60       ABGC                    16    RUNNING      0:0&lt;br /&gt;
&lt;br /&gt;
Or in more detail for a specific job:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
sacct --format=jobid,jobname,account,partition,ntasks,alloccpus,elapsed,state,exitcode -j 4220&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
This should provide information about job id 4220:&lt;br /&gt;
&lt;br /&gt;
       JobID    JobName    Account  Partition   NTasks  AllocCPUS    Elapsed      State ExitCode &lt;br /&gt;
  ------------ ---------- ---------- ---------- -------- ---------- ---------- ---------- -------- &lt;br /&gt;
  4220         PreProces+              research                   3   00:30:52  COMPLETED      0:0 &lt;br /&gt;
  4220.batch        batch                              1          1   00:30:52  COMPLETED      0:0&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Job Status Codes&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Typically your job will be either in the Running or the PenDing state. However, here is a breakdown of all the states that your job could be in.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!Code!!State!!Description&lt;br /&gt;
|-&lt;br /&gt;
|CA	||CANCELLED||	Job was explicitly cancelled by the user or system administrator. The job may or may not have been initiated.&lt;br /&gt;
|-&lt;br /&gt;
|CD||	COMPLETED||	Job has terminated all processes on all nodes.&lt;br /&gt;
|-&lt;br /&gt;
|CF||	CONFIGURING||	Job has been allocated resources, but is waiting for them to become ready for use (e.g. booting).&lt;br /&gt;
|-&lt;br /&gt;
|CG||	COMPLETING||	Job is in the process of completing. Some processes on some nodes may still be active.&lt;br /&gt;
|-&lt;br /&gt;
|F||	FAILED||	Job terminated with non-zero exit code or other failure condition.&lt;br /&gt;
|-&lt;br /&gt;
|NF||	NODE_FAIL||	Job terminated due to failure of one or more allocated nodes.&lt;br /&gt;
|-&lt;br /&gt;
|PD||	PENDING||	Job is awaiting resource allocation.&lt;br /&gt;
|-&lt;br /&gt;
|R||	RUNNING||	Job currently has an allocation.&lt;br /&gt;
|-&lt;br /&gt;
|S||	SUSPENDED||	Job has an allocation, but execution has been suspended.&lt;br /&gt;
|-&lt;br /&gt;
|TO||	TIMEOUT||	Job terminated upon reaching its time limit.&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Running MPI jobs on B4F cluster ==&lt;br /&gt;
&lt;br /&gt;
[[MPI_on_B4F_cluster | Main article: MPI on B4F Cluster]]&lt;br /&gt;
&amp;lt; text here &amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Understanding which resources are available to you: sinfo ==&lt;br /&gt;
By using the &#039;sinfo&#039; command you can retrieve information on which &#039;Partitions&#039; are available to you. A &#039;Partition&#039; in SLURM is similar to a &#039;queue&#039; in the Sun Grid Engine (&#039;qsub&#039;). The different Partitions grant different levels of resource allocation. Not all defined Partitions will be available to any given person. E.g., Master students will only have the Student Partition available, while researchers at the ABGC will have the &#039;student&#039;, &#039;research&#039;, and &#039;ABGC&#039; partitions available. The higher the level of resource allocation, though, the higher the cost per compute-hour. The default Partition is the &#039;student&#039; partition. A full list of Partitions can be found on the Bright Cluster Manager webpage.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
sinfo&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
  PARTITION AVAIL  TIMELIMIT  NODES  STATE NODELIST&lt;br /&gt;
  student*     up   infinite     12  down* node[043-048,055-060]&lt;br /&gt;
  student*     up   infinite     50   idle fat[001-002],node[001-042,049-054]&lt;br /&gt;
  research     up   infinite     12  down* node[043-048,055-060]&lt;br /&gt;
  research     up   infinite     50   idle fat[001-002],node[001-042,049-054]&lt;br /&gt;
  ABGC         up   infinite     12  down* node[043-048,055-060]&lt;br /&gt;
  ABGC         up   infinite     50   idle fat[001-002],node[001-042,049-054]&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
* [[B4F_cluster | B4F Cluster]]&lt;br /&gt;
* [[BCM_on_B4F_cluster | BCM on B4F cluster]]&lt;br /&gt;
* [[SLURM_Compare | SLURM compared to other common schedulers]]&lt;br /&gt;
* [[Setting_up_Python_virtualenv | Setting up and using a virtual environment for Python3 ]]&lt;br /&gt;
&lt;br /&gt;
== External links ==&lt;br /&gt;
* [http://slurm.schedmd.com Slurm official documentation]&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Simple_Linux_Utility_for_Resource_Management Slurm on Wikipedia]&lt;br /&gt;
* [http://www.youtube.com/watch?v=axWffyrk3aY Slurm Tutorial on Youtube]&lt;/div&gt;</summary>
		<author><name>Megen002</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=Creating_sbatch_script&amp;diff=1773</id>
		<title>Creating sbatch script</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=Creating_sbatch_script&amp;diff=1773"/>
		<updated>2017-06-07T14:12:05Z</updated>

		<summary type="html">&lt;p&gt;Megen002: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;A skeleton Slurm script:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#-----------------------------Mail address-----------------------------&lt;br /&gt;
#SBATCH --mail-user=&lt;br /&gt;
#SBATCH --mail-type=ALL&lt;br /&gt;
#-----------------------------Output files-----------------------------&lt;br /&gt;
#SBATCH --output=output_%j.txt&lt;br /&gt;
#SBATCH --error=error_output_%j.txt&lt;br /&gt;
#-----------------------------Other information------------------------&lt;br /&gt;
#SBATCH --comment=&lt;br /&gt;
##SBATCH --account=&lt;br /&gt;
#-----------------------------Required resources-----------------------&lt;br /&gt;
#SBATCH --partition=&lt;br /&gt;
#SBATCH --time=0-0:0:0&lt;br /&gt;
#SBATCH --ntasks=&lt;br /&gt;
#SBATCH --cpus-per-task=&lt;br /&gt;
#SBATCH --mem-per-cpu=&lt;br /&gt;
&lt;br /&gt;
#-----------------------------Environment, Operations and Job steps----&lt;br /&gt;
#load modules&lt;br /&gt;
&lt;br /&gt;
#export variables&lt;br /&gt;
&lt;br /&gt;
#your job&lt;br /&gt;
&lt;br /&gt;
              &lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Explanation of used SBATCH parameters:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --account=773320000&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Charge resources used by this job to the specified account. The account is an arbitrary string. The account name may be changed after job submission using the &amp;lt;tt&amp;gt;scontrol&amp;lt;/tt&amp;gt; command. For WUR users a project number or KTP number is advisable.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --time=1200&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Set a limit on the total run time of the job. A time limit of zero requests that no time limit be imposed. Acceptable time formats include &amp;quot;minutes&amp;quot;, &amp;quot;minutes:seconds&amp;quot;, &amp;quot;hours:minutes:seconds&amp;quot;, &amp;quot;days-hours&amp;quot;, &amp;quot;days-hours:minutes&amp;quot; and &amp;quot;days-hours:minutes:seconds&amp;quot;. A plain number is interpreted as minutes, so in this example the job will run for a maximum of 1200 minutes.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --mem=2048&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
SLURM imposes a memory limit on each job. By default, it is deliberately relatively small — 100 MB per node. If your job uses more than that, you’ll get an error that your job Exceeded job memory limit. To set a larger limit, add to your job submission: &lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --mem X&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where X is the maximum amount of memory your job will use per node, in MB. The larger your working data set, the larger this needs to be, but the smaller the number the easier it is for the scheduler to find a place to run your job. To determine an appropriate value, start relatively large (job slots on average have about 4000 MB per core, but that’s much larger than needed for most jobs) and then use sacct to look at how much your job is actually using or used:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
$ sacct -o MaxRSS -j JOBID&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
where JOBID is the one you’re interested in. The number is in KB, so divide by 1024 to get a rough idea of what to use with --mem (set it to something a little larger than that, since you’re defining a hard upper limit). If your job completed long in the past you may have to tell sacct to look further back in time by adding a start time with -S YYYY-MM-DD. Note that for parallel jobs spanning multiple nodes, this is the maximum memory used on any one node; if you’re not setting an even distribution of tasks per node (e.g. with --ntasks-per-node), the same job could have very different values when run at different times.&lt;br /&gt;
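For example, with a hypothetical MaxRSS value of 3200000 (KB), that is roughly 3125 MB, so a request with a little headroom could look like:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --mem=3500&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;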
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --ntasks=1&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
sbatch does not launch tasks; it requests an allocation of resources and submits a batch script. This option advises the SLURM controller that job steps run within the allocation will launch at most this number of tasks, so that sufficient resources can be provided. The default is one task per node, but note that the --cpus-per-task option will change this default.&lt;br /&gt;
&lt;br /&gt;
When requesting multiple tasks, you may or may not want the job to be spread over multiple nodes. You can specify the number of nodes using the &amp;lt;code&amp;gt;-N&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;--nodes&amp;lt;/code&amp;gt; flag. If you provide only one number, it is used as both the minimum and the maximum. For instance:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
This should force your job to be scheduled to a single node.&lt;br /&gt;
&lt;br /&gt;
Because the cluster has a hybrid configuration, i.e. normal and fat nodes, it may be prudent to schedule your job specifically for one or the other node type, depending, for instance, on memory requirements. This can be done by using the &amp;lt;code&amp;gt;-C&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;--constraint&amp;lt;/code&amp;gt; flag.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --constraint=normalmem&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
The example above will result in jobs being scheduled to the regular compute nodes. By using &amp;lt;code&amp;gt;largemem&amp;lt;/code&amp;gt; as option the job will specifically be scheduled to one of the fat nodes. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --output=output_%j.txt&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Instruct SLURM to connect the batch script&#039;s standard output directly to the file name specified in the &amp;quot;filename pattern&amp;quot;. By default both standard output and standard error are directed to a file of the name &amp;quot;slurm-%j.out&amp;quot;, where the &amp;quot;%j&amp;quot; is replaced with the job allocation number. See the --input option for filename specification options.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --error=error_output_%j.txt&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Instruct SLURM to connect the batch script&#039;s standard error directly to the file name specified in the &amp;quot;filename pattern&amp;quot;. By default both standard output and standard error are directed to a file of the name &amp;quot;slurm-%j.out&amp;quot;, where the &amp;quot;%j&amp;quot; is replaced with the job allocation number. See the --input option for filename specification options.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --job-name=calc_pi.py&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Specify a name for the job allocation. The specified name will appear along with the job id number when querying running jobs on the system. The default is the name of the batch script, or just &amp;quot;sbatch&amp;quot; if the script is read on sbatch&#039;s standard input.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --partition=ABGC_Std&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Request a specific partition for the resource allocation. It is preferred to use your organization&#039;s partition.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --mail-type=ALL&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Notify user by email when certain event types occur. Valid type values are BEGIN, END, FAIL, REQUEUE, and ALL (any state change). The user to be notified is indicated with --mail-user.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --mail-user=email@org.nl&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Email address to use.&lt;/div&gt;</summary>
		<author><name>Megen002</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=Creating_sbatch_script&amp;diff=1772</id>
		<title>Creating sbatch script</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=Creating_sbatch_script&amp;diff=1772"/>
		<updated>2017-06-07T13:06:27Z</updated>

		<summary type="html">&lt;p&gt;Megen002: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;A skeleton Slurm script:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#-----------------------------Mail address-----------------------------&lt;br /&gt;
#SBATCH --mail-user=&lt;br /&gt;
#SBATCH --mail-type=ALL&lt;br /&gt;
#-----------------------------Output files-----------------------------&lt;br /&gt;
#SBATCH --output=output_%j.txt&lt;br /&gt;
#SBATCH --error=error_output_%j.txt&lt;br /&gt;
#-----------------------------Other information------------------------&lt;br /&gt;
#SBATCH --comment=&lt;br /&gt;
##SBATCH --account=&lt;br /&gt;
#-----------------------------Required resources-----------------------&lt;br /&gt;
#SBATCH --partition=&lt;br /&gt;
#SBATCH --time=0-0:0:0&lt;br /&gt;
#SBATCH --ntasks=&lt;br /&gt;
#SBATCH --cpus-per-task=&lt;br /&gt;
#SBATCH --mem-per-cpu=&lt;br /&gt;
&lt;br /&gt;
#-----------------------------Environment, Operations and Job steps----&lt;br /&gt;
#load modules&lt;br /&gt;
&lt;br /&gt;
#export variables&lt;br /&gt;
&lt;br /&gt;
#your job&lt;br /&gt;
&lt;br /&gt;
              &lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;/div&gt;</summary>
		<author><name>Megen002</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=Creating_sbatch_script&amp;diff=1771</id>
		<title>Creating sbatch script</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=Creating_sbatch_script&amp;diff=1771"/>
		<updated>2017-06-07T13:06:08Z</updated>

		<summary type="html">&lt;p&gt;Megen002: Created page with &amp;quot;A skeletion Slurm script:   &amp;lt;source lang=&amp;#039;bash&amp;#039;&amp;gt;  #-----------------------------Mail address----------------------------- #SBATCH --mail-user= #SBATCH --mail-type=ALL #-------...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;A skeleton Slurm script:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#-----------------------------Mail address-----------------------------&lt;br /&gt;
#SBATCH --mail-user=&lt;br /&gt;
#SBATCH --mail-type=ALL&lt;br /&gt;
#-----------------------------Output files-----------------------------&lt;br /&gt;
#SBATCH --output=output_%j.txt&lt;br /&gt;
#SBATCH --error=error_output_%j.txt&lt;br /&gt;
#-----------------------------Other information------------------------&lt;br /&gt;
#SBATCH --comment=&lt;br /&gt;
##SBATCH --account=&lt;br /&gt;
#-----------------------------Required resources-----------------------&lt;br /&gt;
#SBATCH --partition=&lt;br /&gt;
#SBATCH --time=0-0:0:0&lt;br /&gt;
#SBATCH --ntasks=&lt;br /&gt;
#SBATCH --cpus-per-task=&lt;br /&gt;
#SBATCH --mem-per-cpu=&lt;br /&gt;
&lt;br /&gt;
#-----------------------------Environment, Operations and Job steps----&lt;br /&gt;
#load modules&lt;br /&gt;
&lt;br /&gt;
#export variables&lt;br /&gt;
&lt;br /&gt;
#your job&lt;br /&gt;
&lt;br /&gt;
              &lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;/div&gt;</summary>
		<author><name>Megen002</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=Using_Slurm&amp;diff=1770</id>
		<title>Using Slurm</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=Using_Slurm&amp;diff=1770"/>
		<updated>2017-06-07T13:01:18Z</updated>

		<summary type="html">&lt;p&gt;Megen002: /* Batch script */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The resource allocation / scheduling software on the B4F Cluster is [http://en.wikipedia.org/wiki/Simple_Linux_Utility_for_Resource_Management SLURM]: &#039;&#039;&#039;S&#039;&#039;&#039;imple &#039;&#039;&#039;L&#039;&#039;&#039;inux &#039;&#039;&#039;U&#039;&#039;&#039;tility for &#039;&#039;&#039;R&#039;&#039;&#039;esource &#039;&#039;&#039;M&#039;&#039;&#039;anagement.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Queues and defaults ==&lt;br /&gt;
&lt;br /&gt;
=== Queues ===&lt;br /&gt;
Every organization has 3 queues (in Slurm called partitions): a high, a standard and a low priority queue.&amp;lt;br&amp;gt;&lt;br /&gt;
The High queue gives jobs the highest priority (20), followed by the standard queue (10) and the Low queue (0).&amp;lt;br&amp;gt;&lt;br /&gt;
Low queue jobs will be resubmitted if a job with higher priority needs cluster resources that are occupied by Low queue jobs.&lt;br /&gt;
To find out which queues your account has been authorized for, type sinfo:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
PARTITION       AVAIL  TIMELIMIT  NODES  STATE NODELIST&lt;br /&gt;
ABGC_High      up   infinite     12  down* node[043-048,055-060]&lt;br /&gt;
ABGC_High      up   infinite      6    mix fat[001-002],node[002-005]&lt;br /&gt;
ABGC_High      up   infinite     44   idle node[001,006-042,049-054]&lt;br /&gt;
ABGC_Std       up   infinite     12  down* node[043-048,055-060]&lt;br /&gt;
ABGC_Std       up   infinite      6    mix fat[001-002],node[002-005]&lt;br /&gt;
ABGC_Std       up   infinite     44   idle node[001,006-042,049-054]&lt;br /&gt;
ABGC_Low       up   infinite     12  down* node[043-048,055-060]&lt;br /&gt;
ABGC_Low       up   infinite      6    mix fat[001-002],node[002-005]&lt;br /&gt;
ABGC_Low       up   infinite     44   idle node[001,006-042,049-054]&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Defaults ===&lt;br /&gt;
There is no default queue, so you need to specify which queue to use when submitting a job.&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;The default run time for a job is 1 hour!&#039;&#039;&#039; &amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Default memory limit is 100MB per node!&#039;&#039;&#039;&lt;br /&gt;
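For example, the queue, run time and memory can all be set explicitly in the job script header (the values below are placeholders):&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --partition=ABGC_Std&lt;br /&gt;
#SBATCH --time=4:00:00&lt;br /&gt;
#SBATCH --mem=2048&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;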
&lt;br /&gt;
== Submitting jobs: sbatch ==&lt;br /&gt;
&lt;br /&gt;
=== Example ===&lt;br /&gt;
Consider this simple Python 3 script that calculates digits of Pi:&lt;br /&gt;
&amp;lt;source lang=&#039;python&#039;&amp;gt;&lt;br /&gt;
from decimal import *&lt;br /&gt;
D=Decimal&lt;br /&gt;
getcontext().prec=10000000&lt;br /&gt;
p=sum(D(1)/16**k*(D(4)/(8*k+1)-D(2)/(8*k+4)-D(1)/(8*k+5)-D(1)/(8*k+6))for k in range(411))&lt;br /&gt;
print(str(p)[:10000002])&lt;br /&gt;
&amp;lt;/source&amp;gt; &lt;br /&gt;
&lt;br /&gt;
=== Loading modules ===&lt;br /&gt;
In order for this script to run, Python 3, which is not the default Python version on the cluster, first needs to be loaded into your environment. Availability of (different versions of) software can be checked with the following command:&lt;br /&gt;
  module avail&lt;br /&gt;
&lt;br /&gt;
The list should show that python3 is indeed available; it can then be loaded with the following command:&lt;br /&gt;
  module load python/3.3.3&lt;br /&gt;
&lt;br /&gt;
=== Batch script ===&lt;br /&gt;
[[Creating_sbatch_script | Main Article: Creating a batch script]]&lt;br /&gt;
&lt;br /&gt;
The following shell/slurm script can then be used to schedule the job using the sbatch command:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --account=773320000&lt;br /&gt;
#SBATCH --time=1200&lt;br /&gt;
#SBATCH --mem=2048&lt;br /&gt;
#SBATCH --ntasks=1&lt;br /&gt;
#SBATCH --output=output_%j.txt&lt;br /&gt;
#SBATCH --error=error_output_%j.txt&lt;br /&gt;
#SBATCH --job-name=calc_pi.py&lt;br /&gt;
#SBATCH --partition=ABGC_Std&lt;br /&gt;
#SBATCH --mail-type=ALL&lt;br /&gt;
#SBATCH --mail-user=email@org.nl&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
time python3 calc_pi.py&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Explanation of used SBATCH parameters:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --account=773320000&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Charge resources used by this job to the specified account. The account is an arbitrary string. The account name may be changed after job submission using the &amp;lt;tt&amp;gt;scontrol&amp;lt;/tt&amp;gt; command. For WUR users a project number or KTP number is advisable.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --time=1200&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Set a limit on the total run time of the job. A time limit of zero requests that no time limit be imposed. Acceptable time formats include &amp;quot;minutes&amp;quot;, &amp;quot;minutes:seconds&amp;quot;, &amp;quot;hours:minutes:seconds&amp;quot;, &amp;quot;days-hours&amp;quot;, &amp;quot;days-hours:minutes&amp;quot; and &amp;quot;days-hours:minutes:seconds&amp;quot;. A plain number is interpreted as minutes, so in this example the job will run for a maximum of 1200 minutes.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --mem=2048&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
SLURM imposes a memory limit on each job. By default, it is deliberately relatively small — 100 MB per node. If your job uses more than that, you’ll get an error that your job Exceeded job memory limit. To set a larger limit, add to your job submission: &lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --mem X&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where X is the maximum amount of memory your job will use per node, in MB. The larger your working data set, the larger this needs to be, but the smaller the number the easier it is for the scheduler to find a place to run your job. To determine an appropriate value, start relatively large (job slots on average have about 4000 MB per core, but that’s much larger than needed for most jobs) and then use sacct to look at how much your job is actually using or used:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
$ sacct -o MaxRSS -j JOBID&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
where JOBID is the one you’re interested in. The number is in KB, so divide by 1024 to get a rough idea of what to use with --mem (set it to something a little larger than that, since you’re defining a hard upper limit). If your job completed long in the past you may have to tell sacct to look further back in time by adding a start time with -S YYYY-MM-DD. Note that for parallel jobs spanning multiple nodes, this is the maximum memory used on any one node; if you’re not setting an even distribution of tasks per node (e.g. with --ntasks-per-node), the same job could have very different values when run at different times.&lt;br /&gt;
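For example, with a hypothetical MaxRSS value of 3200000 (KB), that is roughly 3125 MB, so a request with a little headroom could look like:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --mem=3500&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;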
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --ntasks=1&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
sbatch does not launch tasks; it requests an allocation of resources and submits a batch script. This option advises the SLURM controller that job steps run within the allocation will launch at most this number of tasks, so that sufficient resources can be provided. The default is one task per node, but note that the --cpus-per-task option will change this default.&lt;br /&gt;
&lt;br /&gt;
When requesting multiple tasks, you may or may not want the job to be spread over multiple nodes. You can specify the number of nodes using the &amp;lt;code&amp;gt;-N&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;--nodes&amp;lt;/code&amp;gt; flag. If you provide only one number, it is used as both the minimum and the maximum. For instance:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
This should force your job to be scheduled to a single node.&lt;br /&gt;
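As a further illustration (the values are hypothetical), a job running four tasks spread over exactly two nodes could request:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --ntasks=4&lt;br /&gt;
#SBATCH --nodes=2&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;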
&lt;br /&gt;
Because the cluster has a hybrid configuration, i.e. normal and fat nodes, it may be prudent to schedule your job specifically for one or the other node type, depending, for instance, on memory requirements. This can be done by using the &amp;lt;code&amp;gt;-C&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;--constraint&amp;lt;/code&amp;gt; flag.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --constraint=normalmem&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
The example above will result in jobs being scheduled to the regular compute nodes. By using &amp;lt;code&amp;gt;largemem&amp;lt;/code&amp;gt; as option the job will specifically be scheduled to one of the fat nodes. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --output=output_%j.txt&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Instruct SLURM to connect the batch script&#039;s standard output directly to the file name specified in the &amp;quot;filename pattern&amp;quot;. By default both standard output and standard error are directed to a file of the name &amp;quot;slurm-%j.out&amp;quot;, where the &amp;quot;%j&amp;quot; is replaced with the job allocation number. See the --input option for filename specification options.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --error=error_output_%j.txt&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Instruct SLURM to connect the batch script&#039;s standard error directly to the file name specified in the &amp;quot;filename pattern&amp;quot;. By default both standard output and standard error are directed to a file of the name &amp;quot;slurm-%j.out&amp;quot;, where the &amp;quot;%j&amp;quot; is replaced with the job allocation number. See the --input option for filename specification options.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --job-name=calc_pi.py&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Specify a name for the job allocation. The specified name will appear along with the job id number when querying running jobs on the system. The default is the name of the batch script, or just &amp;quot;sbatch&amp;quot; if the script is read on sbatch&#039;s standard input.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --partition=ABGC_Std&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Request a specific partition for the resource allocation. It is preferred to use your organization&#039;s partition.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --mail-type=ALL&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Notify user by email when certain event types occur. Valid type values are BEGIN, END, FAIL, REQUEUE, and ALL (any state change). The user to be notified is indicated with --mail-user.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --mail-user=email@org.nl&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Email address to use.&lt;br /&gt;
&lt;br /&gt;
=== Submitting ===&lt;br /&gt;
The script, assuming it was named &#039;run_calc_pi.sh&#039;, can then be submitted using the following command:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
sbatch run_calc_pi.sh&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Submitting multiple jobs ===&lt;br /&gt;
Assuming there are 10 job scripts, named runscript_1.sh through runscript_10.sh, all of them can be submitted using the following line of shell code:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;for i in `seq 1 10`; do echo $i; sbatch runscript_$i.sh;done&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Monitoring submitted jobs ==&lt;br /&gt;
Once a job is submitted, the status can be monitored using the &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; command. The &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; command has a number of parameters for monitoring specific properties of the jobs such as time limit.&lt;br /&gt;
&lt;br /&gt;
=== Generic monitoring of all running jobs ===&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
  squeue&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You should then get a list of jobs that are running on the cluster at that time. For the &#039;sbatch&#039; submission example above, it may look like this:&lt;br /&gt;
    JOBID PARTITION     NAME     USER  ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
   3396      ABGC BOV-WUR- megen002   R      27:26      1 node004&lt;br /&gt;
   3397      ABGC BOV-WUR- megen002   R      27:26      1 node005&lt;br /&gt;
   3398      ABGC BOV-WUR- megen002   R      27:26      1 node006&lt;br /&gt;
   3399      ABGC BOV-WUR- megen002   R      27:26      1 node007&lt;br /&gt;
   3400      ABGC BOV-WUR- megen002   R      27:26      1 node008&lt;br /&gt;
   3401      ABGC BOV-WUR- megen002   R      27:26      1 node009&lt;br /&gt;
   3385  research BOV-WUR- megen002   R      44:38      1 node049&lt;br /&gt;
   3386  research BOV-WUR- megen002   R      44:38      1 node050&lt;br /&gt;
   3387  research BOV-WUR- megen002   R      44:38      1 node051&lt;br /&gt;
   3388  research BOV-WUR- megen002   R      44:38      1 node052&lt;br /&gt;
   3389  research BOV-WUR- megen002   R      44:38      1 node053&lt;br /&gt;
   3390  research BOV-WUR- megen002   R      44:38      1 node054&lt;br /&gt;
   3391  research BOV-WUR- megen002   R      44:38      3 node[049-051]&lt;br /&gt;
   3392  research BOV-WUR- megen002   R      44:38      3 node[052-054]&lt;br /&gt;
   3393  research BOV-WUR- megen002   R      44:38      1 node001&lt;br /&gt;
   3394  research BOV-WUR- megen002   R      44:38      1 node002&lt;br /&gt;
   3395  research BOV-WUR- megen002   R      44:38      1 node003&lt;br /&gt;
&lt;br /&gt;
=== Monitoring time limit set for a specific job ===&lt;br /&gt;
The default time limit is set at one hour. Estimated run times need to be specified when submitting jobs. The time limit set for a certain job can be inspected with the &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; command.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
squeue -l -j 3532&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Information similar to the following should appear:&lt;br /&gt;
  Fri Nov 29 15:41:00 2013&lt;br /&gt;
   JOBID PARTITION     NAME     USER    STATE       TIME TIMELIMIT  NODES NODELIST(REASON)&lt;br /&gt;
   3532      ABGC BOV-WUR- megen002  RUNNING    2:47:03 3-08:00:00      1 node054&lt;br /&gt;
&lt;br /&gt;
=== Query a specific active job: scontrol ===&lt;br /&gt;
Show all the details of a currently active job, so not a completed job.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
nfs01 ~]$ scontrol show jobid 4241&lt;br /&gt;
JobId=4241 Name=WB20F06&lt;br /&gt;
   UserId=megen002(16795409) GroupId=domain users(16777729)&lt;br /&gt;
   Priority=1 Account=(null) QOS=normal&lt;br /&gt;
   JobState=RUNNING Reason=None Dependency=(null)&lt;br /&gt;
   Requeue=1 Restarts=0 BatchFlag=1 ExitCode=0:0&lt;br /&gt;
   RunTime=02:55:25 TimeLimit=3-08:00:00 TimeMin=N/A&lt;br /&gt;
   SubmitTime=2013-12-09T13:37:29 EligibleTime=2013-12-09T13:37:29&lt;br /&gt;
   StartTime=2013-12-09T13:37:29 EndTime=2013-12-12T21:37:29&lt;br /&gt;
   PreemptTime=None SuspendTime=None SecsPreSuspend=0&lt;br /&gt;
   Partition=research AllocNode:Sid=nfs01:21799&lt;br /&gt;
   ReqNodeList=(null) ExcNodeList=(null)&lt;br /&gt;
   NodeList=node023&lt;br /&gt;
   BatchHost=node023&lt;br /&gt;
   NumNodes=1 NumCPUs=4 CPUs/Task=1 ReqS:C:T=*:*:*&lt;br /&gt;
   MinCPUsNode=1 MinMemoryNode=0 MinTmpDiskNode=0&lt;br /&gt;
   Features=(null) Gres=(null) Reservation=(null)&lt;br /&gt;
   Shared=OK Contiguous=0 Licenses=(null) Network=(null)&lt;br /&gt;
   Command=/lustre/scratch/WUR/ABGC/...&lt;br /&gt;
   WorkDir=/lustre/scratch/WUR/ABGC/...&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Check on a pending job ===&lt;br /&gt;
A submitted job can end up in a pending state when there are not enough resources available for it.&lt;br /&gt;
In this example I submit a job, check the status and, after finding out that it is &#039;&#039;&#039;pending&#039;&#039;&#039;, check when it will probably start.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
[@nfs01 jobs]$ sbatch hpl_student.job&lt;br /&gt;
 Submitted batch job 740338&lt;br /&gt;
&lt;br /&gt;
[@nfs01 jobs]$ squeue -l -j 740338&lt;br /&gt;
 Fri Feb 21 15:32:31 2014&lt;br /&gt;
  JOBID PARTITION     NAME     USER    STATE       TIME TIMELIMIT  NODES NODELIST(REASON)&lt;br /&gt;
 740338 ABGC_Stud HPLstude bohme999  PENDING       0:00 1-00:00:00      1 (ReqNodeNotAvail)&lt;br /&gt;
&lt;br /&gt;
[@nfs01 jobs]$ squeue --start -j 740338&lt;br /&gt;
  JOBID PARTITION     NAME     USER  ST           START_TIME  NODES NODELIST(REASON)&lt;br /&gt;
 740338 ABGC_Stud HPLstude bohme999  PD  2014-02-22T15:31:48      1 (ReqNodeNotAvail)&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
So it seems this job will probably start the next day, but that is no guarantee that it will actually start then.&lt;br /&gt;
&lt;br /&gt;
== Removing jobs from a list: scancel ==&lt;br /&gt;
If for some reason you want to delete a job that is either in the queue or already running, you can remove it using the &#039;scancel&#039; command. The &#039;scancel&#039; command takes the jobid as a parameter. For the example above, this would be done using the following code:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
scancel 3401&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
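Assuming scancel behaves here as in standard Slurm, all of your own queued and running jobs can be cancelled at once with the user filter:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
scancel -u $USER&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;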
&lt;br /&gt;
== Allocating resources interactively: salloc ==&lt;br /&gt;
It&#039;s possible to set up an interactive session using salloc. Run salloc as follows:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
salloc -p &amp;lt;partition, say, ABGC_Low&amp;gt;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
And because of the magic of SallocDefaultCommand, you will immediately be transported to a new prompt.&lt;br /&gt;
&lt;br /&gt;
Here, run &#039;hostname&#039; to see which node your shell has been transported to.&lt;br /&gt;
&lt;br /&gt;
If you don&#039;t want your shell to be transported but want a new remote shell, do:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
salloc -p ABGC_Low $SHELL&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Now your shell will stay on nfs01, but you can do:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
srun &amp;lt;command&amp;gt; &amp;amp;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
To submit tasks to the new allocation!&lt;br /&gt;
&lt;br /&gt;
Be aware that the default time limit of salloc is 1 hour. If you intend to run jobs for longer than this, you need to request a longer time limit. See: https://computing.llnl.gov/linux/slurm/salloc.html&lt;br /&gt;
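For example, an interactive allocation of eight hours in the ABGC_Low partition could be requested like this:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
salloc -p ABGC_Low -t 8:00:00 $SHELL&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;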
&lt;br /&gt;
== Get overview of past and current jobs: sacct ==&lt;br /&gt;
To do some accounting on past and present jobs, and to see whether they ran to completion, you can do:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
sacct&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
This should provide information similar to the following:&lt;br /&gt;
&lt;br /&gt;
         JobID    JobName  Partition    Account  AllocCPUS      State ExitCode &lt;br /&gt;
  ------------ ---------- ---------- ---------- ---------- ---------- -------- &lt;br /&gt;
  3385         BOV-WUR-58   research                    12  COMPLETED      0:0 &lt;br /&gt;
  3385.batch        batch                                1  COMPLETED      0:0 &lt;br /&gt;
  3386         BOV-WUR-59   research                    12 CANCELLED+      0:0 &lt;br /&gt;
  3386.batch        batch                                1  CANCELLED     0:15 &lt;br /&gt;
  3528         BOV-WUR-59       ABGC                    16    RUNNING      0:0 &lt;br /&gt;
  3529         BOV-WUR-60       ABGC                    16    RUNNING      0:0&lt;br /&gt;
&lt;br /&gt;
Or in more detail for a specific job:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
sacct --format=jobid,jobname,account,partition,ntasks,alloccpus,elapsed,state,exitcode -j 4220&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
This should provide information about job id 4220:&lt;br /&gt;
&lt;br /&gt;
       JobID    JobName    Account  Partition   NTasks  AllocCPUS    Elapsed      State ExitCode &lt;br /&gt;
  ------------ ---------- ---------- ---------- -------- ---------- ---------- ---------- -------- &lt;br /&gt;
  4220         PreProces+              research                   3   00:30:52  COMPLETED      0:0 &lt;br /&gt;
  4220.batch        batch                              1          1   00:30:52  COMPLETED      0:0&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Job Status Codes&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Typically your job will be either in the Running or the PenDing state. However, here is a breakdown of all the states that your job could be in.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!Code!!State!!Description&lt;br /&gt;
|-&lt;br /&gt;
|CA	||CANCELLED||	Job was explicitly cancelled by the user or system administrator. The job may or may not have been initiated.&lt;br /&gt;
|-&lt;br /&gt;
|CD||	COMPLETED||	Job has terminated all processes on all nodes.&lt;br /&gt;
|-&lt;br /&gt;
|CF||	CONFIGURING||	Job has been allocated resources, but is waiting for them to become ready for use (e.g. booting).&lt;br /&gt;
|-&lt;br /&gt;
|CG||	COMPLETING||	Job is in the process of completing. Some processes on some nodes may still be active.&lt;br /&gt;
|-&lt;br /&gt;
|F||	FAILED||	Job terminated with non-zero exit code or other failure condition.&lt;br /&gt;
|-&lt;br /&gt;
|NF||	NODE_FAIL||	Job terminated due to failure of one or more allocated nodes.&lt;br /&gt;
|-&lt;br /&gt;
|PD||	PENDING||	Job is awaiting resource allocation.&lt;br /&gt;
|-&lt;br /&gt;
|R||	RUNNING||	Job currently has an allocation.&lt;br /&gt;
|-&lt;br /&gt;
|S||	SUSPENDED||	Job has an allocation, but execution has been suspended.&lt;br /&gt;
|-&lt;br /&gt;
|TO||	TIMEOUT||	Job terminated upon reaching its time limit.&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
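&lt;br /&gt;
These codes can also be used to filter listings. A minimal sketch (the start date is just an example value):&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
# show only your own pending jobs&lt;br /&gt;
squeue -u $USER -t PENDING&lt;br /&gt;
# list your failed jobs recorded since the given start date&lt;br /&gt;
sacct -s FAILED -S 2014-01-01&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;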
&lt;br /&gt;
== Running MPI jobs on B4F cluster ==&lt;br /&gt;
&lt;br /&gt;
[[MPI_on_B4F_cluster | Main article: MPI on B4F Cluster]]&lt;br /&gt;
&amp;lt; text here &amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Understanding which resources are available to you: sinfo ==&lt;br /&gt;
By using the &#039;sinfo&#039; command you can retrieve information on which &#039;Partitions&#039; are available to you. A &#039;Partition&#039; in SLURM is similar to a &#039;queue&#039; in the Sun Grid Engine (&#039;qsub&#039;). The different Partitions grant different levels of resource allocation. Not all defined Partitions will be available to any given person. E.g., Master students will only have the &#039;student&#039; Partition available, while researchers at the ABGC will have the &#039;student&#039;, &#039;research&#039;, and &#039;ABGC&#039; Partitions available. The higher the level of resource allocation, though, the higher the cost per compute-hour. The default Partition is the &#039;student&#039; partition. A full list of Partitions can be found on the Bright Cluster Manager webpage.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
sinfo&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
  PARTITION AVAIL  TIMELIMIT  NODES  STATE NODELIST&lt;br /&gt;
  student*     up   infinite     12  down* node[043-048,055-060]&lt;br /&gt;
  student*     up   infinite     50   idle fat[001-002],node[001-042,049-054]&lt;br /&gt;
  research     up   infinite     12  down* node[043-048,055-060]&lt;br /&gt;
  research     up   infinite     50   idle fat[001-002],node[001-042,049-054]&lt;br /&gt;
  ABGC         up   infinite     12  down* node[043-048,055-060]&lt;br /&gt;
  ABGC         up   infinite     50   idle fat[001-002],node[001-042,049-054]&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
* [[B4F_cluster | B4F Cluster]]&lt;br /&gt;
* [[BCM_on_B4F_cluster | BCM on B4F cluster]]&lt;br /&gt;
* [[SLURM_Compare | SLURM compared to other common schedulers]]&lt;br /&gt;
* [[Setting_up_Python_virtualenv | Setting up and using a virtual environment for Python3 ]]&lt;br /&gt;
&lt;br /&gt;
== External links ==&lt;br /&gt;
* [http://slurm.schedmd.com Slurm official documentation]&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Simple_Linux_Utility_for_Resource_Management Slurm on Wikipedia]&lt;br /&gt;
* [http://www.youtube.com/watch?v=axWffyrk3aY Slurm Tutorial on Youtube]&lt;/div&gt;</summary>
		<author><name>Megen002</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=Using_Slurm&amp;diff=1769</id>
		<title>Using Slurm</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=Using_Slurm&amp;diff=1769"/>
		<updated>2017-06-07T12:53:07Z</updated>

		<summary type="html">&lt;p&gt;Megen002: /* Submitting jobs: sbatch */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The resource allocation / scheduling software on the B4F Cluster is [http://en.wikipedia.org/wiki/Simple_Linux_Utility_for_Resource_Management SLURM]: &#039;&#039;&#039;S&#039;&#039;&#039;imple &#039;&#039;&#039;L&#039;&#039;&#039;inux &#039;&#039;&#039;U&#039;&#039;&#039;tility for &#039;&#039;&#039;R&#039;&#039;&#039;esource &#039;&#039;&#039;M&#039;&#039;&#039;anagement.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Queues and defaults ==&lt;br /&gt;
&lt;br /&gt;
=== Queues ===&lt;br /&gt;
Every organization has 3 queues (in Slurm called partitions): a high, a standard and a low priority queue.&amp;lt;br&amp;gt;&lt;br /&gt;
The high queue gives jobs the highest priority (20), followed by the standard queue (10). In the low priority queue (0),&amp;lt;br&amp;gt;&lt;br /&gt;
jobs will be resubmitted if a job with higher priority needs cluster resources and those resources are occupied by low queue jobs.&lt;br /&gt;
To find out which queues your account has been authorized for, type sinfo:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
PARTITION       AVAIL  TIMELIMIT  NODES  STATE NODELIST&lt;br /&gt;
ABGC_High      up   infinite     12  down* node[043-048,055-060]&lt;br /&gt;
ABGC_High      up   infinite      6    mix fat[001-002],node[002-005]&lt;br /&gt;
ABGC_High      up   infinite     44   idle node[001,006-042,049-054]&lt;br /&gt;
ABGC_Std       up   infinite     12  down* node[043-048,055-060]&lt;br /&gt;
ABGC_Std       up   infinite      6    mix fat[001-002],node[002-005]&lt;br /&gt;
ABGC_Std       up   infinite     44   idle node[001,006-042,049-054]&lt;br /&gt;
ABGC_Low       up   infinite     12  down* node[043-048,055-060]&lt;br /&gt;
ABGC_Low       up   infinite      6    mix fat[001-002],node[002-005]&lt;br /&gt;
ABGC_Low       up   infinite     44   idle node[001,006-042,049-054]&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Defaults ===&lt;br /&gt;
There is no default queue, so you need to specify which queue to use when submitting a job.&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;The default run time for a job is 1 hour!&#039;&#039;&#039; &amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Default memory limit is 100MB per node!&#039;&#039;&#039;&lt;br /&gt;
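&lt;br /&gt;
Both defaults can be overridden when submitting a job; a minimal sketch (the partition, limits and script name are just placeholder values):&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
# submit to the standard queue with a 2 hour time limit and 4096 MB of memory per node&lt;br /&gt;
sbatch -p ABGC_Std --time=2:00:00 --mem=4096 myjob.sh&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;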
&lt;br /&gt;
== Submitting jobs: sbatch ==&lt;br /&gt;
&lt;br /&gt;
=== Example ===&lt;br /&gt;
Consider this simple Python 3 script that computes an approximation of Pi at very high decimal precision:&lt;br /&gt;
&amp;lt;source lang=&#039;python&#039;&amp;gt;&lt;br /&gt;
from decimal import *&lt;br /&gt;
D=Decimal&lt;br /&gt;
getcontext().prec=10000000&lt;br /&gt;
p=sum(D(1)/16**k*(D(4)/(8*k+1)-D(2)/(8*k+4)-D(1)/(8*k+5)-D(1)/(8*k+6))for k in range(411))&lt;br /&gt;
print(str(p)[:10000002])&lt;br /&gt;
&amp;lt;/source&amp;gt; &lt;br /&gt;
&lt;br /&gt;
=== Loading modules ===&lt;br /&gt;
In order for this script to run, Python3, which is not the default Python version on the cluster, first needs to be loaded into your environment. Availability of (different versions of) software can be checked with the following command:&lt;br /&gt;
  module avail&lt;br /&gt;
&lt;br /&gt;
In the list you should see that python3 is indeed available; it can then be loaded with the following command:&lt;br /&gt;
  module load python/3.3.3&lt;br /&gt;
&lt;br /&gt;
=== Batch script ===&lt;br /&gt;
The following shell/slurm script can then be used to schedule the job using the sbatch command:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --account=773320000&lt;br /&gt;
#SBATCH --time=1200&lt;br /&gt;
#SBATCH --mem=2048&lt;br /&gt;
#SBATCH --ntasks=1&lt;br /&gt;
#SBATCH --output=output_%j.txt&lt;br /&gt;
#SBATCH --error=error_output_%j.txt&lt;br /&gt;
#SBATCH --job-name=calc_pi.py&lt;br /&gt;
#SBATCH --partition=ABGC_Std&lt;br /&gt;
#SBATCH --mail-type=ALL&lt;br /&gt;
#SBATCH --mail-user=email@org.nl&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
time python3 calc_pi.py&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Explanation of the SBATCH parameters used:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --account=773320000&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Charge resources used by this job to the specified account. The account is an arbitrary string. The account name may be changed after job submission using the &amp;lt;tt&amp;gt;scontrol&amp;lt;/tt&amp;gt; command. For WUR users a project number or KTP number is advisable.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --time=1200&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Set a limit on the total run time of the job. A time limit of zero requests that no time limit be imposed. Acceptable time formats include &amp;quot;minutes&amp;quot;, &amp;quot;minutes:seconds&amp;quot;, &amp;quot;hours:minutes:seconds&amp;quot;, &amp;quot;days-hours&amp;quot;, &amp;quot;days-hours:minutes&amp;quot; and &amp;quot;days-hours:minutes:seconds&amp;quot;. So in this example the job will run for a maximum of 1200 minutes.&lt;br /&gt;
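&lt;br /&gt;
For example, the following two lines request the same limit of 2 days and 12 hours (3600 minutes); the values are arbitrary:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
# days-hours:minutes:seconds format&lt;br /&gt;
#SBATCH --time=2-12:00:00&lt;br /&gt;
# plain minutes format&lt;br /&gt;
#SBATCH --time=3600&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;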
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --mem=2048&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
SLURM imposes a memory limit on each job. By default it is deliberately relatively small: 100 MB per node. If your job uses more than that, you will get an error stating that your job &amp;quot;Exceeded job memory limit&amp;quot;. To set a larger limit, add to your job submission:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --mem X&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where X is the maximum amount of memory your job will use per node, in MB. The larger your working data set, the larger this needs to be, but the smaller the number the easier it is for the scheduler to find a place to run your job. To determine an appropriate value, start relatively large (job slots on average have about 4000 MB per core, but that’s much larger than needed for most jobs) and then use sacct to look at how much your job is actually using or used:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
$ sacct -o MaxRSS -j JOBID&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
where JOBID is the one you&#039;re interested in. The number is in KB, so divide by 1024 to get a rough idea of what to use with --mem (set it to something a little larger than that, since you&#039;re defining a hard upper limit). If your job completed long in the past you may have to tell sacct to look further back in time by adding a start time with -S YYYY-MM-DD. Note that for parallel jobs spanning multiple nodes, this is the maximum memory used on any one node; if you&#039;re not setting an even distribution of tasks per node (e.g. with --ntasks-per-node), the same job could have very different values when run at different times.&lt;br /&gt;
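&lt;br /&gt;
Putting this together, a minimal sketch for checking the memory use of an older job (the job id and date are placeholders):&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
# report peak memory and elapsed time for a job that finished some time ago&lt;br /&gt;
sacct -o JobID,JobName,MaxRSS,Elapsed -S 2014-01-01 -j 4220&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;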
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --ntasks=1&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
sbatch does not launch tasks, it requests an allocation of resources and submits a batch script. This option advises the SLURM controller that job steps run within the allocation will launch at most this number of tasks, and to provide sufficient resources. The default is one task per node, but note that the --cpus-per-task option will change this default.&lt;br /&gt;
&lt;br /&gt;
When requesting multiple tasks, you may or may not want the job to be partitioned among multiple nodes. You can specify the minimum number of nodes using the &amp;lt;code&amp;gt;-N&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;--nodes&amp;lt;/code&amp;gt; flag. If you provide only one number, it is used as both the minimum and the maximum. For instance:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
This should force your job to be scheduled to a single node.&lt;br /&gt;
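&lt;br /&gt;
A minimal sketch combining the two options for a small multi-core job on a single node (the numbers are just illustrative):&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
# request 4 tasks and keep them all on one node&lt;br /&gt;
#SBATCH --ntasks=4&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;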
&lt;br /&gt;
Because the cluster has a hybrid configuration, i.e. normal and fat nodes, it may be prudent to schedule your job specifically for one or the other node type, depending for instance on memory requirements. This can be done by using the &amp;lt;code&amp;gt;-C&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;--constraint&amp;lt;/code&amp;gt; flag.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --constraint=normalmem&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
The example above will result in jobs being scheduled to the regular compute nodes. By using &amp;lt;code&amp;gt;largemem&amp;lt;/code&amp;gt; as the constraint, the job will specifically be scheduled to one of the fat nodes.&lt;br /&gt;
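&lt;br /&gt;
For instance, to target the fat nodes instead:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
# schedule the job on one of the fat (large memory) nodes&lt;br /&gt;
#SBATCH --constraint=largemem&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;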
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --output=output_%j.txt&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Instruct SLURM to connect the batch script&#039;s standard output directly to the file name specified in the &amp;quot;filename pattern&amp;quot;. By default both standard output and standard error are directed to a file of the name &amp;quot;slurm-%j.out&amp;quot;, where the &amp;quot;%j&amp;quot; is replaced with the job allocation number. See the --input option for filename specification options.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --error=error_output_%j.txt&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Instruct SLURM to connect the batch script&#039;s standard error directly to the file name specified in the &amp;quot;filename pattern&amp;quot;. By default both standard output and standard error are directed to a file of the name &amp;quot;slurm-%j.out&amp;quot;, where the &amp;quot;%j&amp;quot; is replaced with the job allocation number. See the --input option for filename specification options.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --job-name=calc_pi.py&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Specify a name for the job allocation. The specified name will appear along with the job id number when querying running jobs on the system. The default is the name of the batch script, or just &amp;quot;sbatch&amp;quot; if the script is read on sbatch&#039;s standard input.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --partition=ABGC_Std&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Request a specific partition for the resource allocation. It is preferred to use your organization&#039;s partition.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --mail-type=ALL&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Notify user by email when certain event types occur. Valid type values are BEGIN, END, FAIL, REQUEUE, and ALL (any state change). The user to be notified is indicated with --mail-user.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --mail-user=email@org.nl&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Email address to use.&lt;br /&gt;
&lt;br /&gt;
=== Submitting ===&lt;br /&gt;
The script, assuming it was named &#039;run_calc_pi.sh&#039;, can then be submitted using the following command:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
sbatch run_calc_pi.sh&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Submitting multiple jobs ===&lt;br /&gt;
Assuming there are 10 job scripts, named runscript_1.sh through runscript_10.sh, all of them can be submitted using the following line of shell code:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;for i in `seq 1 10`; do echo $i; sbatch runscript_$i.sh;done&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Monitoring submitted jobs ==&lt;br /&gt;
Once a job is submitted, the status can be monitored using the &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; command. The &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; command has a number of parameters for monitoring specific properties of the jobs such as time limit.&lt;br /&gt;
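&lt;br /&gt;
A couple of commonly useful filters, as a minimal sketch (the partition name is just an example):&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
# show only your own jobs&lt;br /&gt;
squeue -u $USER&lt;br /&gt;
# show jobs in a particular partition, in long format&lt;br /&gt;
squeue -l -p ABGC_Std&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;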
&lt;br /&gt;
=== Generic monitoring of all running jobs ===&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
  squeue&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You should then get a list of jobs that are running on the cluster at that time. For jobs submitted as in the &#039;sbatch&#039; example above, it may look like this:&lt;br /&gt;
    JOBID PARTITION     NAME     USER  ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
   3396      ABGC BOV-WUR- megen002   R      27:26      1 node004&lt;br /&gt;
   3397      ABGC BOV-WUR- megen002   R      27:26      1 node005&lt;br /&gt;
   3398      ABGC BOV-WUR- megen002   R      27:26      1 node006&lt;br /&gt;
   3399      ABGC BOV-WUR- megen002   R      27:26      1 node007&lt;br /&gt;
   3400      ABGC BOV-WUR- megen002   R      27:26      1 node008&lt;br /&gt;
   3401      ABGC BOV-WUR- megen002   R      27:26      1 node009&lt;br /&gt;
   3385  research BOV-WUR- megen002   R      44:38      1 node049&lt;br /&gt;
   3386  research BOV-WUR- megen002   R      44:38      1 node050&lt;br /&gt;
   3387  research BOV-WUR- megen002   R      44:38      1 node051&lt;br /&gt;
   3388  research BOV-WUR- megen002   R      44:38      1 node052&lt;br /&gt;
   3389  research BOV-WUR- megen002   R      44:38      1 node053&lt;br /&gt;
   3390  research BOV-WUR- megen002   R      44:38      1 node054&lt;br /&gt;
   3391  research BOV-WUR- megen002   R      44:38      3 node[049-051]&lt;br /&gt;
   3392  research BOV-WUR- megen002   R      44:38      3 node[052-054]&lt;br /&gt;
   3393  research BOV-WUR- megen002   R      44:38      1 node001&lt;br /&gt;
   3394  research BOV-WUR- megen002   R      44:38      1 node002&lt;br /&gt;
   3395  research BOV-WUR- megen002   R      44:38      1 node003&lt;br /&gt;
&lt;br /&gt;
=== Monitoring time limit set for a specific job ===&lt;br /&gt;
The default time limit is set at one hour. Estimated run times need to be specified when submitting jobs. To see what time limit is set for a certain job, use the &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; command.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
squeue -l -j 3532&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Information similar to the following should appear:&lt;br /&gt;
  Fri Nov 29 15:41:00 2013&lt;br /&gt;
   JOBID PARTITION     NAME     USER    STATE       TIME TIMELIMIT  NODES NODELIST(REASON)&lt;br /&gt;
   3532      ABGC BOV-WUR- megen002  RUNNING    2:47:03 3-08:00:00      1 node054&lt;br /&gt;
&lt;br /&gt;
=== Query a specific active job: scontrol ===&lt;br /&gt;
Show all the details of a currently active job (this does not work for completed jobs).&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
nfs01 ~]$ scontrol show jobid 4241&lt;br /&gt;
JobId=4241 Name=WB20F06&lt;br /&gt;
   UserId=megen002(16795409) GroupId=domain users(16777729)&lt;br /&gt;
   Priority=1 Account=(null) QOS=normal&lt;br /&gt;
   JobState=RUNNING Reason=None Dependency=(null)&lt;br /&gt;
   Requeue=1 Restarts=0 BatchFlag=1 ExitCode=0:0&lt;br /&gt;
   RunTime=02:55:25 TimeLimit=3-08:00:00 TimeMin=N/A&lt;br /&gt;
   SubmitTime=2013-12-09T13:37:29 EligibleTime=2013-12-09T13:37:29&lt;br /&gt;
   StartTime=2013-12-09T13:37:29 EndTime=2013-12-12T21:37:29&lt;br /&gt;
   PreemptTime=None SuspendTime=None SecsPreSuspend=0&lt;br /&gt;
   Partition=research AllocNode:Sid=nfs01:21799&lt;br /&gt;
   ReqNodeList=(null) ExcNodeList=(null)&lt;br /&gt;
   NodeList=node023&lt;br /&gt;
   BatchHost=node023&lt;br /&gt;
   NumNodes=1 NumCPUs=4 CPUs/Task=1 ReqS:C:T=*:*:*&lt;br /&gt;
   MinCPUsNode=1 MinMemoryNode=0 MinTmpDiskNode=0&lt;br /&gt;
   Features=(null) Gres=(null) Reservation=(null)&lt;br /&gt;
   Shared=OK Contiguous=0 Licenses=(null) Network=(null)&lt;br /&gt;
   Command=/lustre/scratch/WUR/ABGC/...&lt;br /&gt;
   WorkDir=/lustre/scratch/WUR/ABGC/...&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Check on a pending job ===&lt;br /&gt;
A submitted job can end up in a pending state when there are not enough resources available for it.&lt;br /&gt;
In this example I submit a job, check the status, and after finding out it is &#039;&#039;&#039;pending&#039;&#039;&#039; I check when it will probably start.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
[@nfs01 jobs]$ sbatch hpl_student.job&lt;br /&gt;
 Submitted batch job 740338&lt;br /&gt;
&lt;br /&gt;
[@nfs01 jobs]$ squeue -l -j 740338&lt;br /&gt;
 Fri Feb 21 15:32:31 2014&lt;br /&gt;
  JOBID PARTITION     NAME     USER    STATE       TIME TIMELIMIT  NODES NODELIST(REASON)&lt;br /&gt;
 740338 ABGC_Stud HPLstude bohme999  PENDING       0:00 1-00:00:00      1 (ReqNodeNotAvail)&lt;br /&gt;
&lt;br /&gt;
[@nfs01 jobs]$ squeue --start -j 740338&lt;br /&gt;
  JOBID PARTITION     NAME     USER  ST           START_TIME  NODES NODELIST(REASON)&lt;br /&gt;
 740338 ABGC_Stud HPLstude bohme999  PD  2014-02-22T15:31:48      1 (ReqNodeNotAvail)&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
So it seems this job will probably start the next day, but that is no guarantee that it actually will.&lt;br /&gt;
&lt;br /&gt;
== Removing jobs from a list: scancel ==&lt;br /&gt;
If for some reason you want to delete a job that is either in the queue or already running, you can remove it using the &#039;scancel&#039; command. The &#039;scancel&#039; command takes the jobid as a parameter. For the example above, this would be done using the following code:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
scancel 3401&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
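&lt;br /&gt;
scancel can also act on several jobs at once; a minimal sketch (use with care):&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
# cancel all of your own jobs&lt;br /&gt;
scancel -u $USER&lt;br /&gt;
# cancel only your own pending jobs&lt;br /&gt;
scancel -u $USER -t PENDING&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;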
&lt;br /&gt;
== Allocating resources interactively: salloc ==&lt;br /&gt;
It&#039;s possible to set up an interactive session using salloc. Run salloc as follows:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
salloc -p &amp;lt;partition, say, ABGC_Low&amp;gt;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
And because of the magic of SallocDefaultCommand, you will immediately be transported to a new prompt.&lt;br /&gt;
&lt;br /&gt;
Here, run &#039;hostname&#039; to see which node your shell has been transported to.&lt;br /&gt;
&lt;br /&gt;
If you don&#039;t want your shell to be transported but want a new remote shell, do:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
salloc -p ABGC_Low $SHELL&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Now your shell will stay on nfs01, but you can do:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
srun &amp;lt;command&amp;gt; &amp;amp;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
To submit tasks to the new allocation!&lt;br /&gt;
&lt;br /&gt;
Be aware that the default time limit of salloc is 1 hour. If you intend to run jobs longer than this, you need to request a longer time limit explicitly. See: https://computing.llnl.gov/linux/slurm/salloc.html&lt;br /&gt;
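&lt;br /&gt;
For example, a minimal sketch of requesting a longer interactive allocation (the partition name and the 8 hour limit are just illustrative values):&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
# request an interactive allocation with an 8 hour time limit instead of the default 1 hour&lt;br /&gt;
salloc -p ABGC_Low -t 8:00:00&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;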
&lt;br /&gt;
== Get overview of past and current jobs: sacct ==&lt;br /&gt;
To do some accounting on past and present jobs, and to see whether they ran to completion, you can do:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
sacct&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
This should provide information similar to the following:&lt;br /&gt;
&lt;br /&gt;
         JobID    JobName  Partition    Account  AllocCPUS      State ExitCode &lt;br /&gt;
  ------------ ---------- ---------- ---------- ---------- ---------- -------- &lt;br /&gt;
  3385         BOV-WUR-58   research                    12  COMPLETED      0:0 &lt;br /&gt;
  3385.batch        batch                                1  COMPLETED      0:0 &lt;br /&gt;
  3386         BOV-WUR-59   research                    12 CANCELLED+      0:0 &lt;br /&gt;
  3386.batch        batch                                1  CANCELLED     0:15 &lt;br /&gt;
  3528         BOV-WUR-59       ABGC                    16    RUNNING      0:0 &lt;br /&gt;
  3529         BOV-WUR-60       ABGC                    16    RUNNING      0:0&lt;br /&gt;
&lt;br /&gt;
Or in more detail for a specific job:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
sacct --format=jobid,jobname,account,partition,ntasks,alloccpus,elapsed,state,exitcode -j 4220&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
This should provide information about job id 4220:&lt;br /&gt;
&lt;br /&gt;
       JobID    JobName    Account  Partition   NTasks  AllocCPUS    Elapsed      State ExitCode &lt;br /&gt;
  ------------ ---------- ---------- ---------- -------- ---------- ---------- ---------- -------- &lt;br /&gt;
  4220         PreProces+              research                   3   00:30:52  COMPLETED      0:0 &lt;br /&gt;
  4220.batch        batch                              1          1   00:30:52  COMPLETED      0:0&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Job Status Codes&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Typically your job will be in either the RUNNING or the PENDING state. However, here is a breakdown of all the states your job could be in.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!Code!!State!!Description&lt;br /&gt;
|-&lt;br /&gt;
|CA	||CANCELLED||	Job was explicitly cancelled by the user or system administrator. The job may or may not have been initiated.&lt;br /&gt;
|-&lt;br /&gt;
|CD||	COMPLETED||	Job has terminated all processes on all nodes.&lt;br /&gt;
|-&lt;br /&gt;
|CF||	CONFIGURING||	Job has been allocated resources, but is waiting for them to become ready for use (e.g. booting).&lt;br /&gt;
|-&lt;br /&gt;
|CG||	COMPLETING||	Job is in the process of completing. Some processes on some nodes may still be active.&lt;br /&gt;
|-&lt;br /&gt;
|F||	FAILED||	Job terminated with non-zero exit code or other failure condition.&lt;br /&gt;
|-&lt;br /&gt;
|NF||	NODE_FAIL||	Job terminated due to failure of one or more allocated nodes.&lt;br /&gt;
|-&lt;br /&gt;
|PD||	PENDING||	Job is awaiting resource allocation.&lt;br /&gt;
|-&lt;br /&gt;
|R||	RUNNING||	Job currently has an allocation.&lt;br /&gt;
|-&lt;br /&gt;
|S||	SUSPENDED||	Job has an allocation, but execution has been suspended.&lt;br /&gt;
|-&lt;br /&gt;
|TO||	TIMEOUT||	Job terminated upon reaching its time limit.&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
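&lt;br /&gt;
These codes can also be used to filter listings. A minimal sketch (the start date is just an example value):&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
# show only your own pending jobs&lt;br /&gt;
squeue -u $USER -t PENDING&lt;br /&gt;
# list your failed jobs recorded since the given start date&lt;br /&gt;
sacct -s FAILED -S 2014-01-01&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;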
&lt;br /&gt;
== Running MPI jobs on B4F cluster ==&lt;br /&gt;
&lt;br /&gt;
[[MPI_on_B4F_cluster | Main article: MPI on B4F Cluster]]&lt;br /&gt;
&amp;lt; text here &amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Understanding which resources are available to you: sinfo ==&lt;br /&gt;
By using the &#039;sinfo&#039; command you can retrieve information on which &#039;Partitions&#039; are available to you. A &#039;Partition&#039; in SLURM is similar to a &#039;queue&#039; in the Sun Grid Engine (&#039;qsub&#039;). The different Partitions grant different levels of resource allocation. Not all defined Partitions will be available to any given person. E.g., Master students will only have the &#039;student&#039; Partition available, while researchers at the ABGC will have the &#039;student&#039;, &#039;research&#039;, and &#039;ABGC&#039; Partitions available. The higher the level of resource allocation, though, the higher the cost per compute-hour. The default Partition is the &#039;student&#039; partition. A full list of Partitions can be found on the Bright Cluster Manager webpage.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
sinfo&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
  PARTITION AVAIL  TIMELIMIT  NODES  STATE NODELIST&lt;br /&gt;
  student*     up   infinite     12  down* node[043-048,055-060]&lt;br /&gt;
  student*     up   infinite     50   idle fat[001-002],node[001-042,049-054]&lt;br /&gt;
  research     up   infinite     12  down* node[043-048,055-060]&lt;br /&gt;
  research     up   infinite     50   idle fat[001-002],node[001-042,049-054]&lt;br /&gt;
  ABGC         up   infinite     12  down* node[043-048,055-060]&lt;br /&gt;
  ABGC         up   infinite     50   idle fat[001-002],node[001-042,049-054]&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
* [[B4F_cluster | B4F Cluster]]&lt;br /&gt;
* [[BCM_on_B4F_cluster | BCM on B4F cluster]]&lt;br /&gt;
* [[SLURM_Compare | SLURM compared to other common schedulers]]&lt;br /&gt;
* [[Setting_up_Python_virtualenv | Setting up and using a virtual environment for Python3 ]]&lt;br /&gt;
&lt;br /&gt;
== External links ==&lt;br /&gt;
* [http://slurm.schedmd.com Slurm official documentation]&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Simple_Linux_Utility_for_Resource_Management Slurm on Wikipedia]&lt;br /&gt;
* [http://www.youtube.com/watch?v=axWffyrk3aY Slurm Tutorial on Youtube]&lt;/div&gt;</summary>
		<author><name>Megen002</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=List_of_users&amp;diff=1768</id>
		<title>List of users</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=List_of_users&amp;diff=1768"/>
		<updated>2017-06-07T12:45:34Z</updated>

		<summary type="html">&lt;p&gt;Megen002: /* Alumni */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Active users ==&lt;br /&gt;
&lt;br /&gt;
List of users, in alphabetical order (last/family name). Provide some background by adding information to the &#039;User:username&#039; page.&lt;br /&gt;
&lt;br /&gt;
* [[User:Barris01 | Wes Barris (Cobb)]]&lt;br /&gt;
* [[User:Basti015 | John Bastiaansen (WUR/ABGC)]]&lt;br /&gt;
* [[User:Binsb003 | Rianne van Binsbergen (WUR/ABGC)]]&lt;br /&gt;
* [[User:Bosse014 | Mirte Bosse (WUR/ABGC)]]&lt;br /&gt;
* [[User:Bouwm024 | Aniek Bouwman (WUR/ABGC)]]&lt;br /&gt;
* [[User:Brasc001 | Pim Brascamp (WUR/ABGC)]]&lt;br /&gt;
* [[User:Calus001 | Mario Calus (WUR/ABGC)]]&lt;br /&gt;
* [[User:dongen01 | Henk van Dongen (TOPIGS)]]&lt;br /&gt;
* [[User:Frans004 | Wietse Franssen (WUR/ESG)]]&lt;br /&gt;
* [[User:Haars001 | Jan van Haarst (WUR/PRI)]]&lt;br /&gt;
* [[User:Hulse002 | Ina Hulsegge (WUR/ABGC)]] &lt;br /&gt;
* [[User:Hulze001 | Alex Hulzebosch (WUR/ABGC)]]&lt;br /&gt;
* [[User:Lopes01 | Marcos Soares Lopes (TOPIGS)]]&lt;br /&gt;
* [[User:Madse001 | Ole Madsen (WUR/ABGC)]]&lt;br /&gt;
* [[User:Megen002 | Hendrik-Jan Megens (WUR/ABGC)]]&lt;br /&gt;
* [[User:Nijve002 | Harm Nijveen (WUR/Bioinformatics)]]&lt;br /&gt;
* [[User:Schroo01 | Chris Schrooten (CRV)]]&lt;br /&gt;
* [[User:Schur010 | Anouk Schurink (WUR/ABGC)]]&lt;br /&gt;
* [[User:Smit089 | Sandra Smit (WUR/Bioinformatics)]]&lt;br /&gt;
* [[User:Vande018 | Jeremie Vandenplas (WUR/ABGC)]]&lt;br /&gt;
* [[User:Veerk001 | Roel Veerkamp (WUR/ABGC)]]&lt;br /&gt;
* [[User:Vereij01 | Addie Vereijken (Hendrix Genetics)]]&lt;br /&gt;
* [[User:derks047 | Martijn Derks (WUR/ABGC)]]&lt;br /&gt;
&lt;br /&gt;
== FB-ICT Management of the HPC == &lt;br /&gt;
&lt;br /&gt;
* [[User:Dawes001 | Gwen Dawes (WUR, FB-IT) - HPC System Administrator]]&lt;br /&gt;
* [[User:Janss115 | Stephen Janssen (WUR, FB-IT, Service Management)]]&lt;br /&gt;
* [[User:pollm001 | Koen Pollmann (WUR, FB-IT, Infrastructure)]]&lt;br /&gt;
&lt;br /&gt;
== Alumni ==&lt;br /&gt;
* [[User:Bohme001 | Andre ten Böhmer (WUR, FB-IT, Infrastructure)]]&lt;br /&gt;
* [[User:Frant001 | Laurent Frantz (WUR/ABGC)]]&lt;br /&gt;
* [[User:Herrer01 | Juanma Herrero (WUR/ABGC)]]&lt;br /&gt;
* [[User:paude004 | Yogesh Paudel (WUR/ABGC)]]&lt;/div&gt;</summary>
		<author><name>Megen002</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=List_of_users&amp;diff=1767</id>
		<title>List of users</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=List_of_users&amp;diff=1767"/>
		<updated>2017-06-07T12:45:16Z</updated>

		<summary type="html">&lt;p&gt;Megen002: /* Active users */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Active users ==&lt;br /&gt;
&lt;br /&gt;
List of users, in alphabetical order (last/family name). Provide some background by adding information to the &#039;User:username&#039; page.&lt;br /&gt;
&lt;br /&gt;
* [[User:Barris01 | Wes Barris (Cobb)]]&lt;br /&gt;
* [[User:Basti015 | John Bastiaansen (WUR/ABGC)]]&lt;br /&gt;
* [[User:Binsb003 | Rianne van Binsbergen (WUR/ABGC)]]&lt;br /&gt;
* [[User:Bosse014 | Mirte Bosse (WUR/ABGC)]]&lt;br /&gt;
* [[User:Bouwm024 | Aniek Bouwman (WUR/ABGC)]]&lt;br /&gt;
* [[User:Brasc001 | Pim Brascamp (WUR/ABGC)]]&lt;br /&gt;
* [[User:Calus001 | Mario Calus (WUR/ABGC)]]&lt;br /&gt;
* [[User:dongen01 | Henk van Dongen (TOPIGS)]]&lt;br /&gt;
* [[User:Frans004 | Wietse Franssen (WUR/ESG)]]&lt;br /&gt;
* [[User:Haars001 | Jan van Haarst (WUR/PRI)]]&lt;br /&gt;
* [[User:Hulse002 | Ina Hulsegge (WUR/ABGC)]] &lt;br /&gt;
* [[User:Hulze001 | Alex Hulzebosch (WUR/ABGC)]]&lt;br /&gt;
* [[User:Lopes01 | Marcos Soares Lopes (TOPIGS)]]&lt;br /&gt;
* [[User:Madse001 | Ole Madsen (WUR/ABGC)]]&lt;br /&gt;
* [[User:Megen002 | Hendrik-Jan Megens (WUR/ABGC)]]&lt;br /&gt;
* [[User:Nijve002 | Harm Nijveen (WUR/Bioinformatics)]]&lt;br /&gt;
* [[User:Schroo01 | Chris Schrooten (CRV)]]&lt;br /&gt;
* [[User:Schur010 | Anouk Schurink (WUR/ABGC)]]&lt;br /&gt;
* [[User:Smit089 | Sandra Smit (WUR/Bioinformatics)]]&lt;br /&gt;
* [[User:Vande018 | Jeremie Vandenplas (WUR/ABGC)]]&lt;br /&gt;
* [[User:Veerk001 | Roel Veerkamp (WUR/ABGC)]]&lt;br /&gt;
* [[User:Vereij01 | Addie Vereijken (Hendrix Genetics)]]&lt;br /&gt;
* [[User:derks047 | Martijn Derks (WUR/ABGC)]]&lt;br /&gt;
&lt;br /&gt;
== FB-ICT Management of the HPC == &lt;br /&gt;
&lt;br /&gt;
* [[User:Dawes001 | Gwen Dawes (WUR, FB-IT) - HPC System Administrator]]&lt;br /&gt;
* [[User:Janss115 | Stephen Janssen (WUR, FB-IT, Service Management)]]&lt;br /&gt;
* [[User:pollm001 | Koen Pollmann (WUR, FB-IT, Infrastructure)]]&lt;br /&gt;
&lt;br /&gt;
== Alumni ==&lt;br /&gt;
* [[User:Bohme001 | Andre ten Böhmer (WUR, FB-IT, Infrastructure)]]&lt;br /&gt;
* [[User:Herrer01 | Juanma Herrero (WUR/ABGC)]]&lt;br /&gt;
* [[User:paude004 | Yogesh Paudel (WUR/ABGC)]]&lt;/div&gt;</summary>
		<author><name>Megen002</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=User:Megen002&amp;diff=1766</id>
		<title>User:Megen002</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=User:Megen002&amp;diff=1766"/>
		<updated>2017-06-07T12:42:20Z</updated>

		<summary type="html">&lt;p&gt;Megen002: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Hendrik-Jan Megens ==&lt;br /&gt;
&lt;br /&gt;
[[File:HJM.jpg|right]]&lt;br /&gt;
* Profile on [https://www.vcard.wur.nl/Views/Profile/View.aspx?id=4846 We@WUR]&lt;br /&gt;
* Profile on [http://www.linkedin.com/pub/hendrik-jan-megens/24/536/2b8 LinkedIn]&lt;br /&gt;
* Profile on [http://scholar.google.nl/citations?user=kGUIXOYAAAAJ Google Scholar]&lt;br /&gt;
&lt;br /&gt;
Assistant Professor at Wageningen University, Animal Breeding and Genomics Centre. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Scientific interests:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Coming from a wet-lab background, I discovered I had more talent for programming than for pipetting. I moved into applied bioinformatics in 2004, while retaining focus on my research interests:&lt;br /&gt;
* evolutionary genomics (generation and maintenance of, and selection on, structural and single nucleotide variation; speciation and outbreeding depression; inbreeding depression and heterosis)&lt;br /&gt;
* population genetics (genetic consequences of population management, domestication, selection)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Main current projects&#039;&#039;&#039;&lt;br /&gt;
* Genome sequencing of pig and turkey.&lt;br /&gt;
* Re-sequencing projects on various livestock species&lt;br /&gt;
* Functional aspects of genome variation&lt;br /&gt;
* De novo assembly and annotation of genomes&lt;br /&gt;
&lt;br /&gt;
We are currently sequencing &amp;gt;300 pigs, wild boar, and outgroup species. The project aims to elucidate major patterns in biogeography and domestication of the pig, resulting from selection and demography.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Procedures&#039;&#039;&#039;&lt;br /&gt;
* Main sequencing platform is Illumina (we started on the Solexa GA in 2008 and currently use the Illumina HiSeq).&lt;br /&gt;
* Depending on research questions various short-read mapping programs are used (Mosaik, BWA, BWA/Stampy, MrsFAST)&lt;br /&gt;
* Variant calling (Samtools, GATK).&lt;br /&gt;
* Functional analysis of variants (Annovar, custom-scripted tools)&lt;br /&gt;
* Various population- and phylogenomic approaches to tackle specific questions (RAxML, Beagle, coalhmm, etc., and custom-scripted tools)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Main programming languages:&#039;&#039;&#039;&lt;br /&gt;
* Python&lt;br /&gt;
* R&lt;br /&gt;
* Linux shell scripting&lt;br /&gt;
* Perl (once upon a time....)&lt;br /&gt;
* SQL&lt;br /&gt;
&lt;br /&gt;
Favorite distros:  Ubuntu, Debian, Fedora, CentOS, Raspbian&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
[[List_of_users | List of users of the HPC Agrogenomics]]&lt;/div&gt;</summary>
		<author><name>Megen002</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=JBrowse&amp;diff=1752</id>
		<title>JBrowse</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=JBrowse&amp;diff=1752"/>
		<updated>2016-12-16T13:03:21Z</updated>

		<summary type="html">&lt;p&gt;Megen002: /* Install JBrowse */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== Typical commands used to set up a JBrowse === &lt;br /&gt;
&lt;br /&gt;
Author: Martijn Derks&lt;br /&gt;
&lt;br /&gt;
* JBrowse is available for multiple species:&lt;br /&gt;
** https://jbrowse.hpcagrogenomics.wur.nl/pig/&lt;br /&gt;
** https://jbrowse.hpcagrogenomics.wur.nl/chicken/&lt;br /&gt;
** https://jbrowse.hpcagrogenomics.wur.nl/cattle/&lt;br /&gt;
** https://jbrowse.hpcagrogenomics.wur.nl/turkey/&lt;br /&gt;
** https://jbrowse.hpcagrogenomics.wur.nl/Cyprinus_carpio/&lt;br /&gt;
* Users are free to add useful commands to this tutorial&lt;br /&gt;
&lt;br /&gt;
=== Install JBrowse ===&lt;br /&gt;
&lt;br /&gt;
Download the latest JBrowse here: http://jbrowse.org/&lt;br /&gt;
&lt;br /&gt;
Make a directory in &amp;lt;code&amp;gt;/cm/shared/apps/jbrowse/&amp;lt;/code&amp;gt; for your species of interest (e.g. &amp;lt;code&amp;gt;mkdir Cyprinus_carpio&amp;lt;/code&amp;gt;). Move the downloaded JBrowse source files there. All further procedures detailed in this Wiki page assume working from that directory (NOTE: if your species of interest is already there, contact the maintainer of that JBrowse instance).&lt;br /&gt;
Run the setup script to install Perl dependencies and required modules:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
unzip JBrowse-1.12.0.zip&lt;br /&gt;
mv JBrowse-1.12.0/* $PWD&lt;br /&gt;
./setup.sh&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Add reference sequence ===&lt;br /&gt;
&lt;br /&gt;
Example code for chicken genome&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
bin/prepare-refseqs.pl --fasta /lustre/nobackup/WUR/ABGC/shared/public_data_store/genomes/chicken/Ensembl74/Gallus_gallus.Galgal4.74.dna.toplevel.fa&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To remove tracks, use the following command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
bin/remove-track.pl -D --trackLabel &#039;trackname&#039;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Add annotation files (GFF/BED)===&lt;br /&gt;
&lt;br /&gt;
Data can be downloaded from the Ensembl FTP site: http://www.ensembl.org/info/data/ftp/index.html&lt;br /&gt;
&lt;br /&gt;
Add gene features:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
bin/flatfile-to-json.pl --key &amp;quot;Gene spans&amp;quot; --className feature5 --type gene --noSubfeatures --config &#039;{ &amp;quot;category&amp;quot;: &amp;quot;GalGal4.83 Annotation&amp;quot; }&#039; --trackLabel Genes --gff ../ensembl_data/Gallus_gallus.Galgal4.83.gff3&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Add corresponding transcripts:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
bin/flatfile-to-json.pl --key &amp;quot;Transcripts&amp;quot; --className transcript --subfeatureClasses &#039;{&amp;quot;exon&amp;quot;: &amp;quot;exon&amp;quot;, &amp;quot;CDS&amp;quot;: &amp;quot;CDS&amp;quot;, &amp;quot;five_prime_UTR&amp;quot;: &amp;quot;five_prime_UTR&amp;quot;, &amp;quot;three_prime_UTR&amp;quot;: &amp;quot;three_prime_UTR&amp;quot;}&#039; --config &#039;{ &amp;quot;category&amp;quot;: &amp;quot;GalGal4.83 Annotation&amp;quot; }&#039; --type transcript --trackLabel Transcripts --gff ../ensembl_data/Gallus_gallus.Galgal4.83.gff3&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Alignment tracks (BAM)===&lt;br /&gt;
&lt;br /&gt;
You can load single BAM files with the following command:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
bin/add-bam-track.pl --label &amp;lt;label&amp;gt; --bam_url &amp;lt;url&amp;gt;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To load multiple BAM files present in a certain directory use:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
for bam in /&amp;lt;dir&amp;gt;*.bam; do&lt;br /&gt;
        ln -s $bam track_symlinks/ ## Make symlinks from the BAM files&lt;br /&gt;
        ln -s $bam.bai track_symlinks/ ## Make symlinks to the BAM index files&lt;br /&gt;
        tissue=`echo $bam | rev | cut -c 5- | cut -d&#039;/&#039; -f1 | rev` ## Use the name of the file without .bam as the trackLabel&lt;br /&gt;
        &lt;br /&gt;
        ## Add BAM in alignment mode (Alignments2)&lt;br /&gt;
        echo &#039;{&lt;br /&gt;
                &amp;quot;label&amp;quot; : &amp;quot;&#039;${tissue}&#039;_alignment&amp;quot;,&lt;br /&gt;
                &amp;quot;key&amp;quot; : &amp;quot;&#039;${tissue}&#039;_alignment&amp;quot;,&lt;br /&gt;
                &amp;quot;storeClass&amp;quot; : &amp;quot;JBrowse/Store/SeqFeature/BAM&amp;quot;,&lt;br /&gt;
                &amp;quot;urlTemplate&amp;quot; : &amp;quot;../track_symlinks/&#039;${tissue}&#039;&amp;quot;,&lt;br /&gt;
                &amp;quot;category&amp;quot; : &amp;quot;3. RNA-seq alignments&amp;quot;,&lt;br /&gt;
                &amp;quot;type&amp;quot; : &amp;quot;Alignments2&amp;quot;&lt;br /&gt;
        }&#039; | bin/add-track-json.pl data/trackList.json&lt;br /&gt;
&lt;br /&gt;
        ## Add BAM in coverage mode (SNPCoverage)&lt;br /&gt;
        echo &#039;{&lt;br /&gt;
                &amp;quot;label&amp;quot; : &amp;quot;&#039;${tissue}&#039;_coverage&amp;quot;,&lt;br /&gt;
                &amp;quot;key&amp;quot; : &amp;quot;&#039;${tissue}&#039;_coverage&amp;quot;,&lt;br /&gt;
                &amp;quot;storeClass&amp;quot; : &amp;quot;JBrowse/Store/SeqFeature/BAM&amp;quot;,&lt;br /&gt;
                &amp;quot;urlTemplate&amp;quot; : &amp;quot;../track_symlinks/&#039;${tissue}&#039;&amp;quot;,&lt;br /&gt;
                &amp;quot;category&amp;quot; : &amp;quot;3. RNA-seq alignments&amp;quot;,&lt;br /&gt;
                &amp;quot;type&amp;quot; : &amp;quot;SNPCoverage&amp;quot;&lt;br /&gt;
        }&#039; | bin/add-track-json.pl data/trackList.json&lt;br /&gt;
&lt;br /&gt;
done&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Make sure the BAM file can be read by everybody; if not, use:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
chmod +r &amp;lt;BAM_file&amp;gt;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Make sure that all directories in the full path of the BAM file are executable:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
chmod +x &amp;lt;dir&amp;gt;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Variant tracks (VCF)===&lt;br /&gt;
&lt;br /&gt;
To load a VCF file in JBrowse, make sure the file is compressed with bgzip and indexed with tabix.&lt;br /&gt;
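&lt;br /&gt;
If the VCF is not yet compressed, a minimal sketch to prepare it (assuming bgzip from the same package as tabix is installed):&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
# compress the VCF with bgzip so that tabix can index it&lt;br /&gt;
bgzip Gallus_gallus_incl_consequences.vcf&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;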
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
tabix -p vcf Gallus_gallus_incl_consequences.vcf.gz&lt;br /&gt;
&lt;br /&gt;
echo &#039; {&lt;br /&gt;
       &amp;quot;label&amp;quot; : &amp;quot;Gallus_gallus_incl_consequences&amp;quot;,&lt;br /&gt;
       &amp;quot;key&amp;quot; : &amp;quot;Gallus_gallus_incl_consequences&amp;quot;,&lt;br /&gt;
       &amp;quot;storeClass&amp;quot; : &amp;quot;JBrowse/Store/SeqFeature/VCFTabix&amp;quot;,&lt;br /&gt;
       &amp;quot;urlTemplate&amp;quot; : &amp;quot;../../ensembl_data/VCF/Gallus_gallus_incl_consequences.vcf.gz&amp;quot;,&lt;br /&gt;
       &amp;quot;category&amp;quot; : &amp;quot;2. Variants&amp;quot;,&lt;br /&gt;
       &amp;quot;type&amp;quot; : &amp;quot;HTMLVariants&amp;quot;&lt;br /&gt;
     } &#039; | bin/add-track-json.pl data/trackList.json&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Wiggle/BigWig tracks (WIG)===&lt;br /&gt;
&lt;br /&gt;
You can load single BigWig files with the following command:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
bin/add-bw-track.pl --label &amp;lt;label&amp;gt; --bw_url &amp;lt;url&amp;gt;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To load multiple BigWig files present in a certain directory use:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
for bw in /lustre/nobackup/WUR/ABGC/shared/Chicken/Ensembl_Rna_Seq/*.bw; do&lt;br /&gt;
        ln -s $bw track_symlinks/&lt;br /&gt;
        tissue=`echo $bw | rev | cut -c 8- | cut -d&#039;/&#039; -f1 | rev`&lt;br /&gt;
        echo &#039;{&lt;br /&gt;
                &amp;quot;label&amp;quot; : &amp;quot;&#039;${tissue}&#039;_BWcoverage&amp;quot;,&lt;br /&gt;
                &amp;quot;key&amp;quot; : &amp;quot;&#039;${tissue}&#039;_BWcoverage&amp;quot;,&lt;br /&gt;
                &amp;quot;storeClass&amp;quot; : &amp;quot;BigWig&amp;quot;,&lt;br /&gt;
                &amp;quot;urlTemplate&amp;quot; : &amp;quot;../track_symlinks/&#039;${tissue}.bam.bw&#039;&amp;quot;,&lt;br /&gt;
                &amp;quot;category&amp;quot; : &amp;quot;3. RNA-seq alignments&amp;quot;,&lt;br /&gt;
                &amp;quot;type&amp;quot; : &amp;quot;JBrowse/View/Track/Wiggle/XYPlot&amp;quot;,&lt;br /&gt;
                &amp;quot;variance_band&amp;quot; : true&lt;br /&gt;
        }&#039; | bin/add-track-json.pl data/trackList.json&lt;br /&gt;
done&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Make sure the BigWig file can be read by everybody; if not, use:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
chmod +r &amp;lt;BAM_file&amp;gt;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Make sure that all directories in the full path of the BigWig file are executable:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
chmod +x &amp;lt;dir&amp;gt;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Evidence tracks===&lt;br /&gt;
&lt;br /&gt;
Evidence tracks can be loaded in bed, gff and gbk format using &lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
bin/flatfile-to-json.pl&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Examples are given above.&lt;/div&gt;</summary>
		<author><name>Megen002</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=JBrowse&amp;diff=1751</id>
		<title>JBrowse</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=JBrowse&amp;diff=1751"/>
		<updated>2016-12-16T12:56:54Z</updated>

		<summary type="html">&lt;p&gt;Megen002: /* Typical commands used to set up a JBrowse */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== Typical commands used to set up a JBrowse === &lt;br /&gt;
&lt;br /&gt;
Author: Martijn Derks&lt;br /&gt;
&lt;br /&gt;
* JBrowse is available for multiple species:&lt;br /&gt;
** https://jbrowse.hpcagrogenomics.wur.nl/pig/&lt;br /&gt;
** https://jbrowse.hpcagrogenomics.wur.nl/chicken/&lt;br /&gt;
** https://jbrowse.hpcagrogenomics.wur.nl/cattle/&lt;br /&gt;
** https://jbrowse.hpcagrogenomics.wur.nl/turkey/&lt;br /&gt;
** https://jbrowse.hpcagrogenomics.wur.nl/Cyprinus_carpio/&lt;br /&gt;
* Users are free to add useful commands to this tutorial&lt;br /&gt;
&lt;br /&gt;
=== Install JBrowse ===&lt;br /&gt;
&lt;br /&gt;
Download the latest JBrowse here: http://jbrowse.org/&lt;br /&gt;
&lt;br /&gt;
Make a directory in &amp;lt;code&amp;gt;/cm/shared/apps/jbrowse/&amp;lt;/code&amp;gt; for your species of interest (e.g. &amp;lt;code&amp;gt;mkdir Cyprinus_carpio&amp;lt;/code&amp;gt;). Move the downloaded JBrowse source files there. All further procedures detailed in this Wiki page assume working from that directory (NOTE: if your species of interest is already there, contact the maintainer of that JBrowse instance).&lt;br /&gt;
Run the setup script to install Perl dependencies and required modules:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
unzip JBrowse-1.12.0.zip&lt;br /&gt;
cd JBrowse-1.12.0&lt;br /&gt;
./setup.sh&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Add reference sequence ===&lt;br /&gt;
&lt;br /&gt;
Example code for chicken genome&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
bin/prepare-refseqs.pl --fasta /lustre/nobackup/WUR/ABGC/shared/public_data_store/genomes/chicken/Ensembl74/Gallus_gallus.Galgal4.74.dna.toplevel.fa&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To remove tracks, use the following command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
bin/remove-track.pl -D --trackLabel &#039;trackname&#039;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Add annotation files (GFF/BED)===&lt;br /&gt;
&lt;br /&gt;
Data can be downloaded from the Ensembl FTP site: http://www.ensembl.org/info/data/ftp/index.html&lt;br /&gt;
&lt;br /&gt;
Add gene features:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
bin/flatfile-to-json.pl --key &amp;quot;Gene spans&amp;quot; --className feature5 --type gene --noSubfeatures --config &#039;{ &amp;quot;category&amp;quot;: &amp;quot;GalGal4.83 Annotation&amp;quot; }&#039; --trackLabel Genes --gff ../ensembl_data/Gallus_gallus.Galgal4.83.gff3&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Add corresponding transcripts:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
bin/flatfile-to-json.pl --key &amp;quot;Transcripts&amp;quot; --className transcript --subfeatureClasses &#039;{&amp;quot;exon&amp;quot;: &amp;quot;exon&amp;quot;, &amp;quot;CDS&amp;quot;: &amp;quot;CDS&amp;quot;, &amp;quot;five_prime_UTR&amp;quot;: &amp;quot;five_prime_UTR&amp;quot;, &amp;quot;three_prime_UTR&amp;quot;: &amp;quot;three_prime_UTR&amp;quot;}&#039; --config &#039;{ &amp;quot;category&amp;quot;: &amp;quot;GalGal4.83 Annotation&amp;quot; }&#039; --type transcript --trackLabel Transcripts --gff ../ensembl_data/Gallus_gallus.Galgal4.83.gff3&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Alignment tracks (BAM)===&lt;br /&gt;
&lt;br /&gt;
You can load single BAM files with the following command:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
bin/add-bam-track.pl --label &amp;lt;label&amp;gt; --bam_url &amp;lt;url&amp;gt;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To load multiple BAM files present in a certain directory use:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
for bam in /&amp;lt;dir&amp;gt;*.bam; do&lt;br /&gt;
        ln -s $bam track_symlinks/ ## Make symlinks from the BAM files&lt;br /&gt;
        ln -s $bam.bai track_symlinks/ ## Make symlinks to the BAM index files&lt;br /&gt;
        tissue=`echo $bam | rev | cut -c 5- | cut -d&#039;/&#039; -f1 | rev` ## Use the name of the file without .bam as the trackLabel&lt;br /&gt;
        &lt;br /&gt;
        ## Add BAM in alignment mode (Alignments2)&lt;br /&gt;
        echo &#039;{&lt;br /&gt;
                &amp;quot;label&amp;quot; : &amp;quot;&#039;${tissue}&#039;_alignment&amp;quot;,&lt;br /&gt;
                &amp;quot;key&amp;quot; : &amp;quot;&#039;${tissue}&#039;_alignment&amp;quot;,&lt;br /&gt;
                &amp;quot;storeClass&amp;quot; : &amp;quot;JBrowse/Store/SeqFeature/BAM&amp;quot;,&lt;br /&gt;
                &amp;quot;urlTemplate&amp;quot; : &amp;quot;../track_symlinks/&#039;${tissue}&#039;&amp;quot;,&lt;br /&gt;
                &amp;quot;category&amp;quot; : &amp;quot;3. RNA-seq alignments&amp;quot;,&lt;br /&gt;
                &amp;quot;type&amp;quot; : &amp;quot;Alignments2&amp;quot;&lt;br /&gt;
        }&#039; | bin/add-track-json.pl data/trackList.json&lt;br /&gt;
&lt;br /&gt;
        ## Add BAM in coverage mode (SNPCoverage)&lt;br /&gt;
        echo &#039;{&lt;br /&gt;
                &amp;quot;label&amp;quot; : &amp;quot;&#039;${tissue}&#039;_coverage&amp;quot;,&lt;br /&gt;
                &amp;quot;key&amp;quot; : &amp;quot;&#039;${tissue}&#039;_coverage&amp;quot;,&lt;br /&gt;
                &amp;quot;storeClass&amp;quot; : &amp;quot;JBrowse/Store/SeqFeature/BAM&amp;quot;,&lt;br /&gt;
                &amp;quot;urlTemplate&amp;quot; : &amp;quot;../track_symlinks/&#039;${tissue}&#039;&amp;quot;,&lt;br /&gt;
                &amp;quot;category&amp;quot; : &amp;quot;3. RNA-seq alignments&amp;quot;,&lt;br /&gt;
                &amp;quot;type&amp;quot; : &amp;quot;SNPCoverage&amp;quot;&lt;br /&gt;
        }&#039; | bin/add-track-json.pl data/trackList.json&lt;br /&gt;
&lt;br /&gt;
done&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Make sure the BAM file can be read by everybody; if not, use:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
chmod +r &amp;lt;BAM_file&amp;gt;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Make sure that all directories in the full path of the BAM file are executable:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
chmod +x &amp;lt;dir&amp;gt;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Variant tracks (VCF)===&lt;br /&gt;
&lt;br /&gt;
To load a VCF file in JBrowse, make sure the file is compressed with bgzip and indexed with tabix.&lt;br /&gt;
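&lt;br /&gt;
If the VCF is not yet compressed, a minimal sketch to prepare it (assuming bgzip from the same package as tabix is installed):&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
# compress the VCF with bgzip so that tabix can index it&lt;br /&gt;
bgzip Gallus_gallus_incl_consequences.vcf&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;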
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
tabix -p vcf Gallus_gallus_incl_consequences.vcf.gz&lt;br /&gt;
&lt;br /&gt;
echo &#039; {&lt;br /&gt;
       &amp;quot;label&amp;quot; : &amp;quot;Gallus_gallus_incl_consequences&amp;quot;,&lt;br /&gt;
       &amp;quot;key&amp;quot; : &amp;quot;Gallus_gallus_incl_consequences&amp;quot;,&lt;br /&gt;
       &amp;quot;storeClass&amp;quot; : &amp;quot;JBrowse/Store/SeqFeature/VCFTabix&amp;quot;,&lt;br /&gt;
       &amp;quot;urlTemplate&amp;quot; : &amp;quot;../../ensembl_data/VCF/Gallus_gallus_incl_consequences.vcf.gz&amp;quot;,&lt;br /&gt;
       &amp;quot;category&amp;quot; : &amp;quot;2. Variants&amp;quot;,&lt;br /&gt;
       &amp;quot;type&amp;quot; : &amp;quot;HTMLVariants&amp;quot;&lt;br /&gt;
     } &#039; | bin/add-track-json.pl data/trackList.json&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Wiggle/BigWig tracks (WIG)===&lt;br /&gt;
&lt;br /&gt;
You can load single BigWig files with the following command:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
bin/add-bw-track.pl --label &amp;lt;label&amp;gt; --bw_url &amp;lt;url&amp;gt;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To load multiple BigWig files present in a certain directory use:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
for bw in /lustre/nobackup/WUR/ABGC/shared/Chicken/Ensembl_Rna_Seq/*.bw; do&lt;br /&gt;
        ln -s $bw track_symlinks/&lt;br /&gt;
        tissue=`echo $bw | rev | cut -c 8- | cut -d&#039;/&#039; -f1 | rev`&lt;br /&gt;
        echo &#039;{&lt;br /&gt;
                &amp;quot;label&amp;quot; : &amp;quot;&#039;${tissue}&#039;_BWcoverage&amp;quot;,&lt;br /&gt;
                &amp;quot;key&amp;quot; : &amp;quot;&#039;${tissue}&#039;_BWcoverage&amp;quot;,&lt;br /&gt;
                &amp;quot;storeClass&amp;quot; : &amp;quot;BigWig&amp;quot;,&lt;br /&gt;
                &amp;quot;urlTemplate&amp;quot; : &amp;quot;../track_symlinks/&#039;${tissue}.bam.bw&#039;&amp;quot;,&lt;br /&gt;
                &amp;quot;category&amp;quot; : &amp;quot;3. RNA-seq alignments&amp;quot;,&lt;br /&gt;
                &amp;quot;type&amp;quot; : &amp;quot;JBrowse/View/Track/Wiggle/XYPlot&amp;quot;,&lt;br /&gt;
                &amp;quot;variance_band&amp;quot; : true&lt;br /&gt;
        }&#039; | bin/add-track-json.pl data/trackList.json&lt;br /&gt;
done&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Make sure the BigWig file can be read by everybody; if not, use:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
chmod +r &amp;lt;BAM_file&amp;gt;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Make sure that all directories in the full path of the BigWig file are executable:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
chmod +x &amp;lt;dir&amp;gt;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Evidence tracks===&lt;br /&gt;
&lt;br /&gt;
Evidence tracks can be loaded in bed, gff and gbk format using &lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
bin/flatfile-to-json.pl&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Examples are given above.&lt;/div&gt;</summary>
		<author><name>Megen002</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=JBrowse&amp;diff=1750</id>
		<title>JBrowse</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=JBrowse&amp;diff=1750"/>
		<updated>2016-12-16T12:13:40Z</updated>

		<summary type="html">&lt;p&gt;Megen002: /* Install JBrowse */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== Typical commands used to set up a JBrowse === &lt;br /&gt;
&lt;br /&gt;
Author: Martijn Derks&lt;br /&gt;
&lt;br /&gt;
* JBrowse is available for multiple species:&lt;br /&gt;
** https://jbrowse.hpcagrogenomics.wur.nl/pig/&lt;br /&gt;
** https://jbrowse.hpcagrogenomics.wur.nl/chicken/&lt;br /&gt;
** https://jbrowse.hpcagrogenomics.wur.nl/cattle/&lt;br /&gt;
** https://jbrowse.hpcagrogenomics.wur.nl/turkey/&lt;br /&gt;
* Users are free to add useful commands to this tutorial&lt;br /&gt;
&lt;br /&gt;
=== Install JBrowse ===&lt;br /&gt;
&lt;br /&gt;
Download the latest JBrowse here: http://jbrowse.org/&lt;br /&gt;
&lt;br /&gt;
Make a directory in &amp;lt;code&amp;gt;/cm/shared/apps/jbrowse/&amp;lt;/code&amp;gt; for your species of interest (e.g. &amp;lt;code&amp;gt;mkdir Cyprinus_carpio&amp;lt;/code&amp;gt;). Move the downloaded JBrowse source files there. All further procedures detailed in this Wiki page assume working from that directory (NOTE: if your species of interest is already there, contact the maintainer of that JBrowse instance).&lt;br /&gt;
Run the setup script to install the Perl dependencies and required modules:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
unzip JBrowse-1.12.0.zip&lt;br /&gt;
cd JBrowse-1.12.0&lt;br /&gt;
./setup.sh&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Add reference sequence ===&lt;br /&gt;
&lt;br /&gt;
Example code for chicken genome&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
bin/prepare-refseqs.pl --fasta /lustre/nobackup/WUR/ABGC/shared/public_data_store/genomes/chicken/Ensembl74/Gallus_gallus.Galgal4.74.dna.toplevel.fa&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To remove tracks, use the following command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
bin/remove-track.pl -D --trackLabel &#039;trackname&#039;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Add annotation files (GFF/BED)===&lt;br /&gt;
&lt;br /&gt;
Data can be downloaded from the Ensembl FTP site: http://www.ensembl.org/info/data/ftp/index.html&lt;br /&gt;
&lt;br /&gt;
Add gene features:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
bin/flatfile-to-json.pl --key &amp;quot;Gene spans&amp;quot; --className feature5 --type gene --noSubfeatures --config &#039;{ &amp;quot;category&amp;quot;: &amp;quot;GalGal4.83 Annotation&amp;quot; }&#039; --trackLabel Genes --gff ../ensembl_data/Gallus_gallus.Galgal4.83.gff3&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Add corresponding transcripts:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
bin/flatfile-to-json.pl --key &amp;quot;Transcripts&amp;quot; --className transcript --subfeatureClasses &#039;{&amp;quot;exon&amp;quot;: &amp;quot;exon&amp;quot;, &amp;quot;CDS&amp;quot;: &amp;quot;CDS&amp;quot;, &amp;quot;five_prime_UTR&amp;quot;: &amp;quot;five_prime_UTR&amp;quot;, &amp;quot;three_prime_UTR&amp;quot;: &amp;quot;three_prime_UTR&amp;quot;}&#039; --config &#039;{ &amp;quot;category&amp;quot;: &amp;quot;GalGal4.83 Annotation&amp;quot; }&#039; --type transcript --trackLabel Transcripts --gff ../ensembl_data/Gallus_gallus.Galgal4.83.gff3&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Alignment tracks (BAM)===&lt;br /&gt;
&lt;br /&gt;
You can load a single BAM file with the following command:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
bin/add-bam-track --label &amp;lt;label&amp;gt; --bam_url &amp;lt;url&amp;gt;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
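&lt;br /&gt;
For example, a hypothetical invocation (the label and URL below are placeholders, not files that necessarily exist on the cluster) might look like:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
## Add one BAM file as an alignment track, using the same options as above&lt;br /&gt;
bin/add-bam-track --label liver_alignment --bam_url ../track_symlinks/liver.bam&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;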
&lt;br /&gt;
To load multiple BAM files from a directory, use:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
for bam in /&amp;lt;dir&amp;gt;/*.bam; do&lt;br /&gt;
        ln -s $bam track_symlinks/ ## Make symlinks to the BAM files&lt;br /&gt;
        ln -s $bam.bai track_symlinks/ ## Make symlinks to the BAM index files&lt;br /&gt;
        tissue=`echo $bam | rev | cut -c 5- | cut -d&#039;/&#039; -f1 | rev` ## Use the name of the file without .bam as the trackLabel&lt;br /&gt;
        &lt;br /&gt;
        ## Add BAM in alignment mode (Alignments2)&lt;br /&gt;
        echo &#039;{&lt;br /&gt;
                &amp;quot;label&amp;quot; : &amp;quot;&#039;${tissue}&#039;_alignment&amp;quot;,&lt;br /&gt;
                &amp;quot;key&amp;quot; : &amp;quot;&#039;${tissue}&#039;_alignment&amp;quot;,&lt;br /&gt;
                &amp;quot;storeClass&amp;quot; : &amp;quot;JBrowse/Store/SeqFeature/BAM&amp;quot;,&lt;br /&gt;
                &amp;quot;urlTemplate&amp;quot; : &amp;quot;../track_symlinks/&#039;${tissue}&#039;&amp;quot;,&lt;br /&gt;
                &amp;quot;category&amp;quot; : &amp;quot;3. RNA-seq alignments&amp;quot;,&lt;br /&gt;
                &amp;quot;type&amp;quot; : &amp;quot;Alignments2&amp;quot;&lt;br /&gt;
        }&#039; | bin/add-track-json.pl data/trackList.json&lt;br /&gt;
&lt;br /&gt;
        ## Add BAM in coverage mode (SNPCoverage)&lt;br /&gt;
        echo &#039;{&lt;br /&gt;
                &amp;quot;label&amp;quot; : &amp;quot;&#039;${tissue}&#039;_coverage&amp;quot;,&lt;br /&gt;
                &amp;quot;key&amp;quot; : &amp;quot;&#039;${tissue}&#039;_coverage&amp;quot;,&lt;br /&gt;
                &amp;quot;storeClass&amp;quot; : &amp;quot;JBrowse/Store/SeqFeature/BAM&amp;quot;,&lt;br /&gt;
                &amp;quot;urlTemplate&amp;quot; : &amp;quot;../track_symlinks/&#039;${tissue}&#039;&amp;quot;,&lt;br /&gt;
                &amp;quot;category&amp;quot; : &amp;quot;3. RNA-seq alignments&amp;quot;,&lt;br /&gt;
                &amp;quot;type&amp;quot; : &amp;quot;SNPCoverage&amp;quot;&lt;br /&gt;
        }&#039; | bin/add-track-json.pl data/trackList.json&lt;br /&gt;
&lt;br /&gt;
done&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Make sure the BAM file can be read by everybody; if it cannot, use:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
chmod +r &amp;lt;BAM_file&amp;gt;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Make sure that all directories in the full path of the BAM file are executable:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
chmod +x &amp;lt;dir&amp;gt;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Variant tracks (VCF)===&lt;br /&gt;
&lt;br /&gt;
To load a VCF file in JBrowse, make sure the file is compressed with bgzip and indexed with tabix:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
tabix -p vcf Gallus_gallus_incl_consequences.vcf.gz&lt;br /&gt;
&lt;br /&gt;
echo &#039; {&lt;br /&gt;
       &amp;quot;label&amp;quot; : &amp;quot;Gallus_gallus_incl_consequences&amp;quot;,&lt;br /&gt;
       &amp;quot;key&amp;quot; : &amp;quot;Gallus_gallus_incl_consequences&amp;quot;,&lt;br /&gt;
       &amp;quot;storeClass&amp;quot; : &amp;quot;JBrowse/Store/SeqFeature/VCFTabix&amp;quot;,&lt;br /&gt;
       &amp;quot;urlTemplate&amp;quot; : &amp;quot;../../ensembl_data/VCF/Gallus_gallus_incl_consequences.vcf.gz&amp;quot;,&lt;br /&gt;
       &amp;quot;category&amp;quot; : &amp;quot;2. Variants&amp;quot;,&lt;br /&gt;
       &amp;quot;type&amp;quot; : &amp;quot;HTMLVariants&amp;quot;&lt;br /&gt;
     } &#039; | bin/add-track-json.pl data/trackList.json&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Wiggle/BigWig tracks (WIG)===&lt;br /&gt;
&lt;br /&gt;
You can load a single BigWig file with the following command:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
bin/add-bam-track --label &amp;lt;label&amp;gt; --bam_url &amp;lt;url&amp;gt;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To load multiple BigWig files from a directory, use:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
for bw in /lustre/nobackup/WUR/ABGC/shared/Chicken/Ensembl_Rna_Seq/*.bw; do&lt;br /&gt;
        ln -s $bw track_symlinks/&lt;br /&gt;
        tissue=`echo $bw | rev | cut -c 8- | cut -d&#039;/&#039; -f1 | rev`&lt;br /&gt;
        echo &#039;{&lt;br /&gt;
                &amp;quot;label&amp;quot; : &amp;quot;&#039;${tissue}&#039;_BWcoverage&amp;quot;,&lt;br /&gt;
                &amp;quot;key&amp;quot; : &amp;quot;&#039;${tissue}&#039;_BWcoverage&amp;quot;,&lt;br /&gt;
                &amp;quot;storeClass&amp;quot; : &amp;quot;BigWig&amp;quot;,&lt;br /&gt;
                &amp;quot;urlTemplate&amp;quot; : &amp;quot;../track_symlinks/&#039;${tissue}.bam.bw&#039;&amp;quot;,&lt;br /&gt;
                &amp;quot;category&amp;quot; : &amp;quot;3. RNA-seq alignments&amp;quot;,&lt;br /&gt;
                &amp;quot;type&amp;quot; : &amp;quot;JBrowse/View/Track/Wiggle/XYPlot&amp;quot;,&lt;br /&gt;
                &amp;quot;variance_band&amp;quot; : true&lt;br /&gt;
        }&#039; | bin/add-track-json.pl data/trackList.json&lt;br /&gt;
done&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Make sure the file can be read by everybody; if it cannot, use:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
chmod +r &amp;lt;BAM_file&amp;gt;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Make sure that all directories in the full path of the file are executable:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
chmod +x &amp;lt;dir&amp;gt;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Evidence tracks===&lt;br /&gt;
&lt;br /&gt;
Evidence tracks can be loaded in BED, GFF, and GBK format using&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
bin/flatfile-to-json.pl&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
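&lt;br /&gt;
As a sketch, a hypothetical BED-based evidence track (the file name, label, and category below are placeholders) could be added analogously to the GFF examples above:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
## Load a BED file as an evidence track&lt;br /&gt;
bin/flatfile-to-json.pl --bed ../evidence/my_features.bed --trackLabel MyEvidence --key &amp;quot;My evidence&amp;quot; --config &#039;{ &amp;quot;category&amp;quot;: &amp;quot;4. Evidence&amp;quot; }&#039;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;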
&lt;br /&gt;
Examples are given above.&lt;/div&gt;</summary>
		<author><name>Megen002</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=JBrowse&amp;diff=1749</id>
		<title>JBrowse</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=JBrowse&amp;diff=1749"/>
		<updated>2016-12-16T12:13:04Z</updated>

		<summary type="html">&lt;p&gt;Megen002: /* Install JBrowse */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== Typical commands used to set up a JBrowse === &lt;br /&gt;
&lt;br /&gt;
Author: Martijn Derks&lt;br /&gt;
&lt;br /&gt;
* JBrowse is available for multiple species:&lt;br /&gt;
** https://jbrowse.hpcagrogenomics.wur.nl/pig/&lt;br /&gt;
** https://jbrowse.hpcagrogenomics.wur.nl/chicken/&lt;br /&gt;
** https://jbrowse.hpcagrogenomics.wur.nl/cattle/&lt;br /&gt;
** https://jbrowse.hpcagrogenomics.wur.nl/turkey/&lt;br /&gt;
* Users are free to add useful commands to this tutorial&lt;br /&gt;
&lt;br /&gt;
=== Install JBrowse ===&lt;br /&gt;
&lt;br /&gt;
Download the latest JBrowse here: http://jbrowse.org/&lt;br /&gt;
&lt;br /&gt;
Make a directory in &amp;lt;code&amp;gt;/cm/shared/apps/jbrowse/&amp;lt;/code&amp;gt; for your species of interest (e.g. Cyprinus_carpio). Move the downloaded JBrowse source files there. All further procedures detailed in this Wiki page assume working from that directory (NOTE: if your species of interest is already there, contact the maintainer of that JBrowse instance).&lt;br /&gt;
Run the setup script to install the Perl dependencies and required modules:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
unzip JBrowse-1.12.0.zip&lt;br /&gt;
cd JBrowse-1.12.0&lt;br /&gt;
./setup.sh&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Add reference sequence ===&lt;br /&gt;
&lt;br /&gt;
Example code for chicken genome&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
bin/prepare-refseqs.pl --fasta /lustre/nobackup/WUR/ABGC/shared/public_data_store/genomes/chicken/Ensembl74/Gallus_gallus.Galgal4.74.dna.toplevel.fa&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To remove tracks, use the following command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
bin/remove-track.pl -D --trackLabel &#039;trackname&#039;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Add annotation files (GFF/BED)===&lt;br /&gt;
&lt;br /&gt;
Data can be downloaded from the Ensembl FTP site: http://www.ensembl.org/info/data/ftp/index.html&lt;br /&gt;
&lt;br /&gt;
Add gene features:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
bin/flatfile-to-json.pl --key &amp;quot;Gene spans&amp;quot; --className feature5 --type gene --noSubfeatures --config &#039;{ &amp;quot;category&amp;quot;: &amp;quot;GalGal4.83 Annotation&amp;quot; }&#039; --trackLabel Genes --gff ../ensembl_data/Gallus_gallus.Galgal4.83.gff3&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Add corresponding transcripts:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
bin/flatfile-to-json.pl --key &amp;quot;Transcripts&amp;quot; --className transcript --subfeatureClasses &#039;{&amp;quot;exon&amp;quot;: &amp;quot;exon&amp;quot;, &amp;quot;CDS&amp;quot;: &amp;quot;CDS&amp;quot;, &amp;quot;five_prime_UTR&amp;quot;: &amp;quot;five_prime_UTR&amp;quot;, &amp;quot;three_prime_UTR&amp;quot;: &amp;quot;three_prime_UTR&amp;quot;}&#039; --config &#039;{ &amp;quot;category&amp;quot;: &amp;quot;GalGal4.83 Annotation&amp;quot; }&#039; --type transcript --trackLabel Transcripts --gff ../ensembl_data/Gallus_gallus.Galgal4.83.gff3&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Alignment tracks (BAM)===&lt;br /&gt;
&lt;br /&gt;
You can load a single BAM file with the following command:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
bin/add-bam-track --label &amp;lt;label&amp;gt; --bam_url &amp;lt;url&amp;gt;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To load multiple BAM files from a directory, use:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
for bam in /&amp;lt;dir&amp;gt;/*.bam; do&lt;br /&gt;
        ln -s $bam track_symlinks/ ## Make symlinks to the BAM files&lt;br /&gt;
        ln -s $bam.bai track_symlinks/ ## Make symlinks to the BAM index files&lt;br /&gt;
        tissue=`echo $bam | rev | cut -c 5- | cut -d&#039;/&#039; -f1 | rev` ## Use the name of the file without .bam as the trackLabel&lt;br /&gt;
        &lt;br /&gt;
        ## Add BAM in alignment mode (Alignments2)&lt;br /&gt;
        echo &#039;{&lt;br /&gt;
                &amp;quot;label&amp;quot; : &amp;quot;&#039;${tissue}&#039;_alignment&amp;quot;,&lt;br /&gt;
                &amp;quot;key&amp;quot; : &amp;quot;&#039;${tissue}&#039;_alignment&amp;quot;,&lt;br /&gt;
                &amp;quot;storeClass&amp;quot; : &amp;quot;JBrowse/Store/SeqFeature/BAM&amp;quot;,&lt;br /&gt;
                &amp;quot;urlTemplate&amp;quot; : &amp;quot;../track_symlinks/&#039;${tissue}&#039;&amp;quot;,&lt;br /&gt;
                &amp;quot;category&amp;quot; : &amp;quot;3. RNA-seq alignments&amp;quot;,&lt;br /&gt;
                &amp;quot;type&amp;quot; : &amp;quot;Alignments2&amp;quot;&lt;br /&gt;
        }&#039; | bin/add-track-json.pl data/trackList.json&lt;br /&gt;
&lt;br /&gt;
        ## Add BAM in coverage mode (SNPCoverage)&lt;br /&gt;
        echo &#039;{&lt;br /&gt;
                &amp;quot;label&amp;quot; : &amp;quot;&#039;${tissue}&#039;_coverage&amp;quot;,&lt;br /&gt;
                &amp;quot;key&amp;quot; : &amp;quot;&#039;${tissue}&#039;_coverage&amp;quot;,&lt;br /&gt;
                &amp;quot;storeClass&amp;quot; : &amp;quot;JBrowse/Store/SeqFeature/BAM&amp;quot;,&lt;br /&gt;
                &amp;quot;urlTemplate&amp;quot; : &amp;quot;../track_symlinks/&#039;${tissue}&#039;&amp;quot;,&lt;br /&gt;
                &amp;quot;category&amp;quot; : &amp;quot;3. RNA-seq alignments&amp;quot;,&lt;br /&gt;
                &amp;quot;type&amp;quot; : &amp;quot;SNPCoverage&amp;quot;&lt;br /&gt;
        }&#039; | bin/add-track-json.pl data/trackList.json&lt;br /&gt;
&lt;br /&gt;
done&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Make sure the BAM file can be read by everybody; if it cannot, use:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
chmod +r &amp;lt;BAM_file&amp;gt;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Make sure that all directories in the full path of the BAM file are executable:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
chmod +x &amp;lt;dir&amp;gt;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
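&lt;br /&gt;
If the BAM file sits deep inside a directory tree, a rough sketch for adding execute permission along the whole path (the directory below is only an illustration; you can only change permissions on directories you own) is:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
## Walk up from the directory containing the BAM file and make each level traversable&lt;br /&gt;
d=/lustre/nobackup/WUR/ABGC/shared/Chicken/my_bam_dir&lt;br /&gt;
while [ &amp;quot;$d&amp;quot; != &amp;quot;/&amp;quot; ]; do&lt;br /&gt;
        chmod +x &amp;quot;$d&amp;quot;&lt;br /&gt;
        d=$(dirname &amp;quot;$d&amp;quot;)&lt;br /&gt;
done&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;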
&lt;br /&gt;
===Variant tracks (VCF)===&lt;br /&gt;
&lt;br /&gt;
To load a VCF file in JBrowse, make sure the file is compressed with bgzip and indexed with tabix:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
tabix -p vcf Gallus_gallus_incl_consequences.vcf.gz&lt;br /&gt;
&lt;br /&gt;
echo &#039; {&lt;br /&gt;
       &amp;quot;label&amp;quot; : &amp;quot;Gallus_gallus_incl_consequences&amp;quot;,&lt;br /&gt;
       &amp;quot;key&amp;quot; : &amp;quot;Gallus_gallus_incl_consequences&amp;quot;,&lt;br /&gt;
       &amp;quot;storeClass&amp;quot; : &amp;quot;JBrowse/Store/SeqFeature/VCFTabix&amp;quot;,&lt;br /&gt;
       &amp;quot;urlTemplate&amp;quot; : &amp;quot;../../ensembl_data/VCF/Gallus_gallus_incl_consequences.vcf.gz&amp;quot;,&lt;br /&gt;
       &amp;quot;category&amp;quot; : &amp;quot;2. Variants&amp;quot;,&lt;br /&gt;
       &amp;quot;type&amp;quot; : &amp;quot;HTMLVariants&amp;quot;&lt;br /&gt;
     } &#039; | bin/add-track-json.pl data/trackList.json&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Wiggle/BigWig tracks (WIG)===&lt;br /&gt;
&lt;br /&gt;
You can load a single BigWig file with the following command:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
bin/add-bam-track --label &amp;lt;label&amp;gt; --bam_url &amp;lt;url&amp;gt;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To load multiple BigWig files from a directory, use:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
for bw in /lustre/nobackup/WUR/ABGC/shared/Chicken/Ensembl_Rna_Seq/*.bw; do&lt;br /&gt;
        ln -s $bw track_symlinks/&lt;br /&gt;
        tissue=`echo $bw | rev | cut -c 8- | cut -d&#039;/&#039; -f1 | rev`&lt;br /&gt;
        echo &#039;{&lt;br /&gt;
                &amp;quot;label&amp;quot; : &amp;quot;&#039;${tissue}&#039;_BWcoverage&amp;quot;,&lt;br /&gt;
                &amp;quot;key&amp;quot; : &amp;quot;&#039;${tissue}&#039;_BWcoverage&amp;quot;,&lt;br /&gt;
                &amp;quot;storeClass&amp;quot; : &amp;quot;BigWig&amp;quot;,&lt;br /&gt;
                &amp;quot;urlTemplate&amp;quot; : &amp;quot;../track_symlinks/&#039;${tissue}.bam.bw&#039;&amp;quot;,&lt;br /&gt;
                &amp;quot;category&amp;quot; : &amp;quot;3. RNA-seq alignments&amp;quot;,&lt;br /&gt;
                &amp;quot;type&amp;quot; : &amp;quot;JBrowse/View/Track/Wiggle/XYPlot&amp;quot;,&lt;br /&gt;
                &amp;quot;variance_band&amp;quot; : true&lt;br /&gt;
        }&#039; | bin/add-track-json.pl data/trackList.json&lt;br /&gt;
done&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Make sure the file can be read by everybody; if it cannot, use:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
chmod +r &amp;lt;BAM_file&amp;gt;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Make sure that all directories in the full path of the file are executable:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
chmod +x &amp;lt;dir&amp;gt;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Evidence tracks===&lt;br /&gt;
&lt;br /&gt;
Evidence tracks can be loaded in BED, GFF, and GBK format using&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
bin/flatfile-to-json.pl&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Examples are given above.&lt;/div&gt;</summary>
		<author><name>Megen002</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=JBrowse&amp;diff=1748</id>
		<title>JBrowse</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=JBrowse&amp;diff=1748"/>
		<updated>2016-12-16T12:08:54Z</updated>

		<summary type="html">&lt;p&gt;Megen002: /* Install JBrowse */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== Typical commands used to set up a JBrowse === &lt;br /&gt;
&lt;br /&gt;
Author: Martijn Derks&lt;br /&gt;
&lt;br /&gt;
* JBrowse is available for multiple species:&lt;br /&gt;
** https://jbrowse.hpcagrogenomics.wur.nl/pig/&lt;br /&gt;
** https://jbrowse.hpcagrogenomics.wur.nl/chicken/&lt;br /&gt;
** https://jbrowse.hpcagrogenomics.wur.nl/cattle/&lt;br /&gt;
** https://jbrowse.hpcagrogenomics.wur.nl/turkey/&lt;br /&gt;
* Users are free to add useful commands to this tutorial&lt;br /&gt;
&lt;br /&gt;
=== Install JBrowse ===&lt;br /&gt;
&lt;br /&gt;
Download the latest JBrowse here: http://jbrowse.org/&lt;br /&gt;
&lt;br /&gt;
Run the setup script to install the Perl dependencies and required modules:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
unzip JBrowse-1.12.0.zip&lt;br /&gt;
cd JBrowse-1.12.0&lt;br /&gt;
./setup.sh&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Add reference sequence ===&lt;br /&gt;
&lt;br /&gt;
Example code for chicken genome&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
bin/prepare-refseqs.pl --fasta /lustre/nobackup/WUR/ABGC/shared/public_data_store/genomes/chicken/Ensembl74/Gallus_gallus.Galgal4.74.dna.toplevel.fa&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To remove tracks, use the following command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
bin/remove-track.pl -D --trackLabel &#039;trackname&#039;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Add annotation files (GFF/BED)===&lt;br /&gt;
&lt;br /&gt;
Data can be downloaded from the Ensembl FTP site: http://www.ensembl.org/info/data/ftp/index.html&lt;br /&gt;
&lt;br /&gt;
Add gene features:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
bin/flatfile-to-json.pl --key &amp;quot;Gene spans&amp;quot; --className feature5 --type gene --noSubfeatures --config &#039;{ &amp;quot;category&amp;quot;: &amp;quot;GalGal4.83 Annotation&amp;quot; }&#039; --trackLabel Genes --gff ../ensembl_data/Gallus_gallus.Galgal4.83.gff3&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Add corresponding transcripts:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
bin/flatfile-to-json.pl --key &amp;quot;Transcripts&amp;quot; --className transcript --subfeatureClasses &#039;{&amp;quot;exon&amp;quot;: &amp;quot;exon&amp;quot;, &amp;quot;CDS&amp;quot;: &amp;quot;CDS&amp;quot;, &amp;quot;five_prime_UTR&amp;quot;: &amp;quot;five_prime_UTR&amp;quot;, &amp;quot;three_prime_UTR&amp;quot;: &amp;quot;three_prime_UTR&amp;quot;}&#039; --config &#039;{ &amp;quot;category&amp;quot;: &amp;quot;GalGal4.83 Annotation&amp;quot; }&#039; --type transcript --trackLabel Transcripts --gff ../ensembl_data/Gallus_gallus.Galgal4.83.gff3&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Alignment tracks (BAM)===&lt;br /&gt;
&lt;br /&gt;
You can load a single BAM file with the following command:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
bin/add-bam-track --label &amp;lt;label&amp;gt; --bam_url &amp;lt;url&amp;gt;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To load multiple BAM files from a directory, use:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
for bam in /&amp;lt;dir&amp;gt;/*.bam; do&lt;br /&gt;
        ln -s $bam track_symlinks/ ## Make symlinks to the BAM files&lt;br /&gt;
        ln -s $bam.bai track_symlinks/ ## Make symlinks to the BAM index files&lt;br /&gt;
        tissue=`echo $bam | rev | cut -c 5- | cut -d&#039;/&#039; -f1 | rev` ## Use the name of the file without .bam as the trackLabel&lt;br /&gt;
        &lt;br /&gt;
        ## Add BAM in alignment mode (Alignments2)&lt;br /&gt;
        echo &#039;{&lt;br /&gt;
                &amp;quot;label&amp;quot; : &amp;quot;&#039;${tissue}&#039;_alignment&amp;quot;,&lt;br /&gt;
                &amp;quot;key&amp;quot; : &amp;quot;&#039;${tissue}&#039;_alignment&amp;quot;,&lt;br /&gt;
                &amp;quot;storeClass&amp;quot; : &amp;quot;JBrowse/Store/SeqFeature/BAM&amp;quot;,&lt;br /&gt;
                &amp;quot;urlTemplate&amp;quot; : &amp;quot;../track_symlinks/&#039;${tissue}&#039;&amp;quot;,&lt;br /&gt;
                &amp;quot;category&amp;quot; : &amp;quot;3. RNA-seq alignments&amp;quot;,&lt;br /&gt;
                &amp;quot;type&amp;quot; : &amp;quot;Alignments2&amp;quot;&lt;br /&gt;
        }&#039; | bin/add-track-json.pl data/trackList.json&lt;br /&gt;
&lt;br /&gt;
        ## Add BAM in coverage mode (SNPCoverage)&lt;br /&gt;
        echo &#039;{&lt;br /&gt;
                &amp;quot;label&amp;quot; : &amp;quot;&#039;${tissue}&#039;_coverage&amp;quot;,&lt;br /&gt;
                &amp;quot;key&amp;quot; : &amp;quot;&#039;${tissue}&#039;_coverage&amp;quot;,&lt;br /&gt;
                &amp;quot;storeClass&amp;quot; : &amp;quot;JBrowse/Store/SeqFeature/BAM&amp;quot;,&lt;br /&gt;
                &amp;quot;urlTemplate&amp;quot; : &amp;quot;../track_symlinks/&#039;${tissue}&#039;&amp;quot;,&lt;br /&gt;
                &amp;quot;category&amp;quot; : &amp;quot;3. RNA-seq alignments&amp;quot;,&lt;br /&gt;
                &amp;quot;type&amp;quot; : &amp;quot;SNPCoverage&amp;quot;&lt;br /&gt;
        }&#039; | bin/add-track-json.pl data/trackList.json&lt;br /&gt;
&lt;br /&gt;
done&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
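&lt;br /&gt;
The &amp;lt;code&amp;gt;rev | cut | rev&amp;lt;/code&amp;gt; pipeline above only strips the directory part and the &amp;lt;code&amp;gt;.bam&amp;lt;/code&amp;gt; extension from the file name; an equivalent, arguably more readable sketch uses &amp;lt;code&amp;gt;basename&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
## Derive the track label from the BAM file name without the .bam extension&lt;br /&gt;
tissue=$(basename &amp;quot;$bam&amp;quot; .bam)&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;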
&lt;br /&gt;
Make sure the BAM file can be read by everybody; if it cannot, use:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
chmod +r &amp;lt;BAM_file&amp;gt;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Make sure that all directories in the full path of the BAM file are executable:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
chmod +x &amp;lt;dir&amp;gt;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Variant tracks (VCF)===&lt;br /&gt;
&lt;br /&gt;
To load a VCF file in JBrowse, make sure the file is compressed with bgzip and indexed with tabix:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
tabix -p vcf Gallus_gallus_incl_consequences.vcf.gz&lt;br /&gt;
&lt;br /&gt;
echo &#039; {&lt;br /&gt;
       &amp;quot;label&amp;quot; : &amp;quot;Gallus_gallus_incl_consequences&amp;quot;,&lt;br /&gt;
       &amp;quot;key&amp;quot; : &amp;quot;Gallus_gallus_incl_consequences&amp;quot;,&lt;br /&gt;
       &amp;quot;storeClass&amp;quot; : &amp;quot;JBrowse/Store/SeqFeature/VCFTabix&amp;quot;,&lt;br /&gt;
       &amp;quot;urlTemplate&amp;quot; : &amp;quot;../../ensembl_data/VCF/Gallus_gallus_incl_consequences.vcf.gz&amp;quot;,&lt;br /&gt;
       &amp;quot;category&amp;quot; : &amp;quot;2. Variants&amp;quot;,&lt;br /&gt;
       &amp;quot;type&amp;quot; : &amp;quot;HTMLVariants&amp;quot;&lt;br /&gt;
     } &#039; | bin/add-track-json.pl data/trackList.json&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Wiggle/BigWig tracks (WIG)===&lt;br /&gt;
&lt;br /&gt;
You can load a single BigWig file with the following command:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
bin/add-bam-track --label &amp;lt;label&amp;gt; --bam_url &amp;lt;url&amp;gt;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To load multiple BigWig files from a directory, use:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
for bw in /lustre/nobackup/WUR/ABGC/shared/Chicken/Ensembl_Rna_Seq/*.bw; do&lt;br /&gt;
        ln -s $bw track_symlinks/&lt;br /&gt;
        tissue=`echo $bw | rev | cut -c 8- | cut -d&#039;/&#039; -f1 | rev`&lt;br /&gt;
        echo &#039;{&lt;br /&gt;
                &amp;quot;label&amp;quot; : &amp;quot;&#039;${tissue}&#039;_BWcoverage&amp;quot;,&lt;br /&gt;
                &amp;quot;key&amp;quot; : &amp;quot;&#039;${tissue}&#039;_BWcoverage&amp;quot;,&lt;br /&gt;
                &amp;quot;storeClass&amp;quot; : &amp;quot;BigWig&amp;quot;,&lt;br /&gt;
                &amp;quot;urlTemplate&amp;quot; : &amp;quot;../track_symlinks/&#039;${tissue}.bam.bw&#039;&amp;quot;,&lt;br /&gt;
                &amp;quot;category&amp;quot; : &amp;quot;3. RNA-seq alignments&amp;quot;,&lt;br /&gt;
                &amp;quot;type&amp;quot; : &amp;quot;JBrowse/View/Track/Wiggle/XYPlot&amp;quot;,&lt;br /&gt;
                &amp;quot;variance_band&amp;quot; : true&lt;br /&gt;
        }&#039; | bin/add-track-json.pl data/trackList.json&lt;br /&gt;
done&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Make sure the file can be read by everybody; if it cannot, use:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
chmod +r &amp;lt;BAM_file&amp;gt;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Make sure that all directories in the full path of the file are executable:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
chmod +x &amp;lt;dir&amp;gt;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Evidence tracks===&lt;br /&gt;
&lt;br /&gt;
Evidence tracks can be loaded in BED, GFF, and GBK format using&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
bin/flatfile-to-json.pl&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Examples are given above.&lt;/div&gt;</summary>
		<author><name>Megen002</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=Virtual_environment_Python_3.4_or_higher&amp;diff=1699</id>
		<title>Virtual environment Python 3.4 or higher</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=Virtual_environment_Python_3.4_or_higher&amp;diff=1699"/>
		<updated>2016-01-22T08:58:28Z</updated>

		<summary type="html">&lt;p&gt;Megen002: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;With the large number of Python packages available, which often conflict with one another or require different versions depending on the application, installing and managing packages and versions is not always easy. In addition, many packages are used only occasionally, so it is questionable whether a system administrator of a centralized server system or a High Performance Compute (HPC) infrastructure can be expected to resolve every issue its users run into. Even on a local system with full administrative rights, managing versions, dependencies, and package collisions is often difficult. The solution is to use a virtual environment, in which a specific set of packages can be installed. As many different virtual environments can be created, and used side by side, as necessary.&lt;br /&gt;
&lt;br /&gt;
== creating a new virtual environment ==&lt;br /&gt;
&lt;br /&gt;
If you do not already have a directory in your &amp;lt;code&amp;gt;$HOME&amp;lt;/code&amp;gt; dir where your virtual environments live, first make one (it is assumed that you will over the course of time create several virtual environments for different projects and different versions of Python side-by-side, best to organise them a bit).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
mkdir ~/my_envs&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then load either the Python 3.4 or the 3.5 module (Python 3.3.3 should also work):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
module load python/3.5.0&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
And then simply create an environment with a reasonably descriptive name (remember, you may accumulate as many as you desire), in this example &amp;lt;code&amp;gt;p35_myproj&amp;lt;/code&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
pyvenv ~/my_envs/p35_myproj&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
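&lt;br /&gt;
On newer Python versions, where the &amp;lt;code&amp;gt;pyvenv&amp;lt;/code&amp;gt; wrapper script is deprecated, the built-in &amp;lt;code&amp;gt;venv&amp;lt;/code&amp;gt; module does the same job; a minimal sketch (assuming the loaded Python module is 3.4 or higher) is:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
## Create the same environment with the venv module instead of the pyvenv wrapper&lt;br /&gt;
python3 -m venv ~/my_envs/p35_myproj&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;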
&lt;br /&gt;
== activating a virtual environment ==&lt;br /&gt;
Once the environment is created, each time the environment needs to be activated, the following command needs to be issued:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
source ~/my_envs/p35_myproj/bin/activate&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This assumes the virtual environment was created in &amp;lt;code&amp;gt;~/my_envs/p35_myproj&amp;lt;/code&amp;gt;, as in the example above; adjust the path if your environment lives elsewhere.&lt;br /&gt;
When working on the virtual environment, the virtual environment name will be between brackets in front of the &amp;lt;code&amp;gt;user-host-prompt&amp;lt;/code&amp;gt; string.&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
  (p35_myproj)user@host:~$&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that like with any command you can make an alias in your &amp;lt;code&amp;gt;~/.bashrc&amp;lt;/code&amp;gt;. Just add something like this line to your &amp;lt;code&amp;gt;.bashrc&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
alias p35myproj=&#039;source ~/my_envs/p35_myproj/bin/activate&#039;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== installing modules on the virtual environment ==&lt;br /&gt;
Installing modules is the same as usual. The difference is that modules end up in &amp;lt;code&amp;gt;/path/to/virtenv/lib&amp;lt;/code&amp;gt;, which will typically live somewhere in your home directory. An easy way of installing modules is using &amp;lt;code&amp;gt;pip&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Before you start installing modules, first update pip itself:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
pip install --upgrade pip&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can then install other modules as you like, for instance numpy:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
pip install numpy&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
  (p35_myproj) [user@nfs01 ~]$ pip install numpy&lt;br /&gt;
  Collecting numpy&lt;br /&gt;
    Using cached numpy-1.10.4.tar.gz&lt;br /&gt;
  Installing collected packages: numpy&lt;br /&gt;
    Running setup.py install for numpy ... done&lt;br /&gt;
  Successfully installed numpy-1.10.4&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Similarly, installing packages from source works exactly the same as usual (note: only relevant for modules that can&#039;t be pulled through &amp;lt;code&amp;gt;pip&amp;lt;/code&amp;gt;).&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
python setup.py install&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
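&lt;br /&gt;
To keep a record of what is installed in the environment (handy when it has to be rebuilt later), a small sketch using standard &amp;lt;code&amp;gt;pip&amp;lt;/code&amp;gt; commands is (the file name below is only an example):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
## Write the currently installed packages and their versions to a file ...&lt;br /&gt;
pip freeze &amp;gt; ~/my_envs/p35_myproj_requirements.txt&lt;br /&gt;
## ... and reinstall them later, e.g. in a freshly created environment&lt;br /&gt;
pip install -r ~/my_envs/p35_myproj_requirements.txt&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;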
&lt;br /&gt;
== deactivating a virtual environment ==&lt;br /&gt;
To leave a virtual environment, use the command &amp;lt;code&amp;gt;deactivate&amp;lt;/code&amp;gt;, which was made available by the &amp;lt;code&amp;gt;source&amp;lt;/code&amp;gt; command when the virtual environment was activated.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
deactivate&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Make IPython work under virtualenv ==&lt;br /&gt;
IPython can simply be installed through pip.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
pip install ipython&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
* [[B4F_cluster | B4F Cluster]]&lt;br /&gt;
&lt;br /&gt;
== External links ==&lt;br /&gt;
* [https://docs.python.org/3/library/venv.html#module-venv Python docs on virtenv]&lt;/div&gt;</summary>
		<author><name>Megen002</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=Virtual_environment_Python_3.4_or_higher&amp;diff=1698</id>
		<title>Virtual environment Python 3.4 or higher</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=Virtual_environment_Python_3.4_or_higher&amp;diff=1698"/>
		<updated>2016-01-22T08:57:44Z</updated>

		<summary type="html">&lt;p&gt;Megen002: /* activating a virtual environment */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;With the large number of Python packages available, which often conflict with one another or require different versions depending on the application, installing and managing packages and versions is not always easy. In addition, many packages are used only occasionally, so it is questionable whether a system administrator of a centralized server system or a High Performance Compute (HPC) infrastructure can be expected to resolve every issue its users run into. Even on a local system with full administrative rights, managing versions, dependencies, and package collisions is often difficult. The solution is to use a virtual environment, in which a specific set of packages can be installed. As many different virtual environments can be created, and used side by side, as necessary.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== creating a new virtual environment ==&lt;br /&gt;
&lt;br /&gt;
If you do not already have a directory in your &amp;lt;code&amp;gt;$HOME&amp;lt;/code&amp;gt; dir where your virtual environments live, first make one (it is assumed that you will over the course of time create several virtual environments for different projects and different versions of Python side-by-side, best to organise them a bit).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
mkdir ~/my_envs&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then load either the Python 3.4 or the 3.5 module (Python 3.3.3 should also work):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
module load python/3.5.0&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
And then simply create an environment with a reasonably descriptive name (remember, you may accumulate as many as you desire), in this example &amp;lt;code&amp;gt;p35_myproj&amp;lt;/code&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
pyvenv ~/my_envs/p35_myproj&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== activating a virtual environment ==&lt;br /&gt;
Once the environment is created, each time the environment needs to be activated, the following command needs to be issued:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
source ~/my_envs/p35_myproj/bin/activate&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This assumes the virtual environment was created in &amp;lt;code&amp;gt;~/my_envs/p35_myproj&amp;lt;/code&amp;gt;, as in the example above; adjust the path if your environment lives elsewhere.&lt;br /&gt;
When working on the virtual environment, the virtual environment name will be between brackets in front of the &amp;lt;code&amp;gt;user-host-prompt&amp;lt;/code&amp;gt; string.&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
  (p35_myproj)user@host:~$&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that like with any command you can make an alias in your &amp;lt;code&amp;gt;~/.bashrc&amp;lt;/code&amp;gt;. Just add something like this line to your &amp;lt;code&amp;gt;.bashrc&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
alias p35myproj=&#039;source ~/my_envs/p35_myproj/bin/activate&#039;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
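&lt;br /&gt;
With that alias in place, activating the environment from any directory is then just a matter of typing the alias name (a hypothetical session):&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
  user@host:~$ p35myproj&lt;br /&gt;
  (p35_myproj)user@host:~$&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;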
&lt;br /&gt;
== installing modules on the virtual environment ==&lt;br /&gt;
Installing modules is the same as usual. The difference is that modules end up in &amp;lt;code&amp;gt;/path/to/virtenv/lib&amp;lt;/code&amp;gt;, which will typically live somewhere in your home directory. An easy way of installing modules is using &amp;lt;code&amp;gt;pip&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Before you start installing modules, first update pip itself:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
pip install --upgrade pip&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can then install other modules as you like, for instance numpy:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
pip install numpy&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
  (p35_myproj) [user@nfs01 ~]$ pip install numpy&lt;br /&gt;
  Collecting numpy&lt;br /&gt;
    Using cached numpy-1.10.4.tar.gz&lt;br /&gt;
  Installing collected packages: numpy&lt;br /&gt;
    Running setup.py install for numpy ... done&lt;br /&gt;
  Successfully installed numpy-1.10.4&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Similarly, installing packages from source works exactly the same as usual (note: only relevant for modules that can&#039;t be pulled through &amp;lt;code&amp;gt;pip&amp;lt;/code&amp;gt;).&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
python setup.py install&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== deactivating a virtual environment ==&lt;br /&gt;
To leave a virtual environment, use the command &amp;lt;code&amp;gt;deactivate&amp;lt;/code&amp;gt;, which was made available by the &amp;lt;code&amp;gt;source&amp;lt;/code&amp;gt; command when the virtual environment was activated.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
deactivate&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Make IPython work under virtualenv ==&lt;br /&gt;
IPython can simply be installed through pip.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
pip install ipython&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
* [[B4F_cluster | B4F Cluster]]&lt;br /&gt;
&lt;br /&gt;
== External links ==&lt;br /&gt;
* [https://docs.python.org/3/library/venv.html#module-venv Python docs on virtenv]&lt;/div&gt;</summary>
		<author><name>Megen002</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=Virtual_environment_Python_3.4_or_higher&amp;diff=1697</id>
		<title>Virtual environment Python 3.4 or higher</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=Virtual_environment_Python_3.4_or_higher&amp;diff=1697"/>
		<updated>2016-01-22T08:56:28Z</updated>

		<summary type="html">&lt;p&gt;Megen002: /* activating a virtual environment */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;With the large number of Python packages available, which often conflict with one another or require different versions depending on the application, installing and managing packages and versions is not always easy. In addition, many packages are used only occasionally, so it is questionable whether a system administrator of a centralized server system or a High Performance Compute (HPC) infrastructure can be expected to resolve every issue its users run into. Even on a local system with full administrative rights, managing versions, dependencies, and package collisions is often difficult. The solution is to use a virtual environment, in which a specific set of packages can be installed. As many different virtual environments can be created, and used side by side, as necessary.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== creating a new virtual environment ==&lt;br /&gt;
&lt;br /&gt;
If you do not already have a directory in your &amp;lt;code&amp;gt;$HOME&amp;lt;/code&amp;gt; dir where your virtual environments live, first make one (it is assumed that you will over the course of time create several virtual environments for different projects and different versions of Python side-by-side, best to organise them a bit).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
mkdir ~/my_envs&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then load either the Python 3.4 or the 3.5 module (Python 3.3.3 should also work):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
module load python/3.5.0&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
And then simply create an environment with a reasonably descriptive name (remember, you may accumulate as many as you desire), in this example &amp;lt;code&amp;gt;p35_myproj&amp;lt;/code&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
pyvenv ~/my_envs/p35_myproj&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== activating a virtual environment ==&lt;br /&gt;
Once the environment is created, each time the environment needs to be activated, the following command needs to be issued:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
source ~/my_envs/p35_myproj/bin/activate&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This assumes the virtual environment was created in &amp;lt;code&amp;gt;~/my_envs/p35_myproj&amp;lt;/code&amp;gt;, as in the example above; adjust the path if your environment lives elsewhere.&lt;br /&gt;
When working on the virtual environment, the virtual environment name will be between brackets in front of the &amp;lt;code&amp;gt;user-host-prompt&amp;lt;/code&amp;gt; string.&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
  (p35_myproj)user@host:~$&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that, as with any command, you can make an alias in your ~/.bashrc. Just add a line like the following to your .bashrc:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
alias p35myproj=&#039;source ~/my_envs/p35_myproj/bin/activate&#039;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== installing modules on the virtual environment ==&lt;br /&gt;
Installing modules is the same as usual. The difference is that modules end up in &amp;lt;code&amp;gt;/path/to/virtenv/lib&amp;lt;/code&amp;gt;, which will typically live somewhere in your home directory. An easy way of installing modules is using &amp;lt;code&amp;gt;pip&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Before you start installing modules, first update pip itself:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
pip install --upgrade pip&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can then install other modules as you like, for instance numpy:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
pip install numpy&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
  (p35_myproj) [user@nfs01 ~]$ pip install numpy&lt;br /&gt;
  Collecting numpy&lt;br /&gt;
    Using cached numpy-1.10.4.tar.gz&lt;br /&gt;
  Installing collected packages: numpy&lt;br /&gt;
    Running setup.py install for numpy ... done&lt;br /&gt;
  Successfully installed numpy-1.10.4&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Similarly, installing packages from source works exactly the same as usual (note: only relevant for modules that can&#039;t be pulled through &amp;lt;code&amp;gt;pip&amp;lt;/code&amp;gt;).&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
python setup.py install&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== deactivating a virtual environment ==&lt;br /&gt;
To leave a virtual environment, use the command &amp;lt;code&amp;gt;deactivate&amp;lt;/code&amp;gt;, which was made available by the &amp;lt;code&amp;gt;source&amp;lt;/code&amp;gt; command when the virtual environment was activated.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
deactivate&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Make IPython work under virtualenv ==&lt;br /&gt;
IPython can simply be installed through pip.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
pip install ipython&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
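&lt;br /&gt;
After installation you may want to check that the &amp;lt;code&amp;gt;ipython&amp;lt;/code&amp;gt; that is picked up is indeed the copy inside the virtual environment rather than a system-wide one; a quick sketch (assuming the environment from the examples above is active) is:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
## Should print a path inside the virtual environment, e.g. ~/my_envs/p35_myproj/bin/ipython&lt;br /&gt;
which ipython&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;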
&lt;br /&gt;
== See also ==&lt;br /&gt;
* [[B4F_cluster | B4F Cluster]]&lt;br /&gt;
&lt;br /&gt;
== External links ==&lt;br /&gt;
* [https://docs.python.org/3/library/venv.html#module-venv Python docs on virtenv]&lt;/div&gt;</summary>
		<author><name>Megen002</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=Main_Page&amp;diff=1696</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=Main_Page&amp;diff=1696"/>
		<updated>2016-01-22T08:51:42Z</updated>

		<summary type="html">&lt;p&gt;Megen002: /* Installation of software by users */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The Agrogenomics cluster is a [http://en.wikipedia.org/wiki/High-performance_computing High Performance Compute] (HPC) infrastructure hosted by [http://www.wageningenur.nl/nl/activiteit/Opening-High-Performance-Computing-cluster-HPC.htm Wageningen University &amp;amp; Research Centre]. It is open for use for all WUR research groups as well as other organizations, including companies, that have collaborative projects with WUR. &lt;br /&gt;
&lt;br /&gt;
The Agrogenomics HPC was an initiative of the [http://www.breed4food.com/en/breed4food.htm Breed4Food] (B4F) consortium, consisting of the [[About_ABGC | Animal Breeding and Genomics Centre]] (WU-Animal Breeding and Genomics and Wageningen Livestock Research) and four major breeding companies: [http://www.cobb-vantress.com Cobb-Vantress], [https://www.crv4all.nl CRV], [http://www.hendrix-genetics.com Hendrix Genetics], and [http://www.topigs.com TOPIGS]. Currently, in addition to the original partners, the HPC (HPC-Ag) is used by other groups from Wageningen UR (Bioinformatics, Centre for Crop Systems Analysis, Environmental Sciences Group, and Plant Research International) and plant breeding industry (Rijk Zwaan). &lt;br /&gt;
&lt;br /&gt;
== Rationale and Requirements for a new cluster ==&lt;br /&gt;
[[File:Breed4food-logo.jpg|thumb|right|200px|The Breed4Food logo]]&lt;br /&gt;
The Agrogenomics Cluster was originally conceived as the 7th pillar of the [http://www.breed4food.com/en/show/Breed4Food-initiative-reinforces-the-Netherlands-position-as-an-innovative-country-in-animal-breeding-and-genomics.htm Breed4Food programme]. While the other six pillars revolve around specific research themes, the Cluster represents a joint infrastructure. The rationale behind the cluster is to meet the increasing computational needs of genetics and genomics research by creating a joint facility that generates benefits of scale, thereby reducing cost. In addition, the joint infrastructure is intended to facilitate cross-organisational knowledge transfer. In that capacity, the HPC-Ag acts as a joint (virtual) laboratory where researchers - academic and applied - can benefit from each other&#039;s know-how. Lastly, the joint cluster, housed at Wageningen University campus, makes it possible to retain vital and often confidential data sources in a controlled environment, something that cloud services such as Amazon Cloud usually cannot guarantee.&lt;br /&gt;
{{-}}&lt;br /&gt;
&lt;br /&gt;
== Process of acquisition and financing ==&lt;br /&gt;
&lt;br /&gt;
[[File:Signing_CatAgro.png|thumb|left|300px|Petra Caessens, manager operations of CAT-AgroFood, signs the contract of the supplier on August 1st, 2013. Next to her Johan van Arendonk on behalf of Breed4Food.]]&lt;br /&gt;
The Agrogenomics cluster was financed by [http://www.wageningenur.nl/en/Expertise-Services/Facilities/CATAgroFood-3/CATAgroFood-3/News-and-agenda/Show/CATAgroFood-invests-in-a-High-Performance-Computing-cluster.htm CATAgroFood]. The [[B4F_cluster#IT_Workgroup | IT-Workgroup]] formulated a set of requirements that in the end were best met by an offer from [http://www.dell.com/learn/nl/nl/rc1078544/hpcc Dell]. [http://www.clustervision.com ClusterVision] was responsible for installing the cluster at the Theia server centre of FB-ICT.&lt;br /&gt;
{{-}}&lt;br /&gt;
&lt;br /&gt;
== Architecture of the cluster ==&lt;br /&gt;
[[Architecture_of_the_HPC | Main Article: Architecture of the Agrogenomics HPC]]&lt;br /&gt;
[[File:Cluster_scheme.png|thumb|right|600px|Schematic overview of the cluster.]]&lt;br /&gt;
The new Agrogenomics HPC has a classic cluster architecture: a state-of-the-art Parallel File System (PFS), head nodes, and compute nodes (of varying &#039;size&#039;), all connected by superfast network connections (InfiniBand). Implementation of the cluster is done in stages. The initial stage includes a 600TB PFS, 48 slim nodes of 16 cores and 64GB RAM each, and 2 fat nodes of 64 cores and 1TB RAM each. The overall architecture, which includes two head nodes in a fail-over configuration and an InfiniBand network backbone, can easily be expanded by adding nodes and enlarging the PFS. The cluster management software is designed to facilitate a heterogeneous and evolving cluster.&lt;br /&gt;
{{-}}&lt;br /&gt;
&lt;br /&gt;
== Housing at Theia ==&lt;br /&gt;
[[File:Map_Theia.png|thumb|left|200px|Location of Theia, just outside of Wageningen campus]]&lt;br /&gt;
The Agrogenomics Cluster is housed at one of the two main server centres of WUR-FB-IT, near Wageningen Campus. The building (Theia) may not look like much from the outside (it used to serve as potato storage), but inside is a modern server centre that includes, among other things, emergency power backup systems and automated fire extinguishers. Many of the server facilities provided by FB-ICT that are used on a daily basis by WUR personnel and students are located there, as is the Agrogenomics Cluster. Access to Theia is, understandably, highly restricted and can only be granted in the presence of a representative of FB-IT.&lt;br /&gt;
{{-}}&lt;br /&gt;
{| width=&amp;quot;90%&amp;quot;&lt;br /&gt;
|- valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| width=&amp;quot;10%&amp;quot; |&lt;br /&gt;
&lt;br /&gt;
| width=&amp;quot;30%&amp;quot; |&lt;br /&gt;
[[File:Cluster2_pic.png|thumb|left|220px|Some components of the cluster after unpacking.]]&lt;br /&gt;
| width=&amp;quot;70%&amp;quot; |&lt;br /&gt;
[[File:Cluster_pic.png|thumb|right|400px|The final configuration after installation.]]&lt;br /&gt;
|}&lt;br /&gt;
{{-}}&lt;br /&gt;
&lt;br /&gt;
== Management ==&lt;br /&gt;
[[HPC_management | Main Article: HPC management]]&lt;br /&gt;
&lt;br /&gt;
Project Leader of the HPC is Stephen Janssen (Wageningen UR, FB-IT, Service Management). [[User:pollm001 | Koen Pollmann (Wageningen UR, FB-IT, Infrastructure)]] and [[User:dawes001 | Gwen Dawes (Wageningen UR, FB-IT, Infrastructure)]] are responsible for [[Maintenance_and_Management | Maintenance and Management]].&lt;br /&gt;
&lt;br /&gt;
== Access Policy ==&lt;br /&gt;
[[Access_Policy | Main Article: Access Policy]]&lt;br /&gt;
&lt;br /&gt;
Access needs to be granted actively (by creation of an account on the cluster by FB-IT). Use of resources is limited by the scheduler. Depending on availability of queues (&#039;partitions&#039;) granted to a user, priority to the system&#039;s resources is regulated. Note that the use of the HPC-Ag is not free of charge. List price of CPU time and storage, and possible discounts on that list price for your organisation, can be retrieved from CAT-AGRO or FB-ICT.&lt;br /&gt;
&lt;br /&gt;
== Users ==&lt;br /&gt;
&lt;br /&gt;
* [[List_of_users | List of users (alphabetical order)]]&lt;br /&gt;
* [[Mailinglist | Electronic mail discussion lists]]&lt;br /&gt;
&lt;br /&gt;
== Using the HPC-Ag ==&lt;br /&gt;
=== Gaining access to the HPC-Ag ===&lt;br /&gt;
Access to the cluster and file transfer are done by [http://en.wikipedia.org/wiki/Secure_Shell ssh-based protocols].&lt;br /&gt;
* [[log_in_to_B4F_cluster | Logging into cluster using ssh and file transfer]]&lt;br /&gt;
&lt;br /&gt;
=== Cluster Management Software and Scheduler ===&lt;br /&gt;
The HPC-Ag uses Bright Cluster Manager software for overall cluster management, and Slurm as job scheduler.&lt;br /&gt;
* [[BCM_on_B4F_cluster | Monitor cluster status with BCM]]&lt;br /&gt;
* [[SLURM_on_B4F_cluster | Submit jobs with Slurm]]&lt;br /&gt;
* [[node_usage_graph | Be aware of how much work the cluster is under right now with &#039;node_usage_graph&#039;]]&lt;br /&gt;
* [[SLURM_Compare | Rosetta Stone of Workload Managers]]&lt;br /&gt;
&lt;br /&gt;
=== Installation of software by users ===&lt;br /&gt;
&lt;br /&gt;
* [[Domain_specific_software_on_B4Fcluster_installation_by_users | Installing domain specific software: installation by users]]&lt;br /&gt;
* [[Setting local variables]]&lt;br /&gt;
* [[Installing_R_packages_locally | Installing R packages locally]]&lt;br /&gt;
* [[Setting_up_Python_virtualenv | Setting up and using a virtual environment for Python3 ]]&lt;br /&gt;
* [[Virtual_environment_Python_3.4_or_higher | Setting up and using a virtual environment for Python3.4 or higher ]]&lt;br /&gt;
&lt;br /&gt;
=== Installed software ===&lt;br /&gt;
&lt;br /&gt;
* [[Globally_installed_software | Globally installed software]]&lt;br /&gt;
* [[ABGC_modules | ABGC specific modules]]&lt;br /&gt;
&lt;br /&gt;
=== Being in control of Environment parameters ===&lt;br /&gt;
&lt;br /&gt;
* [[Using_environment_modules | Using environment modules]]&lt;br /&gt;
* [[Setting local variables]]&lt;br /&gt;
* [[Setting_TMPDIR | Set a custom temporary directory location]]&lt;br /&gt;
* [[Installing_R_packages_locally | Installing R packages locally]]&lt;br /&gt;
* [[Setting_up_Python_virtualenv | Setting up and using a virtual environment for Python3 ]]&lt;br /&gt;
&lt;br /&gt;
=== Controlling costs ===&lt;br /&gt;
&lt;br /&gt;
* [[SACCT | using SACCT to see your costs]]&lt;br /&gt;
* [[get_my_bill | using the &amp;quot;get_my_bill&amp;quot; script to estimate costs]]&lt;br /&gt;
&lt;br /&gt;
== Miscellaneous ==&lt;br /&gt;
* [[Bioinformatics_tips_tricks_workflows | Bioinformatics tips, tricks, and workflows]]&lt;br /&gt;
* [[Convert_between_MediaWiki_and_other_formats | Convert between MediaWiki format and other formats]]&lt;br /&gt;
* [[Manual GitLab | GitLab: Create projects and add scripts]]&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
* [[Maintenance_and_Management | Maintenance and Management]]&lt;br /&gt;
* [[Mailinglist | Electronic mail discussion lists]]&lt;br /&gt;
* [[About_ABGC | About ABGC]]&lt;br /&gt;
* [[Computer_cluster | High Performance Computing @ABGC]]&lt;br /&gt;
* [[Lustre_PFS_layout | Lustre Parallel File System layout]]&lt;br /&gt;
&lt;br /&gt;
== External links ==&lt;br /&gt;
{| width=&amp;quot;90%&amp;quot;&lt;br /&gt;
|- valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| width=&amp;quot;30%&amp;quot; |&lt;br /&gt;
* [http://www.breed4food.com/en/show/Breed4Food-initiative-reinforces-the-Netherlands-position-as-an-innovative-country-in-animal-breeding-and-genomics.htm Breed4Food programme]&lt;br /&gt;
* [http://www.wageningenur.nl/en/Expertise-Services/Facilities/CATAgroFood-3/CATAgroFood-3/Our-facilities/Show/High-Performance-Computing-Cluster-HPC.htm CATAgroFood offers an HPC facility]&lt;br /&gt;
* [http://www.cobb-vantress.com Cobb-Vantress homepage]&lt;br /&gt;
&lt;br /&gt;
| width=&amp;quot;30%&amp;quot; |&lt;br /&gt;
* [https://www.crv4all.nl CRV homepage]&lt;br /&gt;
* [http://www.hendrix-genetics.com Hendrix Genetics homepage]&lt;br /&gt;
* [http://www.topigs.com TOPIGS homepage]&lt;br /&gt;
| width=&amp;quot;30%&amp;quot; |&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Scientific_Linux Scientific Linux]&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Help:Cheatsheet Help with editing Wiki pages]&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Megen002</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=Virtual_environment_Python_3.4_or_higher&amp;diff=1695</id>
		<title>Virtual environment Python 3.4 or higher</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=Virtual_environment_Python_3.4_or_higher&amp;diff=1695"/>
		<updated>2016-01-22T08:50:25Z</updated>

		<summary type="html">&lt;p&gt;Megen002: /* External links */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;With the large number of Python packages available, which often conflict or require different versions depending on the application, installing and controlling packages and versions is not always easy. Moreover, many packages are used only occasionally, so it is questionable whether a system administrator of a centralized server system or a High Performance Computing (HPC) infrastructure can be expected to resolve all issues posed by users of that infrastructure. Even on a local system with full administrative rights, managing versions, dependencies, and package collisions is often difficult. The solution is to use a virtual environment, in which a specific set of packages can be installed. As many different virtual environments can be created, and used side by side, as necessary. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== creating a new virtual environment ==&lt;br /&gt;
&lt;br /&gt;
If you do not already have a directory in your &amp;lt;code&amp;gt;$HOME&amp;lt;/code&amp;gt; directory where your virtual environments live, first make one (over time you will probably create several virtual environments for different projects and different Python versions side by side, so it is best to organise them a bit).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
mkdir ~/my_envs&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then load either the Python 3.4 or 3.5 module (Python 3.3.3 should also work):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
module load python/3.5.0&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Then simply create an environment with a reasonably descriptive name (remember, you may accumulate as many as you like), in this example &amp;lt;code&amp;gt;p35_myproj&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
pyvenv ~/my_envs/p35_myproj&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== activating a virtual environment ==&lt;br /&gt;
Once the environment has been created, activate it whenever needed by issuing the following command:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
source ~/my_envs/p35_myproj/bin/activate&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Because the full path to the environment (&amp;lt;code&amp;gt;~/my_envs/p35_myproj&amp;lt;/code&amp;gt; in this example) is given, this command can be issued from any working directory.&lt;br /&gt;
While the virtual environment is active, its name is shown between brackets in front of the &amp;lt;code&amp;gt;user-host-prompt&amp;lt;/code&amp;gt; string.&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
  (p35_myproj)user@host:~$&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
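&lt;br /&gt;
A quick way to verify that the environment is really active (a sketch; the exact paths depend on where your environment lives) is to check which interpreter and &amp;lt;code&amp;gt;pip&amp;lt;/code&amp;gt; are now found first in the &amp;lt;code&amp;gt;$PATH&amp;lt;/code&amp;gt;:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
# both should point into the bin directory of the virtual environment&lt;br /&gt;
which python3&lt;br /&gt;
which pip&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;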
&lt;br /&gt;
== installing modules on the virtual environment ==&lt;br /&gt;
Installing modules is the same as usual. The difference is that modules end up in &amp;lt;code&amp;gt;/path/to/virtenv/lib&amp;lt;/code&amp;gt;, which typically lives somewhere in your home directory. An easy way of installing modules is using &amp;lt;code&amp;gt;pip&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Before you start installing modules, first update pip itself:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
pip install --upgrade pip&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can then install other modules as you like, for instance numpy:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
pip install numpy&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
  (p35_myproj) [user@nfs01 ~]$ pip install numpy&lt;br /&gt;
  Collecting numpy&lt;br /&gt;
    Using cached numpy-1.10.4.tar.gz&lt;br /&gt;
  Installing collected packages: numpy&lt;br /&gt;
    Running setup.py install for numpy ... done&lt;br /&gt;
  Successfully installed numpy-1.10.4&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Similarly, installing packages from source works exactly the same as usual (note: only relevant for modules that can&#039;t be pulled through &amp;lt;code&amp;gt;pip&amp;lt;/code&amp;gt;).&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
python setup.py install&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
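&lt;br /&gt;
For example, for a hypothetical package distributed as a source tarball (the file and directory names below are made up for illustration):&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
# unpack the source distribution and install it into the active environment&lt;br /&gt;
tar -xzf somepackage-1.0.tar.gz&lt;br /&gt;
cd somepackage-1.0&lt;br /&gt;
python setup.py install&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;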
&lt;br /&gt;
== deactivating a virtual environment ==&lt;br /&gt;
Quitting a virtual environment is done with the command &amp;lt;code&amp;gt;deactivate&amp;lt;/code&amp;gt;, which was made available by the &amp;lt;code&amp;gt;source&amp;lt;/code&amp;gt; command when the virtual environment was activated.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
deactivate&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Make IPython work under virtualenv ==&lt;br /&gt;
IPython can simply be installed through pip.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
pip install ipython&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
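&lt;br /&gt;
Afterwards it can be worth checking that the &amp;lt;code&amp;gt;ipython&amp;lt;/code&amp;gt; command resolves to the copy inside the virtual environment rather than a system-wide one:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
# should print a path inside ~/my_envs/p35_myproj/bin&lt;br /&gt;
which ipython&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;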
&lt;br /&gt;
== See also ==&lt;br /&gt;
* [[B4F_cluster | B4F Cluster]]&lt;br /&gt;
&lt;br /&gt;
== External links ==&lt;br /&gt;
* [https://docs.python.org/3/library/venv.html#module-venv Python docs on virtenv]&lt;/div&gt;</summary>
		<author><name>Megen002</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=Virtual_environment_Python_3.4_or_higher&amp;diff=1694</id>
		<title>Virtual environment Python 3.4 or higher</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=Virtual_environment_Python_3.4_or_higher&amp;diff=1694"/>
		<updated>2016-01-22T08:50:13Z</updated>

		<summary type="html">&lt;p&gt;Megen002: /* External links */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;With many Python packages available, which are often in conflict or requiring different versions depending on application, installing and controlling packages and versions is not always easy. In addition, so many packages are often used only occasionally, that it is questionable whether a system administrator of a centralized server system or a High Performance Compute (HPC) infrastructure can be expected to resolve all issues posed by users of the infrastructure. Even on a local system with full administrative rights managing versions, dependencies, and package collisions is often very difficult. The solution is to use a virtual environment, in which a specific set of packages can then be installed. As many different virtual environments can be created, and used side-by-side, as is necessary. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== creating a new virtual environment ==&lt;br /&gt;
&lt;br /&gt;
If you do not already have a directory in your &amp;lt;code&amp;gt;$HOME&amp;lt;/code&amp;gt; dir where your virtual environments live, first make one (it is assumed that you will over the course of time create several virtual environments for different projects and different versions of Python side-by-side, best to organise them a bit).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
mkdir ~/my_envs&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then, load either Python 3.4 or 3.5 module (Python 3.3.3 should also work):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
module load python/3.5.0&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
And then simply create an environment with a reasonably descriptive name (remember, you may accumulate as many as you desire), in this example &amp;lt;code&amp;gt;p35_myproj&amp;lt;/code&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
pyvenv ~/my_envs/p35_myproj&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== activating a virtual environment ==&lt;br /&gt;
Once the environment is created, each time the environment needs to be activated, the following command needs to be issued:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
source ~/my_envs/p35_myproj/bin/activate&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This assumes that the folder that contains the virtual environment documents (in this case called &amp;lt;code&amp;gt;newenv&amp;lt;/code&amp;gt;), is in the present working directory.&lt;br /&gt;
When working on the virtual environment, the virtual environment name will be between brackets in front of the &amp;lt;code&amp;gt;user-host-prompt&amp;lt;/code&amp;gt; string.&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
  (p35_myproj)user@host:~$&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== installing modules on the virtual environment ==&lt;br /&gt;
Installing modules is the same as usual. The difference is that modules are in &amp;lt;code&amp;gt;/path/to/virtenv/lib&amp;lt;/code&amp;gt;, which may be living somewhere on your home directory. An easy way of installing modules is using &amp;lt;code&amp;gt;pip&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Before you start installing modules, first update pip itself:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
pip install --upgrade pip&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
you can then install other modules as you like, for instance numpy:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
pip install numpy&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
  (p35_myproj) [user@nfs01 ~]$ pip install numpy&lt;br /&gt;
  Collecting numpy&lt;br /&gt;
    Using cached numpy-1.10.4.tar.gz&lt;br /&gt;
  Installing collected packages: numpy&lt;br /&gt;
    Running setup.py install for numpy ... done&lt;br /&gt;
  Successfully installed numpy-1.10.4&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Similarly, installing packages from source works exactly the same as usual (note: only relevant for modules that can&#039;t be pulled through &amp;lt;code&amp;gt;pip&amp;lt;/code&amp;gt;).&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
python setup.py install&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== deactivating a virtual environment ==&lt;br /&gt;
Quitting a virtual environment can be done by using the command &amp;lt;code&amp;gt;deactivate&amp;lt;/code&amp;gt;, which was loaded using the &amp;lt;code&amp;gt;source&amp;lt;/code&amp;gt; command upon activating the virtual environment.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
deactivate&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Make IPython work under virtualenv ==&lt;br /&gt;
IPython can simply be installed through pip.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
pip install ipython&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
* [[B4F_cluster | B4F Cluster]]&lt;br /&gt;
&lt;br /&gt;
== External links ==&lt;br /&gt;
* [https://docs.python.org/3/library/venv.html#module-venv Python docs on virtenv]&lt;/div&gt;</summary>
		<author><name>Megen002</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=Virtual_environment_Python_3.4_or_higher&amp;diff=1693</id>
		<title>Virtual environment Python 3.4 or higher</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=Virtual_environment_Python_3.4_or_higher&amp;diff=1693"/>
		<updated>2016-01-22T08:49:47Z</updated>

		<summary type="html">&lt;p&gt;Megen002: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;With many Python packages available, which are often in conflict or requiring different versions depending on application, installing and controlling packages and versions is not always easy. In addition, so many packages are often used only occasionally, that it is questionable whether a system administrator of a centralized server system or a High Performance Compute (HPC) infrastructure can be expected to resolve all issues posed by users of the infrastructure. Even on a local system with full administrative rights managing versions, dependencies, and package collisions is often very difficult. The solution is to use a virtual environment, in which a specific set of packages can then be installed. As many different virtual environments can be created, and used side-by-side, as is necessary. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== creating a new virtual environment ==&lt;br /&gt;
&lt;br /&gt;
If you do not already have a directory in your &amp;lt;code&amp;gt;$HOME&amp;lt;/code&amp;gt; dir where your virtual environments live, first make one (it is assumed that you will over the course of time create several virtual environments for different projects and different versions of Python side-by-side, best to organise them a bit).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
mkdir ~/my_envs&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then, load either Python 3.4 or 3.5 module (Python 3.3.3 should also work):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
module load python/3.5.0&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
And then simply create an environment with a reasonably descriptive name (remember, you may accumulate as many as you desire), in this example &amp;lt;code&amp;gt;p35_myproj&amp;lt;/code&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
pyvenv ~/my_envs/p35_myproj&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== activating a virtual environment ==&lt;br /&gt;
Once the environment is created, each time the environment needs to be activated, the following command needs to be issued:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
source ~/my_envs/p35_myproj/bin/activate&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This assumes that the folder that contains the virtual environment documents (in this case called &amp;lt;code&amp;gt;newenv&amp;lt;/code&amp;gt;), is in the present working directory.&lt;br /&gt;
When working on the virtual environment, the virtual environment name will be between brackets in front of the &amp;lt;code&amp;gt;user-host-prompt&amp;lt;/code&amp;gt; string.&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
  (p35_myproj)user@host:~$&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== installing modules on the virtual environment ==&lt;br /&gt;
Installing modules is the same as usual. The difference is that modules are in &amp;lt;code&amp;gt;/path/to/virtenv/lib&amp;lt;/code&amp;gt;, which may be living somewhere on your home directory. An easy way of installing modules is using &amp;lt;code&amp;gt;pip&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Before you start installing modules, first update pip itself:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
pip install --upgrade pip&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
you can then install other modules as you like, for instance numpy:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
pip install numpy&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
  (p35_myproj) [user@nfs01 ~]$ pip install numpy&lt;br /&gt;
  Collecting numpy&lt;br /&gt;
    Using cached numpy-1.10.4.tar.gz&lt;br /&gt;
  Installing collected packages: numpy&lt;br /&gt;
    Running setup.py install for numpy ... done&lt;br /&gt;
  Successfully installed numpy-1.10.4&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Similarly, installing packages from source works exactly the same as usual (note: only relevant for modules that can&#039;t be pulled through &amp;lt;code&amp;gt;pip&amp;lt;/code&amp;gt;).&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
python setup.py install&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== deactivating a virtual environment ==&lt;br /&gt;
Quitting a virtual environment can be done by using the command &amp;lt;code&amp;gt;deactivate&amp;lt;/code&amp;gt;, which was loaded using the &amp;lt;code&amp;gt;source&amp;lt;/code&amp;gt; command upon activating the virtual environment.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
deactivate&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Make IPython work under virtualenv ==&lt;br /&gt;
IPython can simply be installed through pip.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
pip install ipython&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
* [[B4F_cluster | B4F Cluster]]&lt;br /&gt;
&lt;br /&gt;
== External links ==&lt;br /&gt;
* [https://docs.python.org/3/library/venv.html#module-venv]&lt;/div&gt;</summary>
		<author><name>Megen002</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=Virtual_environment_Python_3.4_or_higher&amp;diff=1692</id>
		<title>Virtual environment Python 3.4 or higher</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=Virtual_environment_Python_3.4_or_higher&amp;diff=1692"/>
		<updated>2016-01-22T08:39:56Z</updated>

		<summary type="html">&lt;p&gt;Megen002: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;With many Python packages available, which are often in conflict or requiring different versions depending on application, installing and controlling packages and versions is not always easy. In addition, so many packages are often used only occasionally, that it is questionable whether a system administrator of a centralized server system or a High Performance Compute (HPC) infrastructure can be expected to resolve all issues posed by users of the infrastructure. Even on a local system with full administrative rights managing versions, dependencies, and package collisions is often very difficult. The solution is to use a virtual environment, in which a specific set of packages can then be installed. As many different virtual environments can be created, and used side-by-side, as is necessary. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== creating a new virtual environment ==&lt;br /&gt;
&lt;br /&gt;
If you do not already have a directory in your &amp;lt;code&amp;gt;$HOME&amp;lt;/code&amp;gt; dir where your virtual environments live, first make one (it is assumed that you will over the course of time create several virtual environments for different projects and different versions of Python side-by-side, best to organise them a bit).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
mkdir ~/my_envs&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then, load either Python 3.4 or 3.5 module (Python 3.3.3 should also work):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
module load python/3.5.0&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
And then simply create an environment with a reasonably descriptive name (remember, you may accumulate as many as you desire), in this example &amp;lt;code&amp;gt;p35_myproj&amp;lt;/code&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
pyvenv ~/my_envs/p35_myproj&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== activating a virtual environment ==&lt;br /&gt;
Once the environment is created, each time the environment needs to be activated, the following command needs to be issued:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
source ~/my_envs/p35_myproj/bin/activate&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This assumes that the folder that contains the virtual environment documents (in this case called &amp;lt;code&amp;gt;newenv&amp;lt;/code&amp;gt;), is in the present working directory.&lt;br /&gt;
When working on the virtual environment, the virtual environment name will be between brackets in front of the &amp;lt;code&amp;gt;user-host-prompt&amp;lt;/code&amp;gt; string.&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
  (p35_myproj)user@host:~$&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== installing modules on the virtual environment ==&lt;br /&gt;
Installing modules is the same as usual. The difference is that modules are in &amp;lt;code&amp;gt;/path/to/virtenv/lib&amp;lt;/code&amp;gt;, which may be living somewhere on your home directory. An easy way of installing modules is using &amp;lt;code&amp;gt;pip&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Before you start installing modules, first update pip itself:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
pip install --upgrade pip&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
you can then install other modules as you like, for instance numpy:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
pip install numpy&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
  (p35_myproj) [user@nfs01 ~]$ pip install numpy&lt;br /&gt;
  Collecting numpy&lt;br /&gt;
    Using cached numpy-1.10.4.tar.gz&lt;br /&gt;
  Installing collected packages: numpy&lt;br /&gt;
    Running setup.py install for numpy ... done&lt;br /&gt;
  Successfully installed numpy-1.10.4&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Similarly, installing packages from source works exactly the same as usual (note: only relevant for modules that can&#039;t be pulled through &amp;lt;code&amp;gt;pip&amp;lt;/code&amp;gt;).&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
python setup.py install&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== deactivating a virtual environment ==&lt;br /&gt;
Quitting a virtual environment can be done by using the command &amp;lt;code&amp;gt;deactivate&amp;lt;/code&amp;gt;, which was loaded using the &amp;lt;code&amp;gt;source&amp;lt;/code&amp;gt; command upon activating the virtual environment.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
deactivate&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
== Make IPython work under virtualenv ==&lt;br /&gt;
IPython may not work initially under a virtual environment. It may produce an error message like below:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
    File &amp;quot;/usr/bin/ipython&amp;quot;, line 11&lt;br /&gt;
    print &amp;quot;Could not start qtconsole. Please install ipython-qtconsole&amp;quot;&lt;br /&gt;
                                                                      ^&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This can be resolved by adding a soft link with the name &amp;lt;code&amp;gt;ipython&amp;lt;/code&amp;gt; to the &amp;lt;code&amp;gt;bin&amp;lt;/code&amp;gt; directory in the virtual environment folder.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
ln -s /path/to/virtenv/bin/ipython3 /path/to/virtenv/bin/ipython&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
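&lt;br /&gt;
A quick check (sketch; the actual path depends on where the environment lives) that the link works as intended:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
# should now print /path/to/virtenv/bin/ipython&lt;br /&gt;
which ipython&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;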
&lt;br /&gt;
== See also ==&lt;br /&gt;
* [[B4F_cluster | B4F Cluster]]&lt;br /&gt;
&lt;br /&gt;
== External links ==&lt;br /&gt;
* [https://pypi.python.org/pypi/virtualenv Python3 documentation for virtualenv]&lt;br /&gt;
* [http://cemcfarland.wordpress.com/2013/03/09/getting-ipython3-working-inside-your-virtualenv/ Solving the IPython hiccup under a virtual environment]&lt;/div&gt;</summary>
		<author><name>Megen002</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=Virtual_environment_Python_3.4_or_higher&amp;diff=1691</id>
		<title>Virtual environment Python 3.4 or higher</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=Virtual_environment_Python_3.4_or_higher&amp;diff=1691"/>
		<updated>2016-01-22T08:37:16Z</updated>

		<summary type="html">&lt;p&gt;Megen002: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;With many Python packages available, which are often in conflict or requiring different versions depending on application, installing and controlling packages and versions is not always easy. In addition, so many packages are often used only occasionally, that it is questionable whether a system administrator of a centralized server system or a High Performance Compute (HPC) infrastructure can be expected to resolve all issues posed by users of the infrastructure. Even on a local system with full administrative rights managing versions, dependencies, and package collisions is often very difficult. The solution is to use a virtual environment, in which a specific set of packages can then be installed. As many different virtual environments can be created, and used side-by-side, as is necessary. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== creating a new virtual environment ==&lt;br /&gt;
&lt;br /&gt;
If you do not already have a directory in your $HOME dir where your virtual environments live, first make one (it is assumed that you will over the course of time create several virtual environments for different projects and different versions of Python side-by-side, best to organise them a bit).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
mkdir ~/my_envs&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then, load either Python 3.4 or 3.5 module (Python 3.3.3 should also work):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
module load python/3.5.0&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
And then simply create an environment with a reasonably descriptive name (remember, you may accumulate as many as you desire), in this example p35_myproj. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
pyvenv ~/my_envs/p35_myproj&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== activating a virtual environment ==&lt;br /&gt;
Once the environment is created, each time the environment needs to be activated, the following command needs to be issued:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
source ~/my_envs/p35_myproj/bin/activate&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This assumes that the folder that contains the virtual environment documents (in this case called &amp;lt;code&amp;gt;newenv&amp;lt;/code&amp;gt;), is in the present working directory.&lt;br /&gt;
When working on the virtual environment, the virtual environment name will be between brackets in front of the &amp;lt;code&amp;gt;user-host-prompt&amp;lt;/code&amp;gt; string.&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
  (p35_myproj)user@host:~$&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== installing modules on the virtual environment ==&lt;br /&gt;
Installing modules is the same as usual. The difference is that modules are in &amp;lt;code&amp;gt;/path/to/virtenv/lib&amp;lt;/code&amp;gt;, which may be living somewhere on your home directory. An easy way of installing modules is using pip.&lt;br /&gt;
&lt;br /&gt;
Before you start installing modules, first update pip itself:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
pip install --upgrade pip&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
you can then install other modules as you like, for instance numpy:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
pip install numpy&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
  (p35_myproj) [user@nfs01 ~]$ pip install numpy&lt;br /&gt;
  Collecting numpy&lt;br /&gt;
    Using cached numpy-1.10.4.tar.gz&lt;br /&gt;
  Installing collected packages: numpy&lt;br /&gt;
    Running setup.py install for numpy ... done&lt;br /&gt;
  Successfully installed numpy-1.10.4&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Similarly, installing packages from source works exactly the same as usual.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
python setup.py install&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== deactivating a virtual environment ==&lt;br /&gt;
Quitting a virtual environment can be done by using the command &amp;lt;code&amp;gt;deactivate&amp;lt;/code&amp;gt;, which was loaded using the &amp;lt;code&amp;gt;source&amp;lt;/code&amp;gt; command upon activating the virtual environment.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
deactivate&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
== Make IPython work under virtualenv ==&lt;br /&gt;
IPython may not work initially under a virtual environment. It may produce an error message like below:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
    File &amp;quot;/usr/bin/ipython&amp;quot;, line 11&lt;br /&gt;
    print &amp;quot;Could not start qtconsole. Please install ipython-qtconsole&amp;quot;&lt;br /&gt;
                                                                      ^&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This can be resolved by adding a soft link with the name &amp;lt;code&amp;gt;ipython&amp;lt;/code&amp;gt; to the &amp;lt;code&amp;gt;bin&amp;lt;/code&amp;gt; directory in the virtual environment folder.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
ln -s /path/to/virtenv/bin/ipython3 /path/to/virtenv/bin/ipython&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
* [[B4F_cluster | B4F Cluster]]&lt;br /&gt;
&lt;br /&gt;
== External links ==&lt;br /&gt;
* [https://pypi.python.org/pypi/virtualenv Python3 documentation for virtualenv]&lt;br /&gt;
* [http://cemcfarland.wordpress.com/2013/03/09/getting-ipython3-working-inside-your-virtualenv/ Solving the IPython hiccup under a virtual environment]&lt;/div&gt;</summary>
		<author><name>Megen002</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=Virtual_environment_Python_3.4_or_higher&amp;diff=1690</id>
		<title>Virtual environment Python 3.4 or higher</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=Virtual_environment_Python_3.4_or_higher&amp;diff=1690"/>
		<updated>2016-01-22T08:36:20Z</updated>

		<summary type="html">&lt;p&gt;Megen002: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;With many Python packages available, which are often in conflict or requiring different versions depending on application, installing and controlling packages and versions is not always easy. In addition, so many packages are often used only occasionally, that it is questionable whether a system administrator of a centralized server system or a High Performance Compute (HPC) infrastructure can be expected to resolve all issues posed by users of the infrastructure. Even on a local system with full administrative rights managing versions, dependencies, and package collisions is often very difficult. The solution is to use a virtual environment, in which a specific set of packages can then be installed. As many different virtual environments can be created, and used side-by-side, as is necessary. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== creating a new virtual environment ==&lt;br /&gt;
&lt;br /&gt;
If you do not already have a directory in your $HOME dir where your virtual environments live, first make one (it is assumed that you will over the course of time create several virtual environments for different projects and different versions of Python side-by-side, best to organise them a bit).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
mkdir ~/my_envs&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then, load either Python 3.4 or 3.5 module (Python 3.3.3 should also work):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
module load python/3.5.0&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
And then simply create an environment with a reasonably descriptive name (remember, you may accumulate as many as you desire), in this example p35_myproj. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
pyvenv ~/my_envs/p35_myproj&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== activating a virtual environment ==&lt;br /&gt;
Once the environment is created, each time the environment needs to be activated, the following command needs to be issued:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
source ~/my_envs/p35_myproj/bin/activate&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This assumes that the folder that contains the virtual environment documents (in this case called &amp;lt;code&amp;gt;newenv&amp;lt;/code&amp;gt;), is in the present working directory.&lt;br /&gt;
When working on the virtual environment, the virtual environment name will be between brackets in front of the &amp;lt;code&amp;gt;user-host-prompt&amp;lt;/code&amp;gt; string.&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
  (p35_myproj)user@host:~$&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== installing modules on the virtual environment ==&lt;br /&gt;
Installing modules is the same as usual. The difference is that modules are in &amp;lt;code&amp;gt;/path/to/virtenv/lib&amp;lt;/code&amp;gt;, which may be living somewhere on your home directory. An easy way of installing modules is using pip.&lt;br /&gt;
&lt;br /&gt;
Before you start installing modules, first update pip itself:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
pip install --upgrade pip&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
you can then install other modules as you like, for instance numpy:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
pip install numpy&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
(p35_myproj) [user@nfs01 ~]$ pip install numpy&lt;br /&gt;
Collecting numpy&lt;br /&gt;
  Using cached numpy-1.10.4.tar.gz&lt;br /&gt;
Installing collected packages: numpy&lt;br /&gt;
  Running setup.py install for numpy ... done&lt;br /&gt;
Successfully installed numpy-1.10.4&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Similarly, installing packages from source works exactly the same as usual.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
python setup.py install&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== deactivating a virtual environment ==&lt;br /&gt;
Quitting a virtual environment can be done by using the command &amp;lt;code&amp;gt;deactivate&amp;lt;/code&amp;gt;, which was loaded using the &amp;lt;code&amp;gt;source&amp;lt;/code&amp;gt; command upon activating the virtual environment.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
deactivate&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
== Make IPython work under virtualenv ==&lt;br /&gt;
IPython may not work initially under a virtual environment. It may produce an error message like below:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
    File &amp;quot;/usr/bin/ipython&amp;quot;, line 11&lt;br /&gt;
    print &amp;quot;Could not start qtconsole. Please install ipython-qtconsole&amp;quot;&lt;br /&gt;
                                                                      ^&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This can be resolved by adding a soft link with the name &amp;lt;code&amp;gt;ipython&amp;lt;/code&amp;gt; to the &amp;lt;code&amp;gt;bin&amp;lt;/code&amp;gt; directory in the virtual environment folder.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
ln -s /path/to/virtenv/bin/ipython3 /path/to/virtenv/bin/ipython&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
* [[B4F_cluster | B4F Cluster]]&lt;br /&gt;
&lt;br /&gt;
== External links ==&lt;br /&gt;
* [https://pypi.python.org/pypi/virtualenv Python3 documentation for virtualenv]&lt;br /&gt;
* [http://cemcfarland.wordpress.com/2013/03/09/getting-ipython3-working-inside-your-virtualenv/ Solving the IPython hiccup under a virtual environment]&lt;/div&gt;</summary>
		<author><name>Megen002</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=Virtual_environment_Python_3.4_or_higher&amp;diff=1689</id>
		<title>Virtual environment Python 3.4 or higher</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=Virtual_environment_Python_3.4_or_higher&amp;diff=1689"/>
		<updated>2016-01-22T08:29:58Z</updated>

		<summary type="html">&lt;p&gt;Megen002: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;With many Python packages available, which are often in conflict or requiring different versions depending on application, installing and controlling packages and versions is not always easy. In addition, so many packages are often used only occasionally, that it is questionable whether a system administrator of a centralized server system or a High Performance Compute (HPC) infrastructure can be expected to resolve all issues posed by users of the infrastructure. Even on a local system with full administrative rights managing versions, dependencies, and package collisions is often very difficult. The solution is to use a virtual environment, in which a specific set of packages can then be installed. As many different virtual environments can be created, and used side-by-side, as is necessary. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== creating a new virtual environment ==&lt;br /&gt;
&lt;br /&gt;
If you do not already have a directory in your $HOME dir where your virtual environments live, first make one (it is assumed that you will over the course of time create several virtual environments for different projects and different versions of Python side-by-side, best to organise them a bit).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
mkdir ~/my_envs&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then, load either Python 3.4 or 3.5 module (Python 3.3.3 should also work):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
module load python/3.5.0&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
And then simply create an environment with a reasonably descriptive name (remember, you may accumulate as many as you desire), in this example p35_myproj. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
pyvenv ~/my_envs/p35_myproj&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== activating a virtual environment ==&lt;br /&gt;
Once the environment is created, each time the environment needs to be activated, the following command needs to be issued:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
source ~/my_envs/p35_myproj/bin/activate&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This assumes that the folder that contains the virtual environment documents (in this case called &amp;lt;code&amp;gt;newenv&amp;lt;/code&amp;gt;), is in the present working directory.&lt;br /&gt;
When working on the virtual environment, the virtual environment name will be between brackets in front of the &amp;lt;code&amp;gt;user-host-prompt&amp;lt;/code&amp;gt; string.&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
  (p35_myproj)user@host:~$&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== installing modules on the virtual environment ==&lt;br /&gt;
Installing modules is the same as usual. The difference is that modules are in &amp;lt;code&amp;gt;/path/to/virtenv/lib&amp;lt;/code&amp;gt;, which may live somewhere in your home directory. When working from the virtual environment, the default &amp;lt;code&amp;gt;easy_install&amp;lt;/code&amp;gt; belongs to the Python version that is currently active, because the executables in &amp;lt;code&amp;gt;/path/to/virtenv/bin&amp;lt;/code&amp;gt; are in fact the first in the &amp;lt;code&amp;gt;$PATH&amp;lt;/code&amp;gt;.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
easy_install numpy&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Similarly, installing packages from source works exactly the same as usual.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
python setup.py install&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== deactivating a virtual environment ==&lt;br /&gt;
Quitting a virtual environment can be done by using the command &amp;lt;code&amp;gt;deactivate&amp;lt;/code&amp;gt;, which was loaded using the &amp;lt;code&amp;gt;source&amp;lt;/code&amp;gt; command upon activating the virtual environment.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
deactivate&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
== Make IPython work under virtualenv ==&lt;br /&gt;
IPython may not work initially under a virtual environment. It may produce an error message like below:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
    File &amp;quot;/usr/bin/ipython&amp;quot;, line 11&lt;br /&gt;
    print &amp;quot;Could not start qtconsole. Please install ipython-qtconsole&amp;quot;&lt;br /&gt;
                                                                      ^&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This can be resolved by adding a soft link with the name &amp;lt;code&amp;gt;ipython&amp;lt;/code&amp;gt; to the &amp;lt;code&amp;gt;bin&amp;lt;/code&amp;gt; directory in the virtual environment folder.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
ln -s /path/to/virtenv/bin/ipython3 /path/to/virtenv/bin/ipython&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
* [[B4F_cluster | B4F Cluster]]&lt;br /&gt;
&lt;br /&gt;
== External links ==&lt;br /&gt;
* [https://pypi.python.org/pypi/virtualenv Python3 documentation for virtualenv]&lt;br /&gt;
* [http://cemcfarland.wordpress.com/2013/03/09/getting-ipython3-working-inside-your-virtualenv/ Solving the IPython hiccup under a virtual environment]&lt;/div&gt;</summary>
		<author><name>Megen002</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=Virtual_environment_Python_3.4_or_higher&amp;diff=1688</id>
		<title>Virtual environment Python 3.4 or higher</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=Virtual_environment_Python_3.4_or_higher&amp;diff=1688"/>
		<updated>2016-01-22T08:07:28Z</updated>

		<summary type="html">&lt;p&gt;Megen002: Created page with &amp;quot;With many Python packages available, which are often in conflict or requiring different versions depending on application, installing and controlling packages and versions is...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;With many Python packages available, which are often in conflict or requiring different versions depending on application, installing and controlling packages and versions is not always easy. In addition, so many packages are often used only occasionally, that it is questionable whether a system administrator of a centralized server system or a High Performance Compute (HPC) infrastructure can be expected to resolve all issues posed by users of the infrastructure. Even on a local system with full administrative rights managing versions, dependencies, and package collisions is often very difficult. The solution is to use a virtual environment, in which a specific set of packages can then be installed. As many different virtual environments can be created, and used side-by-side, as is necessary. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== creating a new virtual environment ==&lt;br /&gt;
&lt;br /&gt;
If you do not already have a directory in your $HOME dir where your virtual environments live, first make one (it is assumed that you will over the course of time create several virtual environments for different projects and different versions of Python side-by-side, best to organise them a bit).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
mkdir ~/my_envs&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then, load either Python 3.4 or 3.5 module (Python 3.3.3 should also work):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
module load python/3.5.0&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
And then simply create an environment with a reasonably descriptive name (remember, you may accumulate as many as you desire), in this example p35_myproj. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
pyvenv ~/my_envs/p35_myproj&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Should virtualenv not be installed, the virtualenv script can be downloaded and accessed directly:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
curl -O https://pypi.python.org/packages/source/v/virtualenv/virtualenv-1.9.tar.gz&lt;br /&gt;
tar -xzvf virtualenv-1.9.tar.gz &lt;br /&gt;
python3 virtualenv-1.9/virtualenv.py testenv&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When the new environment is created, one will see a message similar to this:&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
  New python executable in newenv/bin/python3&lt;br /&gt;
  Also creating executable in newenv/bin/python&lt;br /&gt;
  Installing Setuptools.........................................................................done.&lt;br /&gt;
  Installing Pip................................................................................done.&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== activating a virtual environment ==&lt;br /&gt;
Once the environment is created, each time the environment needs to be activated, the following command needs to be issued:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
source newenv/bin/activate&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
This assumes that the folder that contains the virtual environment documents (in this case called &amp;lt;code&amp;gt;newenv&amp;lt;/code&amp;gt;), is in the present working directory.&lt;br /&gt;
When working on the virtual environment, the virtual environment name will be between brackets in front of the &amp;lt;code&amp;gt;user-host-prompt&amp;lt;/code&amp;gt; string.&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
  (newenv)user@host:~$&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== installing modules on the virtual environment ==&lt;br /&gt;
Installing modules is the same as usual. The difference is that modules are in &amp;lt;code&amp;gt;/path/to/virtenv/lib&amp;lt;/code&amp;gt;, which may live somewhere in your home directory. When working from the virtual environment, the default &amp;lt;code&amp;gt;easy_install&amp;lt;/code&amp;gt; belongs to the Python version that is currently active, because the executables in &amp;lt;code&amp;gt;/path/to/virtenv/bin&amp;lt;/code&amp;gt; are in fact the first in the &amp;lt;code&amp;gt;$PATH&amp;lt;/code&amp;gt;.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
easy_install numpy&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Similarly, installing packages from source works exactly the same as usual.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
python setup.py install&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== deactivating a virtual environment ==&lt;br /&gt;
Quitting a virtual environment can be done by using the command &amp;lt;code&amp;gt;deactivate&amp;lt;/code&amp;gt;, which was loaded using the &amp;lt;code&amp;gt;source&amp;lt;/code&amp;gt; command upon activating the virtual environment.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
deactivate&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
== Make IPython work under virtualenv ==&lt;br /&gt;
IPython may not work initially under a virtual environment. It may produce an error message like below:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
    File &amp;quot;/usr/bin/ipython&amp;quot;, line 11&lt;br /&gt;
    print &amp;quot;Could not start qtconsole. Please install ipython-qtconsole&amp;quot;&lt;br /&gt;
                                                                      ^&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This can be resolved by adding a soft link with the name &amp;lt;code&amp;gt;ipython&amp;lt;/code&amp;gt; to the &amp;lt;code&amp;gt;bin&amp;lt;/code&amp;gt; directory in the virtual environment folder.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
ln -s /path/to/virtenv/bin/ipython3 /path/to/virtenv/bin/ipython&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
* [[B4F_cluster | B4F Cluster]]&lt;br /&gt;
&lt;br /&gt;
== External links ==&lt;br /&gt;
* [https://pypi.python.org/pypi/virtualenv Python3 documentation for virtualenv]&lt;br /&gt;
* [http://cemcfarland.wordpress.com/2013/03/09/getting-ipython3-working-inside-your-virtualenv/ Solving the IPython hiccup under a virtual environment]&lt;/div&gt;</summary>
		<author><name>Megen002</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=Setting_up_Python_virtualenv&amp;diff=1687</id>
		<title>Setting up Python virtualenv</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=Setting_up_Python_virtualenv&amp;diff=1687"/>
		<updated>2016-01-22T07:57:44Z</updated>

		<summary type="html">&lt;p&gt;Megen002: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;With many Python packages available, which are often in conflict or requiring different versions depending on application, installing and controlling packages and versions is not always easy. In addition, so many packages are often used only occasionally, that it is questionable whether a system administrator of a centralized server system or a High Performance Compute (HPC) infrastructure can be expected to resolve all issues posed by users of the infrastructure. Even on a local system with full administrative rights managing versions, dependencies, and package collisions is often very difficult. The solution is to use a virtual environment, in which a specific set of packages can then be installed. As many different virtual environments can be created, and used side-by-side, as is necessary. &lt;br /&gt;
&lt;br /&gt;
NOTE: as of Python 3.3 virtual environment support is built-in. See this page for an [[virtual_environment_Python_3.4_or_higher | alternative set-up of your virtual environment if using Python 3.4 or higher]].&lt;br /&gt;
&lt;br /&gt;
== creating a new virtual environment ==&lt;br /&gt;
It is assumed that the appropriate &amp;lt;code&amp;gt;virtualenv&amp;lt;/code&amp;gt; executable for the Python version of choice is installed. A new virtual environment, in this case called &amp;lt;code&amp;gt;newenv&amp;lt;/code&amp;gt;, is created like so:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
virtualenv newenv&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Should virtualenv not be installed, the virtualenv script can be downloaded and accessed directly:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
curl -O https://pypi.python.org/packages/source/v/virtualenv/virtualenv-1.9.tar.gz&lt;br /&gt;
tar -xzvf virtualenv-1.9.tar.gz &lt;br /&gt;
python3 virtualenv-1.9/virtualenv.py testenv&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When the new environment is created, one will see a message similar to this:&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
  New python executable in newenv/bin/python3&lt;br /&gt;
  Also creating executable in newenv/bin/python&lt;br /&gt;
  Installing Setuptools.........................................................................done.&lt;br /&gt;
  Installing Pip................................................................................done.&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== activating a virtual environment ==&lt;br /&gt;
Once the environment is created, each time the environment needs to be activated, the following command needs to be issued:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
source newenv/bin/activate&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
This assumes that the folder that contains the virtual environment documents (in this case called &amp;lt;code&amp;gt;newenv&amp;lt;/code&amp;gt;), is in the present working directory.&lt;br /&gt;
When working on the virtual environment, the virtual environment name will be between brackets in front of the &amp;lt;code&amp;gt;user-host-prompt&amp;lt;/code&amp;gt; string.&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
  (newenv)user@host:~$&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== installing modules on the virtual environment ==&lt;br /&gt;
Installing modules is the same as usual. The difference is that modules are in &amp;lt;code&amp;gt;/path/to/virtenv/lib&amp;lt;/code&amp;gt;, which may live somewhere in your home directory. When working from the virtual environment, the default &amp;lt;code&amp;gt;easy_install&amp;lt;/code&amp;gt; belongs to the Python version that is currently active, because the executables in &amp;lt;code&amp;gt;/path/to/virtenv/bin&amp;lt;/code&amp;gt; are in fact the first in the &amp;lt;code&amp;gt;$PATH&amp;lt;/code&amp;gt;.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
easy_install numpy&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
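&lt;br /&gt;
Since the environment comes with its own &amp;lt;code&amp;gt;pip&amp;lt;/code&amp;gt; as well (see the installation message above), packages can also be installed with pip while the environment is active. A short sketch, including a quick check that the environment&#039;s interpreter is indeed the one being used:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
# confirm that the virtual environment&#039;s interpreter is first in $PATH&lt;br /&gt;
which python3&lt;br /&gt;
# install a package into the active environment with pip&lt;br /&gt;
pip install numpy&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;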
Installing packages from source also works as usual.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
python setup.py install&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== deactivating a virtual environment ==&lt;br /&gt;
A virtual environment can be left by issuing the command &amp;lt;code&amp;gt;deactivate&amp;lt;/code&amp;gt;, which was made available by the &amp;lt;code&amp;gt;source&amp;lt;/code&amp;gt; command when the environment was activated.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
deactivate&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
== Make IPython work under virtualenv ==&lt;br /&gt;
IPython may not work initially under a virtual environment. It may produce an error message like the one below:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
    File &amp;quot;/usr/bin/ipython&amp;quot;, line 11&lt;br /&gt;
    print &amp;quot;Could not start qtconsole. Please install ipython-qtconsole&amp;quot;&lt;br /&gt;
                                                                      ^&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This can be resolved by adding a soft link with the name &amp;lt;code&amp;gt;ipython&amp;lt;/code&amp;gt; to the &amp;lt;code&amp;gt;bin&amp;lt;/code&amp;gt; directory in the virtual environment folder.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
ln -s /path/to/virtenv/bin/ipython3 /path/to/virtenv/bin/ipython&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
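&lt;br /&gt;
An alternative that often works is to install IPython inside the active environment itself, so that the environment&#039;s own &amp;lt;code&amp;gt;ipython&amp;lt;/code&amp;gt; executable is found before the system one (a sketch, assuming the environment is active and pip is available in it):&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
# install IPython into the virtual environment&lt;br /&gt;
pip install ipython&lt;br /&gt;
# this should now point at /path/to/virtenv/bin/ipython&lt;br /&gt;
which ipython&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;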
&lt;br /&gt;
== See also ==&lt;br /&gt;
* [[B4F_cluster | B4F Cluster]]&lt;br /&gt;
&lt;br /&gt;
== External links ==&lt;br /&gt;
* [https://pypi.python.org/pypi/virtualenv virtualenv on PyPI (project page and documentation)]&lt;br /&gt;
* [http://cemcfarland.wordpress.com/2013/03/09/getting-ipython3-working-inside-your-virtualenv/ Solving the IPython hiccup under a virtual environment]&lt;/div&gt;</summary>
		<author><name>Megen002</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=List_of_users&amp;diff=1630</id>
		<title>List of users</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=List_of_users&amp;diff=1630"/>
		<updated>2015-12-10T13:42:18Z</updated>

		<summary type="html">&lt;p&gt;Megen002: /* Alumni */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Active users ==&lt;br /&gt;
&lt;br /&gt;
List of users, in alphabetical order by last/family name. Provide some background information by adding it to the &#039;User:username&#039; page.&lt;br /&gt;
&lt;br /&gt;
* [[User:Barris01 | Wes Barris (Cobb)]]&lt;br /&gt;
* [[User:Basti015 | John Bastiaansen (WUR/ABGC)]]&lt;br /&gt;
* [[User:Binsb003 | Rianne van Binsbergen (WUR/ABGC)]]&lt;br /&gt;
* [[User:Bosse014 | Mirte Bosse (WUR/ABGC)]]&lt;br /&gt;
* [[User:Bouwm024 | Aniek Bouwman (WUR/ABGC)]]&lt;br /&gt;
* [[User:Brasc001 | Pim Brascamp (WUR/ABGC)]]&lt;br /&gt;
* [[User:Calus001 | Mario Calus (WUR/ABGC)]]&lt;br /&gt;
* [[User:dongen01 | Henk van Dongen (TOPIGS)]]&lt;br /&gt;
* [[User:Frant001 | Laurent Frantz (WUR/ABGC)]]&lt;br /&gt;
* [[User:Frans004 | Wietse Franssen (WUR/ESG)]]&lt;br /&gt;
* [[User:Haars001 | Jan van Haarst (WUR/PRI)]]&lt;br /&gt;
* [[User:Hulse002 | Ina Hulsegge (WUR/ABGC)]] &lt;br /&gt;
* [[User:Hulze001 | Alex Hulzebosch (WUR/ABGC)]]&lt;br /&gt;
* [[User:Lopes01 | Marcos Soares Lopes (TOPIGS)]]&lt;br /&gt;
* [[User:Madse001 | Ole Madsen (WUR/ABGC)]]&lt;br /&gt;
* [[User:Megen002 | Hendrik-Jan Megens (WUR/ABGC)]]&lt;br /&gt;
* [[User:Nijve002 | Harm Nijveen (WUR/Bioinformatics)]]&lt;br /&gt;
* [[User:Schroo01 | Chris Schrooten (CRV)]]&lt;br /&gt;
* [[User:Schur010 | Anouk Schurink (WUR/ABGC)]]&lt;br /&gt;
* [[User:Smit089 | Sandra Smit (WUR/Bioinformatics)]]&lt;br /&gt;
* [[User:Vande018 | Jeremie Vandenplas (WUR/ABGC)]]&lt;br /&gt;
* [[User:Veerk001 | Roel Veerkamp (WUR/ABGC)]]&lt;br /&gt;
* [[User:Vereij01 | Addie Vereijken (Hendrix Genetics)]]&lt;br /&gt;
&lt;br /&gt;
== FB-ICT Management of the HPC == &lt;br /&gt;
&lt;br /&gt;
* [[User:Dawes001 | Gwen Dawes (WUR, FB-IT) - HPC System Administrator]]&lt;br /&gt;
* [[User:Janss115 | Stephen Janssen (WUR, FB-IT, Service Management)]]&lt;br /&gt;
* [[User:pollm001 | Koen Pollmann (WUR, FB-IT, Infrastructure)]]&lt;br /&gt;
&lt;br /&gt;
== Alumni ==&lt;br /&gt;
* [[User:Bohme001 | Andre ten Böhmer (WUR, FB-IT, Infrastructure)]]&lt;br /&gt;
* [[User:Herrer01 | Juanma Herrero (WUR/ABGC)]]&lt;br /&gt;
* [[User:paude004 | Yogesh Paudel (WUR/ABGC)]]&lt;/div&gt;</summary>
		<author><name>Megen002</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=List_of_users&amp;diff=1629</id>
		<title>List of users</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=List_of_users&amp;diff=1629"/>
		<updated>2015-12-10T13:42:01Z</updated>

		<summary type="html">&lt;p&gt;Megen002: /* Active users */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Active users ==&lt;br /&gt;
&lt;br /&gt;
List of users, in alphabetical order by last/family name. Provide some background information by adding it to the &#039;User:username&#039; page.&lt;br /&gt;
&lt;br /&gt;
* [[User:Barris01 | Wes Barris (Cobb)]]&lt;br /&gt;
* [[User:Basti015 | John Bastiaansen (WUR/ABGC)]]&lt;br /&gt;
* [[User:Binsb003 | Rianne van Binsbergen (WUR/ABGC)]]&lt;br /&gt;
* [[User:Bosse014 | Mirte Bosse (WUR/ABGC)]]&lt;br /&gt;
* [[User:Bouwm024 | Aniek Bouwman (WUR/ABGC)]]&lt;br /&gt;
* [[User:Brasc001 | Pim Brascamp (WUR/ABGC)]]&lt;br /&gt;
* [[User:Calus001 | Mario Calus (WUR/ABGC)]]&lt;br /&gt;
* [[User:dongen01 | Henk van Dongen (TOPIGS)]]&lt;br /&gt;
* [[User:Frant001 | Laurent Frantz (WUR/ABGC)]]&lt;br /&gt;
* [[User:Frans004 | Wietse Franssen (WUR/ESG)]]&lt;br /&gt;
* [[User:Haars001 | Jan van Haarst (WUR/PRI)]]&lt;br /&gt;
* [[User:Hulse002 | Ina Hulsegge (WUR/ABGC)]] &lt;br /&gt;
* [[User:Hulze001 | Alex Hulzebosch (WUR/ABGC)]]&lt;br /&gt;
* [[User:Lopes01 | Marcos Soares Lopes (TOPIGS)]]&lt;br /&gt;
* [[User:Madse001 | Ole Madsen (WUR/ABGC)]]&lt;br /&gt;
* [[User:Megen002 | Hendrik-Jan Megens (WUR/ABGC)]]&lt;br /&gt;
* [[User:Nijve002 | Harm Nijveen (WUR/Bioinformatics)]]&lt;br /&gt;
* [[User:Schroo01 | Chris Schrooten (CRV)]]&lt;br /&gt;
* [[User:Schur010 | Anouk Schurink (WUR/ABGC)]]&lt;br /&gt;
* [[User:Smit089 | Sandra Smit (WUR/Bioinformatics)]]&lt;br /&gt;
* [[User:Vande018 | Jeremie Vandenplas (WUR/ABGC)]]&lt;br /&gt;
* [[User:Veerk001 | Roel Veerkamp (WUR/ABGC)]]&lt;br /&gt;
* [[User:Vereij01 | Addie Vereijken (Hendrix Genetics)]]&lt;br /&gt;
&lt;br /&gt;
== FB-ICT Management of the HPC == &lt;br /&gt;
&lt;br /&gt;
* [[User:Dawes001 | Gwen Dawes (WUR, FB-IT) - HPC System Administrator]]&lt;br /&gt;
* [[User:Janss115 | Stephen Janssen (WUR, FB-IT, Service Management)]]&lt;br /&gt;
* [[User:pollm001 | Koen Pollmann (WUR, FB-IT, Infrastructure)]]&lt;br /&gt;
&lt;br /&gt;
== Alumni ==&lt;br /&gt;
* [[User:Bohme001 | Andre ten Böhmer (WUR, FB-IT, Infrastructure)]]&lt;br /&gt;
* [[User:paude004 | Yogesh Paudel (WUR/ABGC)]]&lt;/div&gt;</summary>
		<author><name>Megen002</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=Main_Page&amp;diff=1592</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=Main_Page&amp;diff=1592"/>
		<updated>2015-03-26T22:33:46Z</updated>

		<summary type="html">&lt;p&gt;Megen002: /* Access Policy */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The Agrogenomics cluster is a [http://en.wikipedia.org/wiki/High-performance_computing High Performance Compute] (HPC) infrastructure hosted by [http://www.wageningenur.nl/nl/activiteit/Opening-High-Performance-Computing-cluster-HPC.htm Wageningen University &amp;amp; Research Centre]. It is open for use for all WUR research groups as well as other organizations, including companies, that have collaborative projects with WUR. &lt;br /&gt;
&lt;br /&gt;
The Agrogenomics HPC was an initiative of the [http://www.breed4food.com/en/breed4food.htm Breed4Food] (B4F) consortium, consisting of the [[About_ABGC | Animal Breeding and Genomics Centre]] (WU-Animal Breeding and Genomics and Wageningen Livestock Research) and four major breeding companies: [http://www.cobb-vantress.com Cobb-Vantress], [https://www.crv4all.nl CRV], [http://www.hendrix-genetics.com Hendrix Genetics], and [http://www.topigs.com TOPIGS]. Currently, in addition to the original partners, the HPC (HPC-Ag) is used by other groups from Wageningen UR (Bioinformatics, Centre for Crop Systems Analysis, Environmental Sciences Group, and Plant Research International) and plant breeding industry (Rijk Zwaan). &lt;br /&gt;
&lt;br /&gt;
== Rationale and Requirements for a new cluster ==&lt;br /&gt;
[[File:Breed4food-logo.jpg|thumb|right|200px|The Breed4Food logo]]&lt;br /&gt;
The Agrogenomics Cluster was originally conceived as the 7th pillar of the [http://www.breed4food.com/en/show/Breed4Food-initiative-reinforces-the-Netherlands-position-as-an-innovative-country-in-animal-breeding-and-genomics.htm Breed4Food programme]. While the other six pillars revolve around specific research themes, the Cluster represents a joint infrastructure. The rationale behind the cluster is to meet the increasing computational needs in the field of genetics and genomics research by creating a joint facility that generates benefits of scale, thereby reducing cost. In addition, the joint infrastructure is intended to facilitate cross-organisational knowledge transfer. In that capacity, the HPC-Ag acts as a joint (virtual) laboratory where researchers - academic and applied - can benefit from each other&#039;s know-how. Lastly, the joint cluster, housed at Wageningen University campus, allows retaining vital and often confidential data sources in a controlled environment, something that cloud services such as Amazon Cloud usually cannot guarantee.&lt;br /&gt;
{{-}}&lt;br /&gt;
&lt;br /&gt;
== Process of acquisition and financing ==&lt;br /&gt;
&lt;br /&gt;
[[File:Signing_CatAgro.png|thumb|left|300px|Petra Caessens, manager operations of CAT-AgroFood, signs the contract of the supplier on August 1st, 2013. Next to her Johan van Arendonk on behalf of Breed4Food.]]&lt;br /&gt;
The Agrogenomics cluster was financed by [http://www.wageningenur.nl/en/Expertise-Services/Facilities/CATAgroFood-3/CATAgroFood-3/News-and-agenda/Show/CATAgroFood-invests-in-a-High-Performance-Computing-cluster.htm CATAgroFood]. The [[B4F_cluster#IT_Workgroup | IT-Workgroup]] formulated a set of requirements that in the end were best met by an offer from [http://www.dell.com/learn/nl/nl/rc1078544/hpcc Dell]. [http://www.clustervision.com ClusterVision] was responsible for installing the cluster at the Theia server centre of FB-ICT.&lt;br /&gt;
{{-}}&lt;br /&gt;
&lt;br /&gt;
== Architecture of the cluster ==&lt;br /&gt;
[[Architecture_of_the_HPC | Main Article: Architecture of the Agrogenomics HPC]]&lt;br /&gt;
[[File:Cluster_scheme.png|thumb|right|600px|Schematic overview of the cluster.]]&lt;br /&gt;
The new Agrogenomics HPC has a classic cluster architecture: a state-of-the-art Parallel File System (PFS), head nodes, and compute nodes (of varying &#039;size&#039;), all connected by superfast network connections (InfiniBand). Implementation of the cluster will be done in stages. The initial stage includes a 600TB PFS, 48 slim nodes of 16 cores and 64GB RAM each, and 2 fat nodes of 64 cores and 1TB RAM each. The overall architecture, which includes two head nodes in a fail-over configuration and an InfiniBand network backbone, can easily be expanded by adding nodes and expanding the PFS. The cluster management software is designed to facilitate a heterogeneous and evolving cluster.&lt;br /&gt;
{{-}}&lt;br /&gt;
&lt;br /&gt;
== Housing at Theia ==&lt;br /&gt;
[[File:Map_Theia.png|thumb|left|200px|Location of Theia, just outside of Wageningen campus]]&lt;br /&gt;
The Agrogenomics Cluster is housed at one of the two main server centres of WUR-FB-IT, near Wageningen Campus. The building (Theia) may not look like much from the outside (it used to serve as potato storage), but inside is a modern server centre that includes, among other things, emergency power backup systems and automated fire extinguishers. Many of the server facilities provided by FB-ICT that are used on a daily basis by WUR personnel and students are located there, as is the Agrogenomics Cluster. Access to Theia is evidently highly restricted and can only be granted in the presence of a representative of FB-IT.&lt;br /&gt;
{{-}}&lt;br /&gt;
{| width=&amp;quot;90%&amp;quot;&lt;br /&gt;
|- valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| width=&amp;quot;10%&amp;quot; |&lt;br /&gt;
&lt;br /&gt;
| width=&amp;quot;30%&amp;quot; |&lt;br /&gt;
[[File:Cluster2_pic.png|thumb|left|220px|Some components of the cluster after unpacking.]]&lt;br /&gt;
| width=&amp;quot;70%&amp;quot; |&lt;br /&gt;
[[File:Cluster_pic.png|thumb|right|400px|The final configuration after installation.]]&lt;br /&gt;
|}&lt;br /&gt;
{{-}}&lt;br /&gt;
&lt;br /&gt;
== Management ==&lt;br /&gt;
[[HPC_management | Main Article: HPC management]]&lt;br /&gt;
&lt;br /&gt;
Project Leader of the HPC is Stephen Janssen (Wageningen UR, FB-IT, Service Management). [[User:pollm001 | Koen Pollmann (Wageningen UR, FB-IT, Infrastructure)]] and [[User:dawes001 | Gwen Dawes (Wageningen UR, FB-IT, Infrastructure)]] are responsible for [[Maintenance_and_Management | Maintenance and Management]].&lt;br /&gt;
&lt;br /&gt;
== Access Policy ==&lt;br /&gt;
[[Access_Policy | Main Article: Access Policy]]&lt;br /&gt;
&lt;br /&gt;
Access needs to be granted actively (by creation of an account on the cluster by FB-IT). Use of resources is limited by the scheduler. Depending on availability of queues (&#039;partitions&#039;) granted to a user, priority to the system&#039;s resources is regulated. Note that the use of the HPC-Ag is not free of charge. List price of CPU time and storage, and possible discounts on that list price for your organisation, can be retrieved from CAT-AGRO or FB-ICT.&lt;br /&gt;
&lt;br /&gt;
== Users ==&lt;br /&gt;
&lt;br /&gt;
* [[List_of_users | List of users (alphabetical order)]]&lt;br /&gt;
* [[Mailinglist | Electronic mail discussion lists]]&lt;br /&gt;
&lt;br /&gt;
== Using the HPC-Ag ==&lt;br /&gt;
=== Gaining access to the HPC-Ag ===&lt;br /&gt;
Access to the cluster and file transfer are done by [http://en.wikipedia.org/wiki/Secure_Shell ssh-based protocols].&lt;br /&gt;
* [[log_in_to_B4F_cluster | Logging into cluster using ssh and file transfer]]&lt;br /&gt;
&lt;br /&gt;
=== Cluster Management Software and Scheduler ===&lt;br /&gt;
The HPC-Ag uses Bright Cluster Manager software for overall cluster management, and Slurm as job scheduler.&lt;br /&gt;
* [[BCM_on_B4F_cluster | Monitor cluster status with BCM]]&lt;br /&gt;
* [[SLURM_on_B4F_cluster | Submit jobs with Slurm]]&lt;br /&gt;
* [[SLURM_Compare | Rosetta Stone of Workload Managers]]&lt;br /&gt;
&lt;br /&gt;
=== Installation of software by users ===&lt;br /&gt;
&lt;br /&gt;
* [[Domain_specific_software_on_B4Fcluster_installation_by_users | Installing domain specific software: installation by users]]&lt;br /&gt;
* [[Setting local variables]]&lt;br /&gt;
* [[Installing_R_packages_locally | Installing R packages locally]]&lt;br /&gt;
* [[Setting_up_Python_virtualenv | Setting up and using a virtual environment for Python3 ]]&lt;br /&gt;
&lt;br /&gt;
=== Installed software ===&lt;br /&gt;
&lt;br /&gt;
* [[Globally_installed_software | Globally installed software]]&lt;br /&gt;
* [[ABGC_modules | ABGC specific modules]]&lt;br /&gt;
&lt;br /&gt;
=== Being in control of Environment parameters ===&lt;br /&gt;
&lt;br /&gt;
* [[Using_environment_modules | Using environment modules]]&lt;br /&gt;
* [[Setting local variables]]&lt;br /&gt;
* [[Setting_TMPDIR | Set a custom temporary directory location]]&lt;br /&gt;
* [[Installing_R_packages_locally | Installing R packages locally]]&lt;br /&gt;
* [[Setting_up_Python_virtualenv | Setting up and using a virtual environment for Python3 ]]&lt;br /&gt;
&lt;br /&gt;
=== Controlling costs ===&lt;br /&gt;
&lt;br /&gt;
* [[SACCT | using SACCT to see your costs]]&lt;br /&gt;
* [[get_my_bill | using the &amp;quot;get_my_bill&amp;quot; script to estimate costs]]&lt;br /&gt;
&lt;br /&gt;
== Miscellaneous ==&lt;br /&gt;
* [[Bioinformatics_tips_tricks_workflows | Bioinformatics tips, tricks, and workflows]]&lt;br /&gt;
* [[Convert_between_MediaWiki_and_other_formats | Convert between MediaWiki format and other formats]]&lt;br /&gt;
* [[Manual GitLab | GitLab: Create projects and add scripts]]&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
* [[Maintenance_and_Management | Maintenance and Management]]&lt;br /&gt;
* [[Mailinglist | Electronic mail discussion lists]]&lt;br /&gt;
* [[About_ABGC | About ABGC]]&lt;br /&gt;
* [[Computer_cluster | High Performance Computing @ABGC]]&lt;br /&gt;
* [[Lustre_PFS_layout | Lustre Parallel File System layout]]&lt;br /&gt;
&lt;br /&gt;
== External links ==&lt;br /&gt;
{| width=&amp;quot;90%&amp;quot;&lt;br /&gt;
|- valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| width=&amp;quot;30%&amp;quot; |&lt;br /&gt;
* [http://www.breed4food.com/en/show/Breed4Food-initiative-reinforces-the-Netherlands-position-as-an-innovative-country-in-animal-breeding-and-genomics.htm Breed4Food programme]&lt;br /&gt;
* [http://www.wageningenur.nl/en/Expertise-Services/Facilities/CATAgroFood-3/CATAgroFood-3/Our-facilities/Show/High-Performance-Computing-Cluster-HPC.htm CATAgroFood offers an HPC facility]&lt;br /&gt;
* [http://www.cobb-vantress.com Cobb-Vantress homepage]&lt;br /&gt;
&lt;br /&gt;
| width=&amp;quot;30%&amp;quot; |&lt;br /&gt;
* [https://www.crv4all.nl CRV homepage]&lt;br /&gt;
* [http://www.hendrix-genetics.com Hendrix Genetics homepage]&lt;br /&gt;
* [http://www.topigs.com TOPIGS homepage]&lt;br /&gt;
| width=&amp;quot;30%&amp;quot; |&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Scientific_Linux Scientific Linux]&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Help:Cheatsheet Help with editing Wiki pages]&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Megen002</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=Main_Page&amp;diff=1591</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=Main_Page&amp;diff=1591"/>
		<updated>2015-03-26T22:26:53Z</updated>

		<summary type="html">&lt;p&gt;Megen002: /* Access Policy */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The Agrogenomics cluster is a [http://en.wikipedia.org/wiki/High-performance_computing High Performance Compute] (HPC) infrastructure hosted by [http://www.wageningenur.nl/nl/activiteit/Opening-High-Performance-Computing-cluster-HPC.htm Wageningen University &amp;amp; Research Centre]. It is open for use for all WUR research groups as well as other organizations, including companies, that have collaborative projects with WUR. &lt;br /&gt;
&lt;br /&gt;
The Agrogenomics HPC was an initiative of the [http://www.breed4food.com/en/breed4food.htm Breed4Food] (B4F) consortium, consisting of the [[About_ABGC | Animal Breeding and Genomics Centre]] (WU-Animal Breeding and Genomics and Wageningen Livestock Research) and four major breeding companies: [http://www.cobb-vantress.com Cobb-Vantress], [https://www.crv4all.nl CRV], [http://www.hendrix-genetics.com Hendrix Genetics], and [http://www.topigs.com TOPIGS]. Currently, in addition to the original partners, the HPC (HPC-Ag) is used by other groups from Wageningen UR (Bioinformatics, Centre for Crop Systems Analysis, Environmental Sciences Group, and Plant Research International) and plant breeding industry (Rijk Zwaan). &lt;br /&gt;
&lt;br /&gt;
== Rationale and Requirements for a new cluster ==&lt;br /&gt;
[[File:Breed4food-logo.jpg|thumb|right|200px|The Breed4Food logo]]&lt;br /&gt;
The Agrogenomics Cluster was originally conceived as the 7th pillar of the [http://www.breed4food.com/en/show/Breed4Food-initiative-reinforces-the-Netherlands-position-as-an-innovative-country-in-animal-breeding-and-genomics.htm Breed4Food programme]. While the other six pillars revolve around specific research themes, the Cluster represents a joint infrastructure. The rationale behind the cluster is to meet the increasing computational needs in the field of genetics and genomics research by creating a joint facility that generates benefits of scale, thereby reducing cost. In addition, the joint infrastructure is intended to facilitate cross-organisational knowledge transfer. In that capacity, the HPC-Ag acts as a joint (virtual) laboratory where researchers - academic and applied - can benefit from each other&#039;s know-how. Lastly, the joint cluster, housed at Wageningen University campus, allows retaining vital and often confidential data sources in a controlled environment, something that cloud services such as Amazon Cloud usually cannot guarantee.&lt;br /&gt;
{{-}}&lt;br /&gt;
&lt;br /&gt;
== Process of acquisition and financing ==&lt;br /&gt;
&lt;br /&gt;
[[File:Signing_CatAgro.png|thumb|left|300px|Petra Caessens, manager operations of CAT-AgroFood, signs the contract of the supplier on August 1st, 2013. Next to her Johan van Arendonk on behalf of Breed4Food.]]&lt;br /&gt;
The Agrogenomics cluster was financed by [http://www.wageningenur.nl/en/Expertise-Services/Facilities/CATAgroFood-3/CATAgroFood-3/News-and-agenda/Show/CATAgroFood-invests-in-a-High-Performance-Computing-cluster.htm CATAgroFood]. The [[B4F_cluster#IT_Workgroup | IT-Workgroup]] formulated a set of requirements that in the end were best met by an offer from [http://www.dell.com/learn/nl/nl/rc1078544/hpcc Dell]. [http://www.clustervision.com ClusterVision] was responsible for installing the cluster at the Theia server centre of FB-ICT.&lt;br /&gt;
{{-}}&lt;br /&gt;
&lt;br /&gt;
== Architecture of the cluster ==&lt;br /&gt;
[[Architecture_of_the_HPC | Main Article: Architecture of the Agrogenomics HPC]]&lt;br /&gt;
[[File:Cluster_scheme.png|thumb|right|600px|Schematic overview of the cluster.]]&lt;br /&gt;
The new Agrogenomics HPC has a classic cluster architecture: a state-of-the-art Parallel File System (PFS), head nodes, and compute nodes (of varying &#039;size&#039;), all connected by superfast network connections (InfiniBand). Implementation of the cluster will be done in stages. The initial stage includes a 600TB PFS, 48 slim nodes of 16 cores and 64GB RAM each, and 2 fat nodes of 64 cores and 1TB RAM each. The overall architecture, which includes two head nodes in a fail-over configuration and an InfiniBand network backbone, can easily be expanded by adding nodes and expanding the PFS. The cluster management software is designed to facilitate a heterogeneous and evolving cluster.&lt;br /&gt;
{{-}}&lt;br /&gt;
&lt;br /&gt;
== Housing at Theia ==&lt;br /&gt;
[[File:Map_Theia.png|thumb|left|200px|Location of Theia, just outside of Wageningen campus]]&lt;br /&gt;
The Agrogenomics Cluster is housed at one of the two main server centres of WUR-FB-IT, near Wageningen Campus. The building (Theia) may not look like much from the outside (it used to serve as potato storage), but inside is a modern server centre that includes, among other things, emergency power backup systems and automated fire extinguishers. Many of the server facilities provided by FB-ICT that are used on a daily basis by WUR personnel and students are located there, as is the Agrogenomics Cluster. Access to Theia is evidently highly restricted and can only be granted in the presence of a representative of FB-IT.&lt;br /&gt;
{{-}}&lt;br /&gt;
{| width=&amp;quot;90%&amp;quot;&lt;br /&gt;
|- valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| width=&amp;quot;10%&amp;quot; |&lt;br /&gt;
&lt;br /&gt;
| width=&amp;quot;30%&amp;quot; |&lt;br /&gt;
[[File:Cluster2_pic.png|thumb|left|220px|Some components of the cluster after unpacking.]]&lt;br /&gt;
| width=&amp;quot;70%&amp;quot; |&lt;br /&gt;
[[File:Cluster_pic.png|thumb|right|400px|The final configuration after installation.]]&lt;br /&gt;
|}&lt;br /&gt;
{{-}}&lt;br /&gt;
&lt;br /&gt;
== Management ==&lt;br /&gt;
[[HPC_management | Main Article: HPC management]]&lt;br /&gt;
&lt;br /&gt;
Project Leader of the HPC is Stephen Janssen (Wageningen UR, FB-IT, Service Management). [[User:pollm001 | Koen Pollmann (Wageningen UR, FB-IT, Infrastructure)]] and [[User:dawes001 | Gwen Dawes (Wageningen UR, FB-IT, Infrastructure)]] are responsible for [[Maintenance_and_Management | Maintenance and Management]].&lt;br /&gt;
&lt;br /&gt;
== Access Policy ==&lt;br /&gt;
[[Access_Policy | Main Article: Access Policy]]&lt;br /&gt;
&lt;br /&gt;
Access needs to be granted actively (by creation of an account on the cluster by FB-IT). Use of resources is limited by the scheduler. Depending on availability of queues (&#039;partitions&#039;) granted to a user, priority to the system&#039;s resources is regulated. Note that the use of the AgHPC is not free of charge. List price of CPU time and storage, and possible discounts on that list price for your organisation, can be retrieved from CAT-AGRO or FB-ICT.&lt;br /&gt;
&lt;br /&gt;
== Users ==&lt;br /&gt;
&lt;br /&gt;
* [[List_of_users | List of users (alphabetical order)]]&lt;br /&gt;
* [[Mailinglist | Electronic mail discussion lists]]&lt;br /&gt;
&lt;br /&gt;
== Using the HPC-Ag ==&lt;br /&gt;
=== Gaining access to the HPC-Ag ===&lt;br /&gt;
Access to the cluster and file transfer are done by [http://en.wikipedia.org/wiki/Secure_Shell ssh-based protocols].&lt;br /&gt;
* [[log_in_to_B4F_cluster | Logging into cluster using ssh and file transfer]]&lt;br /&gt;
&lt;br /&gt;
=== Cluster Management Software and Scheduler ===&lt;br /&gt;
The HPC-Ag uses Bright Cluster Manager software for overall cluster management, and Slurm as job scheduler.&lt;br /&gt;
* [[BCM_on_B4F_cluster | Monitor cluster status with BCM]]&lt;br /&gt;
* [[SLURM_on_B4F_cluster | Submit jobs with Slurm]]&lt;br /&gt;
* [[SLURM_Compare | Rosetta Stone of Workload Managers]]&lt;br /&gt;
&lt;br /&gt;
=== Installation of software by users ===&lt;br /&gt;
&lt;br /&gt;
* [[Domain_specific_software_on_B4Fcluster_installation_by_users | Installing domain specific software: installation by users]]&lt;br /&gt;
* [[Setting local variables]]&lt;br /&gt;
* [[Installing_R_packages_locally | Installing R packages locally]]&lt;br /&gt;
* [[Setting_up_Python_virtualenv | Setting up and using a virtual environment for Python3 ]]&lt;br /&gt;
&lt;br /&gt;
=== Installed software ===&lt;br /&gt;
&lt;br /&gt;
* [[Globally_installed_software | Globally installed software]]&lt;br /&gt;
* [[ABGC_modules | ABGC specific modules]]&lt;br /&gt;
&lt;br /&gt;
=== Being in control of Environment parameters ===&lt;br /&gt;
&lt;br /&gt;
* [[Using_environment_modules | Using environment modules]]&lt;br /&gt;
* [[Setting local variables]]&lt;br /&gt;
* [[Setting_TMPDIR | Set a custom temporary directory location]]&lt;br /&gt;
* [[Installing_R_packages_locally | Installing R packages locally]]&lt;br /&gt;
* [[Setting_up_Python_virtualenv | Setting up and using a virtual environment for Python3 ]]&lt;br /&gt;
&lt;br /&gt;
=== Controlling costs ===&lt;br /&gt;
&lt;br /&gt;
* [[SACCT | using SACCT to see your costs]]&lt;br /&gt;
* [[get_my_bill | using the &amp;quot;get_my_bill&amp;quot; script to estimate costs]]&lt;br /&gt;
&lt;br /&gt;
== Miscellaneous ==&lt;br /&gt;
* [[Bioinformatics_tips_tricks_workflows | Bioinformatics tips, tricks, and workflows]]&lt;br /&gt;
* [[Convert_between_MediaWiki_and_other_formats | Convert between MediaWiki format and other formats]]&lt;br /&gt;
* [[Manual GitLab | GitLab: Create projects and add scripts]]&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
* [[Maintenance_and_Management | Maintenance and Management]]&lt;br /&gt;
* [[Mailinglist | Electronic mail discussion lists]]&lt;br /&gt;
* [[About_ABGC | About ABGC]]&lt;br /&gt;
* [[Computer_cluster | High Performance Computing @ABGC]]&lt;br /&gt;
* [[Lustre_PFS_layout | Lustre Parallel File System layout]]&lt;br /&gt;
&lt;br /&gt;
== External links ==&lt;br /&gt;
{| width=&amp;quot;90%&amp;quot;&lt;br /&gt;
|- valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| width=&amp;quot;30%&amp;quot; |&lt;br /&gt;
* [http://www.breed4food.com/en/show/Breed4Food-initiative-reinforces-the-Netherlands-position-as-an-innovative-country-in-animal-breeding-and-genomics.htm Breed4Food programme]&lt;br /&gt;
* [http://www.wageningenur.nl/en/Expertise-Services/Facilities/CATAgroFood-3/CATAgroFood-3/Our-facilities/Show/High-Performance-Computing-Cluster-HPC.htm CATAgroFood offers an HPC facility]&lt;br /&gt;
* [http://www.cobb-vantress.com Cobb-Vantress homepage]&lt;br /&gt;
&lt;br /&gt;
| width=&amp;quot;30%&amp;quot; |&lt;br /&gt;
* [https://www.crv4all.nl CRV homepage]&lt;br /&gt;
* [http://www.hendrix-genetics.com Hendrix Genetics homepage]&lt;br /&gt;
* [http://www.topigs.com TOPIGS homepage]&lt;br /&gt;
| width=&amp;quot;30%&amp;quot; |&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Scientific_Linux Scientific Linux]&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Help:Cheatsheet Help with editing Wiki pages]&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Megen002</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=Main_Page&amp;diff=1590</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=Main_Page&amp;diff=1590"/>
		<updated>2015-03-26T22:24:25Z</updated>

		<summary type="html">&lt;p&gt;Megen002: /* Access Policy */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The Agrogenomics cluster is a [http://en.wikipedia.org/wiki/High-performance_computing High Performance Compute] (HPC) infrastructure hosted by [http://www.wageningenur.nl/nl/activiteit/Opening-High-Performance-Computing-cluster-HPC.htm Wageningen University &amp;amp; Research Centre]. It is open for use for all WUR research groups as well as other organizations, including companies, that have collaborative projects with WUR. &lt;br /&gt;
&lt;br /&gt;
The Agrogenomics HPC was an initiative of the [http://www.breed4food.com/en/breed4food.htm Breed4Food] (B4F) consortium, consisting of the [[About_ABGC | Animal Breeding and Genomics Centre]] (WU-Animal Breeding and Genomics and Wageningen Livestock Research) and four major breeding companies: [http://www.cobb-vantress.com Cobb-Vantress], [https://www.crv4all.nl CRV], [http://www.hendrix-genetics.com Hendrix Genetics], and [http://www.topigs.com TOPIGS]. Currently, in addition to the original partners, the HPC (HPC-Ag) is used by other groups from Wageningen UR (Bioinformatics, Centre for Crop Systems Analysis, Environmental Sciences Group, and Plant Research International) and plant breeding industry (Rijk Zwaan). &lt;br /&gt;
&lt;br /&gt;
== Rationale and Requirements for a new cluster ==&lt;br /&gt;
[[File:Breed4food-logo.jpg|thumb|right|200px|The Breed4Food logo]]&lt;br /&gt;
The Agrogenomics Cluster was originally conceived as the 7th pillar of the [http://www.breed4food.com/en/show/Breed4Food-initiative-reinforces-the-Netherlands-position-as-an-innovative-country-in-animal-breeding-and-genomics.htm Breed4Food programme]. While the other six pillars revolve around specific research themes, the Cluster represents a joint infrastructure. The rationale behind the cluster is to meet the increasing computational needs in the field of genetics and genomics research by creating a joint facility that generates benefits of scale, thereby reducing cost. In addition, the joint infrastructure is intended to facilitate cross-organisational knowledge transfer. In that capacity, the HPC-Ag acts as a joint (virtual) laboratory where researchers - academic and applied - can benefit from each other&#039;s know-how. Lastly, the joint cluster, housed at Wageningen University campus, allows retaining vital and often confidential data sources in a controlled environment, something that cloud services such as Amazon Cloud usually cannot guarantee.&lt;br /&gt;
{{-}}&lt;br /&gt;
&lt;br /&gt;
== Process of acquisition and financing ==&lt;br /&gt;
&lt;br /&gt;
[[File:Signing_CatAgro.png|thumb|left|300px|Petra Caessens, manager operations of CAT-AgroFood, signs the contract of the supplier on August 1st, 2013. Next to her Johan van Arendonk on behalf of Breed4Food.]]&lt;br /&gt;
The Agrogenomics cluster was financed by [http://www.wageningenur.nl/en/Expertise-Services/Facilities/CATAgroFood-3/CATAgroFood-3/News-and-agenda/Show/CATAgroFood-invests-in-a-High-Performance-Computing-cluster.htm CATAgroFood]. The [[B4F_cluster#IT_Workgroup | IT-Workgroup]] formulated a set of requirements that in the end were best met by an offer from [http://www.dell.com/learn/nl/nl/rc1078544/hpcc Dell]. [http://www.clustervision.com ClusterVision] was responsible for installing the cluster at the Theia server centre of FB-ICT.&lt;br /&gt;
{{-}}&lt;br /&gt;
&lt;br /&gt;
== Architecture of the cluster ==&lt;br /&gt;
[[Architecture_of_the_HPC | Main Article: Architecture of the Agrogenomics HPC]]&lt;br /&gt;
[[File:Cluster_scheme.png|thumb|right|600px|Schematic overview of the cluster.]]&lt;br /&gt;
The new Agrogenomics HPC has a classic cluster architecture: a state-of-the-art Parallel File System (PFS), head nodes, and compute nodes (of varying &#039;size&#039;), all connected by superfast network connections (InfiniBand). Implementation of the cluster will be done in stages. The initial stage includes a 600TB PFS, 48 slim nodes of 16 cores and 64GB RAM each, and 2 fat nodes of 64 cores and 1TB RAM each. The overall architecture, which includes two head nodes in a fail-over configuration and an InfiniBand network backbone, can easily be expanded by adding nodes and expanding the PFS. The cluster management software is designed to facilitate a heterogeneous and evolving cluster.&lt;br /&gt;
{{-}}&lt;br /&gt;
&lt;br /&gt;
== Housing at Theia ==&lt;br /&gt;
[[File:Map_Theia.png|thumb|left|200px|Location of Theia, just outside of Wageningen campus]]&lt;br /&gt;
The Agrogenomics Cluster is housed at one of the two main server centres of WUR-FB-IT, near Wageningen Campus. The building (Theia) may not look like much from the outside (it used to serve as potato storage), but inside is a modern server centre that includes, among other things, emergency power backup systems and automated fire extinguishers. Many of the server facilities provided by FB-ICT that are used on a daily basis by WUR personnel and students are located there, as is the Agrogenomics Cluster. Access to Theia is evidently highly restricted and can only be granted in the presence of a representative of FB-IT.&lt;br /&gt;
{{-}}&lt;br /&gt;
{| width=&amp;quot;90%&amp;quot;&lt;br /&gt;
|- valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| width=&amp;quot;10%&amp;quot; |&lt;br /&gt;
&lt;br /&gt;
| width=&amp;quot;30%&amp;quot; |&lt;br /&gt;
[[File:Cluster2_pic.png|thumb|left|220px|Some components of the cluster after unpacking.]]&lt;br /&gt;
| width=&amp;quot;70%&amp;quot; |&lt;br /&gt;
[[File:Cluster_pic.png|thumb|right|400px|The final configuration after installation.]]&lt;br /&gt;
|}&lt;br /&gt;
{{-}}&lt;br /&gt;
&lt;br /&gt;
== Management ==&lt;br /&gt;
[[HPC_management | Main Article: HPC management]]&lt;br /&gt;
&lt;br /&gt;
Project Leader of the HPC is Stephen Janssen (Wageningen UR, FB-IT, Service Management). [[User:pollm001 | Koen Pollmann (Wageningen UR, FB-IT, Infrastructure)]] and [[User:dawes001 | Gwen Dawes (Wageningen UR, FB-IT, Infrastructure)]] are responsible for [[Maintenance_and_Management | Maintenance and Management]].&lt;br /&gt;
&lt;br /&gt;
== Access Policy ==&lt;br /&gt;
[[Access_Policy | Main Article: Access Policy]]&lt;br /&gt;
&lt;br /&gt;
Access needs to be granted actively (by creation of an account on the cluster by FB-IT). Use of resources is limited by the scheduler. Depending on availability of queues (&#039;partitions&#039;) granted to a user, priority to the system&#039;s resources is regulated.&lt;br /&gt;
&lt;br /&gt;
== Users ==&lt;br /&gt;
&lt;br /&gt;
* [[List_of_users | List of users (alphabetical order)]]&lt;br /&gt;
* [[Mailinglist | Electronic mail discussion lists]]&lt;br /&gt;
&lt;br /&gt;
== Using the HPC-Ag ==&lt;br /&gt;
=== Gaining access to the HPC-Ag ===&lt;br /&gt;
Access to the cluster and file transfer are done by [http://en.wikipedia.org/wiki/Secure_Shell ssh-based protocols].&lt;br /&gt;
* [[log_in_to_B4F_cluster | Logging into cluster using ssh and file transfer]]&lt;br /&gt;
&lt;br /&gt;
=== Cluster Management Software and Scheduler ===&lt;br /&gt;
The HPC-Ag uses Bright Cluster Manager software for overall cluster management, and Slurm as job scheduler.&lt;br /&gt;
* [[BCM_on_B4F_cluster | Monitor cluster status with BCM]]&lt;br /&gt;
* [[SLURM_on_B4F_cluster | Submit jobs with Slurm]]&lt;br /&gt;
* [[SLURM_Compare | Rosetta Stone of Workload Managers]]&lt;br /&gt;
&lt;br /&gt;
=== Installation of software by users ===&lt;br /&gt;
&lt;br /&gt;
* [[Domain_specific_software_on_B4Fcluster_installation_by_users | Installing domain specific software: installation by users]]&lt;br /&gt;
* [[Setting local variables]]&lt;br /&gt;
* [[Installing_R_packages_locally | Installing R packages locally]]&lt;br /&gt;
* [[Setting_up_Python_virtualenv | Setting up and using a virtual environment for Python3 ]]&lt;br /&gt;
&lt;br /&gt;
=== Installed software ===&lt;br /&gt;
&lt;br /&gt;
* [[Globally_installed_software | Globally installed software]]&lt;br /&gt;
* [[ABGC_modules | ABGC specific modules]]&lt;br /&gt;
&lt;br /&gt;
=== Being in control of Environment parameters ===&lt;br /&gt;
&lt;br /&gt;
* [[Using_environment_modules | Using environment modules]]&lt;br /&gt;
* [[Setting local variables]]&lt;br /&gt;
* [[Setting_TMPDIR | Set a custom temporary directory location]]&lt;br /&gt;
* [[Installing_R_packages_locally | Installing R packages locally]]&lt;br /&gt;
* [[Setting_up_Python_virtualenv | Setting up and using a virtual environment for Python3 ]]&lt;br /&gt;
&lt;br /&gt;
=== Controlling costs ===&lt;br /&gt;
&lt;br /&gt;
* [[SACCT | using SACCT to see your costs]]&lt;br /&gt;
* [[get_my_bill | using the &amp;quot;get_my_bill&amp;quot; script to estimate costs]]&lt;br /&gt;
&lt;br /&gt;
== Miscellaneous ==&lt;br /&gt;
* [[Bioinformatics_tips_tricks_workflows | Bioinformatics tips, tricks, and workflows]]&lt;br /&gt;
* [[Convert_between_MediaWiki_and_other_formats | Convert between MediaWiki format and other formats]]&lt;br /&gt;
* [[Manual GitLab | GitLab: Create projects and add scripts]]&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
* [[Maintenance_and_Management | Maintenance and Management]]&lt;br /&gt;
* [[Mailinglist | Electronic mail discussion lists]]&lt;br /&gt;
* [[About_ABGC | About ABGC]]&lt;br /&gt;
* [[Computer_cluster | High Performance Computing @ABGC]]&lt;br /&gt;
* [[Lustre_PFS_layout | Lustre Parallel File System layout]]&lt;br /&gt;
&lt;br /&gt;
== External links ==&lt;br /&gt;
{| width=&amp;quot;90%&amp;quot;&lt;br /&gt;
|- valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| width=&amp;quot;30%&amp;quot; |&lt;br /&gt;
* [http://www.breed4food.com/en/show/Breed4Food-initiative-reinforces-the-Netherlands-position-as-an-innovative-country-in-animal-breeding-and-genomics.htm Breed4Food programme]&lt;br /&gt;
* [http://www.wageningenur.nl/en/Expertise-Services/Facilities/CATAgroFood-3/CATAgroFood-3/Our-facilities/Show/High-Performance-Computing-Cluster-HPC.htm CATAgroFood offers an HPC facility]&lt;br /&gt;
* [http://www.cobb-vantress.com Cobb-Vantress homepage]&lt;br /&gt;
&lt;br /&gt;
| width=&amp;quot;30%&amp;quot; |&lt;br /&gt;
* [https://www.crv4all.nl CRV homepage]&lt;br /&gt;
* [http://www.hendrix-genetics.com Hendrix Genetics homepage]&lt;br /&gt;
* [http://www.topigs.com TOPIGS homepage]&lt;br /&gt;
| width=&amp;quot;30%&amp;quot; |&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Scientific_Linux Scientific Linux]&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Help:Cheatsheet Help with editing Wiki pages]&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Megen002</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=Manual_GitLab&amp;diff=1589</id>
		<title>Manual GitLab</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=Manual_GitLab&amp;diff=1589"/>
		<updated>2015-03-26T22:08:45Z</updated>

		<summary type="html">&lt;p&gt;Megen002: /* Example of local commands to execute */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Manual GitLab@WUR: Create projects and add files&lt;br /&gt;
&lt;br /&gt;
== Signing up ==&lt;br /&gt;
If you haven&#039;t done so already, first sign up at GitLab@WUR:&lt;br /&gt;
  https://git.wageningenur.nl&lt;br /&gt;
&lt;br /&gt;
== Example of local commands to execute ==&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
## This shows, step by step, how to create a project and add files to it.&lt;br /&gt;
&lt;br /&gt;
## Configuration:&lt;br /&gt;
&lt;br /&gt;
# 1. Create a folder on your machine, e.g.:&lt;br /&gt;
mkdir ~/Git_Stuff&lt;br /&gt;
&lt;br /&gt;
# 2. configuration step 1: &lt;br /&gt;
git config --global user.name &amp;quot;Herrero Medrano, Juan&amp;quot;&lt;br /&gt;
git config --global user.email &amp;quot;juan.herreromedrano@wur.nl&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# 3. configuration step 2: make private-public rsa key pair&lt;br /&gt;
ssh-keygen -t rsa&lt;br /&gt;
cd ~/.ssh/&lt;br /&gt;
cat id_rsa.pub&lt;br /&gt;
# Copy the code and go to the git-web: Profile settings -&amp;gt; SSH Keys. Paste the code and add key. &lt;br /&gt;
# IMPORTANT: make sure you use the *public* key, not the private part of the key pair!&lt;br /&gt;
# Go back to the terminal:&lt;br /&gt;
cd ~/Git_Stuff&lt;br /&gt;
git clone git@git.wageningenur.nl:ABGC_Genomics/Turkey_Association.git&lt;br /&gt;
# Once I have made this connection, my project will appear as a folder in ~/Git_Stuff&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
## Add files:&lt;br /&gt;
# Add scripts to the project &amp;quot;Turkey_Association&amp;quot; :&lt;br /&gt;
cp myscript.sh ~/Git_Stuff/Turkey_Association&lt;br /&gt;
cd ~/Git_Stuff/Turkey_Association&lt;br /&gt;
git add myscript.sh&lt;br /&gt;
git commit -m &amp;quot;myfirst_commit&amp;quot;&lt;br /&gt;
git push origin master&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
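&lt;br /&gt;
A few optional checks that can help when something goes wrong (a generic Git/GitLab sketch, not part of the steps above; the exact server response may differ):&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
# test that the SSH key is accepted by the GitLab server&lt;br /&gt;
ssh -T git@git.wageningenur.nl&lt;br /&gt;
# see which files are staged or modified before committing&lt;br /&gt;
git status&lt;br /&gt;
# fetch and merge changes made by others before pushing&lt;br /&gt;
git pull origin master&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;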
&lt;br /&gt;
== See also ==&lt;br /&gt;
[[Bioinformatics_protocols_ABG_Chairgroup | Bioinformatics tips, tricks, workflows at ABGC]]&lt;br /&gt;
&lt;br /&gt;
== External links ==&lt;br /&gt;
*[http://en.wikipedia.org/wiki/Git_(software) Wikipedia entry on Git]&lt;br /&gt;
*[https://github.com GitHub, a public Git repository (not to be confused with GitLab@WUR, but uses the same Git versioning software)]&lt;/div&gt;</summary>
		<author><name>Megen002</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=Manual_GitLab&amp;diff=1588</id>
		<title>Manual GitLab</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=Manual_GitLab&amp;diff=1588"/>
		<updated>2015-03-26T22:03:56Z</updated>

		<summary type="html">&lt;p&gt;Megen002: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Manual GitLab@WUR: Create projects and add files&lt;br /&gt;
&lt;br /&gt;
== Signing up ==&lt;br /&gt;
If you haven&#039;t done so already, first sign up at GitLab@WUR:&lt;br /&gt;
  https://git.wageningenur.nl&lt;br /&gt;
&lt;br /&gt;
== Example of local commands to execute ==&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
## This shows, step by step, how to create a project and add files to it.&lt;br /&gt;
&lt;br /&gt;
## Configuration:&lt;br /&gt;
&lt;br /&gt;
# 1. Create a folder on your machine:&lt;br /&gt;
mkdir /home/juanma/Git_Stuff&lt;br /&gt;
&lt;br /&gt;
# 2. configuration step 1: Copy into that folder the commands from the git-web: https://git.wageningenur.nl/  # and then ABGC_Genomics/&lt;br /&gt;
&lt;br /&gt;
cd /home/juanma/Git_Stuff&lt;br /&gt;
git config --global user.name &amp;quot;Herrero Medrano, Juan&amp;quot;&lt;br /&gt;
git config --global user.email &amp;quot;juan.herreromedrano@wur.nl&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# 3. configuration step 2:&lt;br /&gt;
cd /home/juanma/Git_Stuff&lt;br /&gt;
ssh-keygen -t rsa&lt;br /&gt;
cd /home/juanma/.ssh/&lt;br /&gt;
cat id_rsa.pub&lt;br /&gt;
# Copy the code and go to the git-web: Profile settings -&amp;gt; SSH Keys. Paste the code and add key. &lt;br /&gt;
# IMPORTANT: make sure you use the *public* key, not the private part of the key pair!&lt;br /&gt;
# Go back to the terminal: &lt;br /&gt;
git clone git@git.wageningenur.nl:ABGC_Genomics/Turkey_Association.git&lt;br /&gt;
# Once I have made this connection, my project will appear as a folder in /home/juanma/Git_Stuff&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
## Add files:&lt;br /&gt;
# Add scripts to the project &amp;quot;Turkey_Association&amp;quot; :&lt;br /&gt;
cp myscript.sh /home/juanma/Git_Stuff/Turkey_Association&lt;br /&gt;
cd /home/juanma/Git_Stuff/Turkey_Association&lt;br /&gt;
git add myscript.sh&lt;br /&gt;
git commit -m &amp;quot;myfirst_commit&amp;quot;&lt;br /&gt;
git push origin master&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
[[Bioinformatics_protocols_ABG_Chairgroup | Bioinformatics tips, tricks, workflows at ABGC]]&lt;br /&gt;
&lt;br /&gt;
== External links ==&lt;br /&gt;
*[http://en.wikipedia.org/wiki/Git_(software) Wikipedia entry on Git]&lt;br /&gt;
*[https://github.com GitHub, a public Git repository (not to be confused with GitLab@WUR, but uses the same Git versioning software)]&lt;/div&gt;</summary>
		<author><name>Megen002</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=Main_Page&amp;diff=1587</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=Main_Page&amp;diff=1587"/>
		<updated>2015-03-26T20:42:21Z</updated>

		<summary type="html">&lt;p&gt;Megen002: /* Miscellaneous */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The Agrogenomics cluster is a [http://en.wikipedia.org/wiki/High-performance_computing High Performance Compute] (HPC) infrastructure hosted by [http://www.wageningenur.nl/nl/activiteit/Opening-High-Performance-Computing-cluster-HPC.htm Wageningen University &amp;amp; Research Centre]. It is open for use for all WUR research groups as well as other organizations, including companies, that have collaborative projects with WUR. &lt;br /&gt;
&lt;br /&gt;
The Agrogenomics HPC was an initiative of the [http://www.breed4food.com/en/breed4food.htm Breed4Food] (B4F) consortium, consisting of the [[About_ABGC | Animal Breeding and Genomics Centre]] (WU-Animal Breeding and Genomics and Wageningen Livestock Research) and four major breeding companies: [http://www.cobb-vantress.com Cobb-Vantress], [https://www.crv4all.nl CRV], [http://www.hendrix-genetics.com Hendrix Genetics], and [http://www.topigs.com TOPIGS]. Currently, in addition to the original partners, the HPC (HPC-Ag) is used by other groups from Wageningen UR (Bioinformatics, Centre for Crop Systems Analysis, Environmental Sciences Group, and Plant Research International) and plant breeding industry (Rijk Zwaan). &lt;br /&gt;
&lt;br /&gt;
== Rationale and Requirements for a new cluster ==&lt;br /&gt;
[[File:Breed4food-logo.jpg|thumb|right|200px|The Breed4Food logo]]&lt;br /&gt;
The Agrogenomics Cluster was originally conceived as the 7th pillar of the [http://www.breed4food.com/en/show/Breed4Food-initiative-reinforces-the-Netherlands-position-as-an-innovative-country-in-animal-breeding-and-genomics.htm Breed4Food programme]. While the other six pillars revolve around specific research themes, the Cluster represents a joint infrastructure. The rationale behind the cluster is to meet the increasing computational needs in the field of genetics and genomics research by creating a joint facility that generates benefits of scale, thereby reducing cost. In addition, the joint infrastructure is intended to facilitate cross-organisational knowledge transfer. In that capacity, the HPC-Ag acts as a joint (virtual) laboratory where researchers - academic and applied - can benefit from each other&#039;s know-how. Lastly, the joint cluster, housed at Wageningen University campus, allows retaining vital and often confidential data sources in a controlled environment, something that cloud services such as Amazon Cloud usually cannot guarantee.&lt;br /&gt;
{{-}}&lt;br /&gt;
&lt;br /&gt;
== Process of acquisition and financing ==&lt;br /&gt;
&lt;br /&gt;
[[File:Signing_CatAgro.png|thumb|left|300px|Petra Caessens, manager operations of CAT-AgroFood, signs the contract of the supplier on August 1st, 2013. Next to her Johan van Arendonk on behalf of Breed4Food.]]&lt;br /&gt;
The Agrogenomics cluster was financed by [http://www.wageningenur.nl/en/Expertise-Services/Facilities/CATAgroFood-3/CATAgroFood-3/News-and-agenda/Show/CATAgroFood-invests-in-a-High-Performance-Computing-cluster.htm CATAgroFood]. The [[B4F_cluster#IT_Workgroup | IT-Workgroup]] formulated a set of requirements that in the end were best met by an offer from [http://www.dell.com/learn/nl/nl/rc1078544/hpcc Dell]. [http://www.clustervision.com ClusterVision] was responsible for installing the cluster at the Theia server centre of FB-ICT.&lt;br /&gt;
{{-}}&lt;br /&gt;
&lt;br /&gt;
== Architecture of the cluster ==&lt;br /&gt;
[[Architecture_of_the_HPC | Main Article: Architecture of the Agrogenomics HPC]]&lt;br /&gt;
[[File:Cluster_scheme.png|thumb|right|600px|Schematic overview of the cluster.]]&lt;br /&gt;
The new Agrogenomics HPC has a classic cluster architecture: a state-of-the-art Parallel File System (PFS), head nodes, and compute nodes (of varying &#039;size&#039;), all connected by superfast network connections (InfiniBand). Implementation of the cluster will be done in stages. The initial stage includes a 600TB PFS, 48 slim nodes of 16 cores and 64GB RAM each, and 2 fat nodes of 64 cores and 1TB RAM each. The overall architecture, which includes two head nodes in a fail-over configuration and an InfiniBand network backbone, can easily be expanded by adding nodes and expanding the PFS. The cluster management software is designed to facilitate a heterogeneous and evolving cluster.&lt;br /&gt;
{{-}}&lt;br /&gt;
&lt;br /&gt;
== Housing at Theia ==&lt;br /&gt;
[[File:Map_Theia.png|thumb|left|200px|Location of Theia, just outside of Wageningen campus]]&lt;br /&gt;
The Agrogenomics Cluster is housed at one of the two main server centres of WUR-FB-IT, near Wageningen Campus. The building (Theia) may not look like much from the outside (it used to serve as potato storage), but inside is a modern server centre that includes, among other things, emergency power backup systems and automated fire extinguishers. Many of the server facilities provided by FB-ICT that are used on a daily basis by WUR personnel and students are located there, as is the Agrogenomics Cluster. Access to Theia is evidently highly restricted and can only be granted in the presence of a representative of FB-IT.&lt;br /&gt;
{{-}}&lt;br /&gt;
{| width=&amp;quot;90%&amp;quot;&lt;br /&gt;
|- valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| width=&amp;quot;10%&amp;quot; |&lt;br /&gt;
&lt;br /&gt;
| width=&amp;quot;30%&amp;quot; |&lt;br /&gt;
[[File:Cluster2_pic.png|thumb|left|220px|Some components of the cluster after unpacking.]]&lt;br /&gt;
| width=&amp;quot;70%&amp;quot; |&lt;br /&gt;
[[File:Cluster_pic.png|thumb|right|400px|The final configuration after installation.]]&lt;br /&gt;
|}&lt;br /&gt;
{{-}}&lt;br /&gt;
&lt;br /&gt;
== Management ==&lt;br /&gt;
[[HPC_management | Main Article: HPC management]]&lt;br /&gt;
&lt;br /&gt;
Project Leader of the HPC is Stephen Janssen (Wageningen UR, FB-IT, Service Management). [[User:pollm001 | Koen Pollmann (Wageningen UR, FB-IT, Infrastructure)]] and [[User:dawes001 | Gwen Dawes (Wageningen UR, FB-IT, Infrastructure)]] are responsible for [[Maintenance_and_Management | Maintenance and Management]].&lt;br /&gt;
&lt;br /&gt;
== Access Policy ==&lt;br /&gt;
[[Access_Policy | Main Article: Access Policy]]&lt;br /&gt;
&lt;br /&gt;
The access policy is still a work in progress. In principle, all staff and students of the five main partners will have access to the cluster. Access needs to be granted actively (by creation of an account on the cluster by FB-IT, in the case of non-WUR accounts). Use of resources is limited by the scheduler. Depending on the availability of queues (&#039;partitions&#039;) granted to a user, priority to the system&#039;s resources is regulated.&lt;br /&gt;
&lt;br /&gt;
== Users ==&lt;br /&gt;
&lt;br /&gt;
* [[List_of_users | List of users (alphabetical order)]]&lt;br /&gt;
* [[Mailinglist | Electronic mail discussion lists]]&lt;br /&gt;
&lt;br /&gt;
== Using the HPC-Ag ==&lt;br /&gt;
=== Gaining access to the HPC-Ag ===&lt;br /&gt;
Access to the cluster and file transfer are done by [http://en.wikipedia.org/wiki/Secure_Shell ssh-based protocols].&lt;br /&gt;
* [[log_in_to_B4F_cluster | Logging into cluster using ssh and file transfer]]&lt;br /&gt;
&lt;br /&gt;
=== Cluster Management Software and Scheduler ===&lt;br /&gt;
The HPC-Ag uses Bright Cluster Manager software for overall cluster management, and Slurm as job scheduler.&lt;br /&gt;
* [[BCM_on_B4F_cluster | Monitor cluster status with BCM]]&lt;br /&gt;
* [[SLURM_on_B4F_cluster | Submit jobs with Slurm]]&lt;br /&gt;
* [[SLURM_Compare | Rosetta Stone of Workload Managers]]&lt;br /&gt;
&lt;br /&gt;
=== Installation of software by users ===&lt;br /&gt;
&lt;br /&gt;
* [[Domain_specific_software_on_B4Fcluster_installation_by_users | Installing domain specific software: installation by users]]&lt;br /&gt;
* [[Setting local variables]]&lt;br /&gt;
* [[Installing_R_packages_locally | Installing R packages locally]]&lt;br /&gt;
* [[Setting_up_Python_virtualenv | Setting up and using a virtual environment for Python3 ]]&lt;br /&gt;
&lt;br /&gt;
=== Installed software ===&lt;br /&gt;
&lt;br /&gt;
* [[Globally_installed_software | Globally installed software]]&lt;br /&gt;
* [[ABGC_modules | ABGC specific modules]]&lt;br /&gt;
&lt;br /&gt;
=== Being in control of Environment parameters ===&lt;br /&gt;
&lt;br /&gt;
* [[Using_environment_modules | Using environment modules]]&lt;br /&gt;
* [[Setting local variables]]&lt;br /&gt;
* [[Setting_TMPDIR | Set a custom temporary directory location]]&lt;br /&gt;
* [[Installing_R_packages_locally | Installing R packages locally]]&lt;br /&gt;
* [[Setting_up_Python_virtualenv | Setting up and using a virtual environment for Python3 ]]&lt;br /&gt;
&lt;br /&gt;
=== Controlling costs ===&lt;br /&gt;
&lt;br /&gt;
* [[SACCT | using SACCT to see your costs]]&lt;br /&gt;
* [[get_my_bill | using the &amp;quot;get_my_bill&amp;quot; script to estimate costs]]&lt;br /&gt;
&lt;br /&gt;
== Miscellaneous ==&lt;br /&gt;
* [[Bioinformatics_tips_tricks_workflows | Bioinformatics tips, tricks, and workflows]]&lt;br /&gt;
* [[Convert_between_MediaWiki_and_other_formats | Convert between MediaWiki format and other formats]]&lt;br /&gt;
* [[Manual GitLab | GitLab: Create projects and add scripts]]&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
* [[Maintenance_and_Management | Maintenance and Management]]&lt;br /&gt;
* [[Mailinglist | Electronic mail discussion lists]]&lt;br /&gt;
* [[About_ABGC | About ABGC]]&lt;br /&gt;
* [[Computer_cluster | High Performance Computing @ABGC]]&lt;br /&gt;
* [[Lustre_PFS_layout | Lustre Parallel File System layout]]&lt;br /&gt;
&lt;br /&gt;
== External links ==&lt;br /&gt;
{| width=&amp;quot;90%&amp;quot;&lt;br /&gt;
|- valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| width=&amp;quot;30%&amp;quot; |&lt;br /&gt;
* [http://www.breed4food.com/en/show/Breed4Food-initiative-reinforces-the-Netherlands-position-as-an-innovative-country-in-animal-breeding-and-genomics.htm Breed4Food programme]&lt;br /&gt;
* [http://www.wageningenur.nl/en/Expertise-Services/Facilities/CATAgroFood-3/CATAgroFood-3/Our-facilities/Show/High-Performance-Computing-Cluster-HPC.htm CATAgroFood offers an HPC facility]&lt;br /&gt;
* [http://www.cobb-vantress.com Cobb-Vantress homepage]&lt;br /&gt;
&lt;br /&gt;
| width=&amp;quot;30%&amp;quot; |&lt;br /&gt;
* [https://www.crv4all.nl CRV homepage]&lt;br /&gt;
* [http://www.hendrix-genetics.com Hendrix Genetics homepage]&lt;br /&gt;
* [http://www.topigs.com TOPIGS homepage]&lt;br /&gt;
| width=&amp;quot;30%&amp;quot; |&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Scientific_Linux Scientific Linux]&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Help:Cheatsheet Help with editing Wiki pages]&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Megen002</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=Manual_GitLab&amp;diff=1586</id>
		<title>Manual GitLab</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=Manual_GitLab&amp;diff=1586"/>
		<updated>2015-03-26T20:27:53Z</updated>

		<summary type="html">&lt;p&gt;Megen002: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Manual GitLab@WUR: Create projects and add files&lt;br /&gt;
&lt;br /&gt;
== Signing up ==&lt;br /&gt;
If you haven&#039;t done so already, first sign up at GitLab@WUR:&lt;br /&gt;
  https://git.wageningenur.nl&lt;br /&gt;
&lt;br /&gt;
== Example of local commands to execute ==&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
## This example shows, step by step, how to configure Git, clone a project from GitLab@WUR and add files to it.&lt;br /&gt;
&lt;br /&gt;
## Configuration:&lt;br /&gt;
&lt;br /&gt;
# 1. Create a working folder on your machine:&lt;br /&gt;
mkdir -p /home/juanma/Git_Stuff&lt;br /&gt;
&lt;br /&gt;
# 2. Configuration step 1: run the identity commands shown on the GitLab web interface (https://git.wageningenur.nl/ and then ABGC_Genomics/):&lt;br /&gt;
&lt;br /&gt;
cd /home/juanma/Git_Stuff&lt;br /&gt;
git config --global user.name &amp;quot;Herrero Medrano, Juan&amp;quot;&lt;br /&gt;
git config --global user.email &amp;quot;juan.herreromedrano@wur.nl&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# 3. configuration step 2:&lt;br /&gt;
cd /home/juanma/Git_Stuff&lt;br /&gt;
ssh-keygen -t rsa&lt;br /&gt;
cd /home/juanma/.ssh/&lt;br /&gt;
cat id_rsa.pub&lt;br /&gt;
# Copy the public key, then in the GitLab web interface go to Profile settings -&amp;gt; SSH Keys, paste the key and add it.&lt;br /&gt;
# Go back to the terminal and clone the project:&lt;br /&gt;
git clone git@git.wageningenur.nl:ABGC_Genomics/Turkey_Association.git&lt;br /&gt;
# Once the clone has been made, the project appears as a folder in /home/juanma/Git_Stuff&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
## Add files:&lt;br /&gt;
# Add scripts to the project &amp;quot;Turkey_Association&amp;quot;: copy the script into the clone, then commit and push it:&lt;br /&gt;
cp myscript.sh /home/juanma/Git_Stuff/Turkey_Association&lt;br /&gt;
cd /home/juanma/Git_Stuff/Turkey_Association&lt;br /&gt;
git add myscript.sh&lt;br /&gt;
git commit -m &amp;quot;myfirst_commit&amp;quot;&lt;br /&gt;
git push origin master&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
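&lt;br /&gt;
Once the clone exists, later changes follow the usual Git edit/commit/push cycle. A minimal sketch (the file name is just a placeholder):&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
cd /home/juanma/Git_Stuff/Turkey_Association&lt;br /&gt;
git pull origin master      # pick up changes made by others&lt;br /&gt;
git status                  # see which local files changed&lt;br /&gt;
git add myscript.sh         # stage the updated script&lt;br /&gt;
git commit -m &amp;quot;update myscript&amp;quot;&lt;br /&gt;
git push origin master      # publish the change to GitLab@WUR&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;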
&lt;br /&gt;
== See also ==&lt;br /&gt;
[[Bioinformatics_protocols_ABG_Chairgroup | Bioinformatics tips, tricks, workflows at ABGC]]&lt;br /&gt;
&lt;br /&gt;
== External links ==&lt;br /&gt;
*[http://en.wikipedia.org/wiki/Git_(software) Wikipedia entry on Git]&lt;br /&gt;
*[https://github.com GitHub, a public Git repository (not to be confused with GitLab@WUR, but uses the same Git versioning software)]&lt;/div&gt;</summary>
		<author><name>Megen002</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=Convert_between_MediaWiki_and_other_formats&amp;diff=1583</id>
		<title>Convert between MediaWiki and other formats</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=Convert_between_MediaWiki_and_other_formats&amp;diff=1583"/>
		<updated>2015-03-26T20:24:17Z</updated>

		<summary type="html">&lt;p&gt;Megen002: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;There are various programs that can convert between MediaWiki format, as used in this wiki, and other formats. This page lists some of them and explains how to use them.&lt;br /&gt;
&lt;br /&gt;
== Pandoc ==&lt;br /&gt;
&lt;br /&gt;
[http://johnmacfarlane.net/pandoc/demos.html Pandoc] is an open source tool that can convert between different markup styles and languages.&lt;br /&gt;
&lt;br /&gt;
There is a [http://johnmacfarlane.net/pandoc/try/ web version of Pandoc], but it appears to be limited to small documents. There is also a command-line version; if you are running a Linux desktop machine, chances are you can install it from your distribution&#039;s repositories. E.g. on Ubuntu 14.04:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
sudo apt-get install pandoc&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The following example translates from MediaWiki format to Markdown format:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
pandoc -f mediawiki -t markdown -s myfile.mw -o myfile.md&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
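&lt;br /&gt;
The conversion also works the other way around; for instance, to turn a Markdown file back into MediaWiki markup (the file names are just placeholders):&lt;br /&gt;
&amp;lt;source lang=bash&amp;gt;&lt;br /&gt;
pandoc -f markdown -t mediawiki -s myfile.md -o myfile.mw&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;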
&lt;br /&gt;
== See also ==&lt;br /&gt;
[[Bioinformatics_protocols_ABG_Chairgroup | Bioinformatics tips, tricks, workflows at ABGC]]&lt;br /&gt;
&lt;br /&gt;
== External links ==&lt;br /&gt;
[http://en.wikipedia.org/wiki/Markup_language Wikipedia entry on markup languages]&lt;/div&gt;</summary>
		<author><name>Megen002</name></author>
	</entry>
</feed>