<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://wiki.anunna.wur.nl/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Lith010</id>
	<title>HPCwiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://wiki.anunna.wur.nl/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Lith010"/>
	<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php/Special:Contributions/Lith010"/>
	<updated>2026-04-18T07:35:26Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.43.1</generator>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=Reservations&amp;diff=1842</id>
		<title>Reservations</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=Reservations&amp;diff=1842"/>
		<updated>2017-08-25T12:52:05Z</updated>

		<summary type="html">&lt;p&gt;Lith010: /* Cost */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;SLURM can ensure that resources are available at a specified time for a specified set of users through reservations. A reservation can be made far in advance of when the resources are needed, and during the reserved period SLURM will automatically prevent jobs that do not belong to the reservation from being scheduled on those resources. The earlier a reservation is created, the better: this gives long-running jobs time to terminate properly before the reservation starts.&lt;br /&gt;
&lt;br /&gt;
This may be required in circumstances where resources must be guaranteed at a fixed time, for example for demonstration or teaching purposes while the event is happening.&lt;br /&gt;
&lt;br /&gt;
== Usage ==&lt;br /&gt;
To use a reservation, add the following line to your sbatch script:&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;#SBATCH --reservation=&amp;quot;MyReservation&amp;quot;&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
Replace &#039;MyReservation&#039; with the name of the reservation you wish to use. To list the current reservations:&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;scontrol show reservations&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
Currently valid reservations are marked State=ACTIVE. When you submit a job with a reservation, SLURM checks whether you may use those resources. If so, your job will run as soon as resources become available inside the reserved allocation; if not, your job will fail to submit.&lt;br /&gt;
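&lt;br /&gt;
As a minimal sketch, a complete sbatch script that runs a job inside a reservation could look as follows (the job name, time limit, and reservation name are placeholders):&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;#!/bin/bash&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;#SBATCH --job-name=demo&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;#SBATCH --nodes=1&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;#SBATCH --time=01:00:00&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;#SBATCH --reservation=&amp;quot;MyReservation&amp;quot;&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;srun hostname&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
The job is submitted and queued like any other, but it will only be scheduled on the resources belonging to the reservation.&lt;br /&gt;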
&lt;br /&gt;
== Requesting a Reservation ==&lt;br /&gt;
There is no way to create a reservation yourself; an admin must create it. To request a reservation, email your system administrator with the list of users (not groups!) who may use it. Because this dedicates resources away from general use, there is a financial cost involved, typically equivalent to the cost of running jobs on those nodes for the entire reserved period.&lt;br /&gt;
&lt;br /&gt;
In addition, it is currently only possible to reserve entire nodes, as there is no means to reserve memory and cores separately (and there would be no guarantee that they would be on the same machine if there were). Thus, the minimum possible reservation is one normal node, and the maximum should be no more than three, so as not to disrupt the main workflow.&lt;br /&gt;
&lt;br /&gt;
== Costs ==&lt;br /&gt;
The cost is 50 euro per node per day, plus the normal costs of the jobs themselves.&lt;/div&gt;</summary>
		<author><name>Lith010</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=Reservations&amp;diff=1841</id>
		<title>Reservations</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=Reservations&amp;diff=1841"/>
		<updated>2017-08-25T12:51:54Z</updated>

		<summary type="html">&lt;p&gt;Lith010: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;SLURM can ensure that resources are available at a specified time for a specified set of users through reservations. A reservation can be made far in advance of when the resources are needed, and during the reserved period SLURM will automatically prevent jobs that do not belong to the reservation from being scheduled on those resources. The earlier a reservation is created, the better: this gives long-running jobs time to terminate properly before the reservation starts.&lt;br /&gt;
&lt;br /&gt;
This may be required in circumstances where resources must be guaranteed at a fixed time, for example for demonstration or teaching purposes while the event is happening.&lt;br /&gt;
&lt;br /&gt;
== Usage ==&lt;br /&gt;
To use a reservation, add the following line to your sbatch script:&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;#SBATCH --reservation=&amp;quot;MyReservation&amp;quot;&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
Replace &#039;MyReservation&#039; with the name of the reservation you wish to use. To list the current reservations:&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;scontrol show reservations&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
Currently valid reservations are marked State=ACTIVE. When you submit a job with a reservation, SLURM checks whether you may use those resources. If so, your job will run as soon as resources become available inside the reserved allocation; if not, your job will fail to submit.&lt;br /&gt;
&lt;br /&gt;
== Requesting a Reservation ==&lt;br /&gt;
There is no way to create a reservation yourself; an admin must create it. To request a reservation, email your system administrator with the list of users (not groups!) who may use it. Because this dedicates resources away from general use, there is a financial cost involved, typically equivalent to the cost of running jobs on those nodes for the entire reserved period.&lt;br /&gt;
&lt;br /&gt;
In addition, it is currently only possible to reserve entire nodes, as there is no means to reserve memory and cores separately (and there would be no guarantee that they would be on the same machine if there were). Thus, the minimum possible reservation is one normal node, and the maximum should be no more than three, so as not to disrupt the main workflow.&lt;br /&gt;
&lt;br /&gt;
== Cost ==&lt;br /&gt;
The cost is 50 euro per node per day, plus the normal costs of the jobs themselves.&lt;/div&gt;</summary>
		<author><name>Lith010</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=User:Lith010&amp;diff=1812</id>
		<title>User:Lith010</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=User:Lith010&amp;diff=1812"/>
		<updated>2017-07-03T11:54:20Z</updated>

		<summary type="html">&lt;p&gt;Lith010: Created page with &amp;quot;[https://www.vcard.wur.nl/Views/Profile/View.aspx?id=56890 vCard information]&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[https://www.vcard.wur.nl/Views/Profile/View.aspx?id=56890 vCard information]&lt;/div&gt;</summary>
		<author><name>Lith010</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=Main_Page&amp;diff=1811</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=Main_Page&amp;diff=1811"/>
		<updated>2017-07-03T11:06:29Z</updated>

		<summary type="html">&lt;p&gt;Lith010: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The Agrogenomics cluster is a [http://en.wikipedia.org/wiki/High-performance_computing High Performance Computing] (HPC) infrastructure hosted by [http://www.wageningenur.nl/nl/activiteit/Opening-High-Performance-Computing-cluster-HPC.htm Wageningen University &amp;amp; Research Centre]. It is open for use by all WUR research groups, as well as other organizations, including companies, that have collaborative projects with WUR.&lt;br /&gt;
&lt;br /&gt;
The Agrogenomics HPC was an initiative of the [http://www.breed4food.com/en/breed4food.htm Breed4Food] (B4F) consortium, consisting of the [[About_ABGC | Animal Breeding and Genomics Centre]] (WU-Animal Breeding and Genomics and Wageningen Livestock Research) and four major breeding companies: [http://www.cobb-vantress.com Cobb-Vantress], [https://www.crv4all.nl CRV], [http://www.hendrix-genetics.com Hendrix Genetics], and [http://www.topigs.com TOPIGS]. Currently, in addition to the original partners, the HPC (HPC-Ag) is used by other groups from Wageningen UR (Bioinformatics, Centre for Crop Systems Analysis, Environmental Sciences Group, and Plant Research International) and by the plant breeding industry (Rijk Zwaan).&lt;br /&gt;
&lt;br /&gt;
== Events ==&lt;br /&gt;
* [[Courses]] that have been or are being held&lt;br /&gt;
* [[Downtime]] that will affect all users&lt;br /&gt;
* [[Meetings]] that may affect the policies of the HPC&lt;br /&gt;
&lt;br /&gt;
== Using the HPC-Ag ==&lt;br /&gt;
=== Gaining access to the HPC-Ag ===&lt;br /&gt;
Access to the cluster and file transfer are handled via [http://en.wikipedia.org/wiki/Secure_Shell ssh-based protocols].&lt;br /&gt;
* [[log_in_to_B4F_cluster | Logging into cluster using ssh and file transfer]]&lt;br /&gt;
* [[Services | Available features and services on the HPC]]&lt;br /&gt;
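&lt;br /&gt;
As a minimal sketch (the hostname here is a placeholder; the actual address is on the login page above), logging in and copying a file to the cluster look like:&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;ssh yourusername@hpc.example.wur.nl&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;scp results.txt yourusername@hpc.example.wur.nl:~/&amp;lt;/nowiki&amp;gt;&lt;br /&gt;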
&lt;br /&gt;
== Access Policy ==&lt;br /&gt;
[[Access_Policy | Main Article: Access Policy]]&lt;br /&gt;
&lt;br /&gt;
Access must be granted explicitly (by FB-IT creating an account on the cluster). Use of resources is limited by the scheduler: priority for the system&#039;s resources is regulated by which queues (&#039;partitions&#039;) have been granted to a user. Note that use of the HPC-Ag is not free of charge. The list price of CPU time and storage, and possible discounts on that list price for your organisation, can be obtained from CAT-AGRO or FB-ICT.&lt;br /&gt;
&lt;br /&gt;
=== Cluster Management Software and Scheduler ===&lt;br /&gt;
The HPC-Ag uses Bright Cluster Manager for overall cluster management, and Slurm as the job scheduler.&lt;br /&gt;
* [[BCM_on_B4F_cluster | Monitor cluster status with BCM]]&lt;br /&gt;
* [[SLURM_on_B4F_cluster | Submit jobs with Slurm]]&lt;br /&gt;
* [[node_usage_graph | See how busy the cluster currently is with &#039;node_usage_graph&#039;]]&lt;br /&gt;
* [[SLURM_Compare | Rosetta Stone of Workload Managers]]&lt;br /&gt;
&lt;br /&gt;
=== Installation of software by users ===&lt;br /&gt;
&lt;br /&gt;
* [[Domain_specific_software_on_B4Fcluster_installation_by_users | Installing domain specific software: installation by users]]&lt;br /&gt;
* [[Setting local variables]]&lt;br /&gt;
* [[Installing_R_packages_locally | Installing R packages locally]]&lt;br /&gt;
* [[Setting_up_Python_virtualenv | Setting up and using a virtual environment for Python3 ]]&lt;br /&gt;
* [[Virtual_environment_Python_3.4_or_higher | Setting up and using a virtual environment for Python3.4 or higher ]]&lt;br /&gt;
&lt;br /&gt;
=== Installed software ===&lt;br /&gt;
&lt;br /&gt;
* [[Globally_installed_software | Globally installed software]]&lt;br /&gt;
* [[ABGC_modules | ABGC specific modules]]&lt;br /&gt;
&lt;br /&gt;
=== Being in control of Environment parameters ===&lt;br /&gt;
&lt;br /&gt;
* [[Using_environment_modules | Using environment modules]]&lt;br /&gt;
* [[Setting local variables]]&lt;br /&gt;
* [[Setting_TMPDIR | Set a custom temporary directory location]]&lt;br /&gt;
* [[Installing_R_packages_locally | Installing R packages locally]]&lt;br /&gt;
* [[Setting_up_Python_virtualenv | Setting up and using a virtual environment for Python3 ]]&lt;br /&gt;
&lt;br /&gt;
=== Controlling costs ===&lt;br /&gt;
&lt;br /&gt;
* [[SACCT | using SACCT to see your costs]]&lt;br /&gt;
* [[get_my_bill | using the &amp;quot;get_my_bill&amp;quot; script to estimate costs]]&lt;br /&gt;
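&lt;br /&gt;
For example, a quick way to list your recent jobs and the CPU time they consumed with sacct (the start date is a placeholder):&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;sacct -u $USER -S 2017-07-01 --format=JobID,JobName,Elapsed,CPUTime,State&amp;lt;/nowiki&amp;gt;&lt;br /&gt;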
&lt;br /&gt;
== Management ==&lt;br /&gt;
[[HPC_management | Main Article: HPC management]]&lt;br /&gt;
&lt;br /&gt;
Project Leader of the HPC is Stephen Janssen (Wageningen UR, FB-IT, Service Management). [[User:lith010 | Jan van Lith (Wageningen UR, FB-IT, Infrastructure)]] and [[User:dawes001 | Gwen Dawes (Wageningen UR, FB-IT, Infrastructure)]] are responsible for [[Maintenance_and_Management | Maintenance and Management]].&lt;br /&gt;
&lt;br /&gt;
== Miscellaneous ==&lt;br /&gt;
* [[Mailinglist | Electronic mail discussion lists]]&lt;br /&gt;
* [[History_of_the_Cluster | Historical information on the startup of the HPC]]&lt;br /&gt;
* [[Bioinformatics_tips_tricks_workflows | Bioinformatics tips, tricks, and workflows]]&lt;br /&gt;
* [[Convert_between_MediaWiki_and_other_formats | Convert between MediaWiki format and other formats]]&lt;br /&gt;
* [[Manual GitLab | GitLab: Create projects and add scripts]]&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
* [[Maintenance_and_Management | Maintenance and Management]]&lt;br /&gt;
* [[BCData | BCData]]&lt;br /&gt;
* [[Mailinglist | Electronic mail discussion lists]]&lt;br /&gt;
* [[About_ABGC | About ABGC]]&lt;br /&gt;
* [[Computer_cluster | High Performance Computing @ABGC]]&lt;br /&gt;
* [[Lustre_PFS_layout | Lustre Parallel File System layout]]&lt;br /&gt;
&lt;br /&gt;
== External links ==&lt;br /&gt;
{| width=&amp;quot;90%&amp;quot;&lt;br /&gt;
|- valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| width=&amp;quot;30%&amp;quot; |&lt;br /&gt;
* [http://www.breed4food.com/en/show/Breed4Food-initiative-reinforces-the-Netherlands-position-as-an-innovative-country-in-animal-breeding-and-genomics.htm Breed4Food programme]&lt;br /&gt;
* [http://www.wageningenur.nl/en/Expertise-Services/Facilities/CATAgroFood-3/CATAgroFood-3/Our-facilities/Show/High-Performance-Computing-Cluster-HPC.htm CATAgroFood offers an HPC facility]&lt;br /&gt;
* [http://www.cobb-vantress.com Cobb-Vantress homepage]&lt;br /&gt;
&lt;br /&gt;
| width=&amp;quot;30%&amp;quot; |&lt;br /&gt;
* [https://www.crv4all.nl CRV homepage]&lt;br /&gt;
* [http://www.hendrix-genetics.com Hendrix Genetics homepage]&lt;br /&gt;
* [http://www.topigs.com TOPIGS homepage]&lt;br /&gt;
| width=&amp;quot;30%&amp;quot; |&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Scientific_Linux Scientific Linux]&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Help:Cheatsheet Help with editing Wiki pages]&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Lith010</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=BCData&amp;diff=1746</id>
		<title>BCData</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=BCData&amp;diff=1746"/>
		<updated>2016-07-19T07:46:01Z</updated>

		<summary type="html">&lt;p&gt;Lith010: Created page with &amp;quot;This page contains information about BCData&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page contains information about BCData&lt;/div&gt;</summary>
		<author><name>Lith010</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=Main_Page&amp;diff=1745</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=Main_Page&amp;diff=1745"/>
		<updated>2016-07-19T07:45:19Z</updated>

		<summary type="html">&lt;p&gt;Lith010: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The Agrogenomics cluster is a [http://en.wikipedia.org/wiki/High-performance_computing High Performance Computing] (HPC) infrastructure hosted by [http://www.wageningenur.nl/nl/activiteit/Opening-High-Performance-Computing-cluster-HPC.htm Wageningen University &amp;amp; Research Centre]. It is open for use by all WUR research groups, as well as other organizations, including companies, that have collaborative projects with WUR.&lt;br /&gt;
&lt;br /&gt;
The Agrogenomics HPC was an initiative of the [http://www.breed4food.com/en/breed4food.htm Breed4Food] (B4F) consortium, consisting of the [[About_ABGC | Animal Breeding and Genomics Centre]] (WU-Animal Breeding and Genomics and Wageningen Livestock Research) and four major breeding companies: [http://www.cobb-vantress.com Cobb-Vantress], [https://www.crv4all.nl CRV], [http://www.hendrix-genetics.com Hendrix Genetics], and [http://www.topigs.com TOPIGS]. Currently, in addition to the original partners, the HPC (HPC-Ag) is used by other groups from Wageningen UR (Bioinformatics, Centre for Crop Systems Analysis, Environmental Sciences Group, and Plant Research International) and by the plant breeding industry (Rijk Zwaan).&lt;br /&gt;
&lt;br /&gt;
== Rationale and Requirements for a new cluster ==&lt;br /&gt;
[[File:Breed4food-logo.jpg|thumb|right|200px|The Breed4Food logo]]&lt;br /&gt;
The Agrogenomics Cluster was originally conceived as the 7th pillar of the [http://www.breed4food.com/en/show/Breed4Food-initiative-reinforces-the-Netherlands-position-as-an-innovative-country-in-animal-breeding-and-genomics.htm Breed4Food programme]. While the other six pillars revolve around specific research themes, the Cluster represents a joint infrastructure. The rationale behind the cluster is to meet the increasing computational needs of genetics and genomics research by creating a joint facility that generates benefits of scale, thereby reducing cost. In addition, the joint infrastructure is intended to facilitate cross-organisational knowledge transfer. In that capacity, the HPC-Ag acts as a joint (virtual) laboratory where researchers, academic and applied, can benefit from each other&#039;s know-how. Lastly, the joint cluster, housed at Wageningen University campus, allows retaining vital and often confidential data sources in a controlled environment, something that cloud services such as Amazon Cloud usually cannot guarantee.&lt;br /&gt;
{{-}}&lt;br /&gt;
&lt;br /&gt;
== Process of acquisition and financing ==&lt;br /&gt;
&lt;br /&gt;
[[File:Signing_CatAgro.png|thumb|left|300px|Petra Caessens, manager operations of CAT-AgroFood, signs the contract of the supplier on August 1st, 2013. Next to her Johan van Arendonk on behalf of Breed4Food.]]&lt;br /&gt;
The Agrogenomics cluster was financed by [http://www.wageningenur.nl/en/Expertise-Services/Facilities/CATAgroFood-3/CATAgroFood-3/News-and-agenda/Show/CATAgroFood-invests-in-a-High-Performance-Computing-cluster.htm CATAgroFood]. The [[B4F_cluster#IT_Workgroup | IT-Workgroup]] formulated a set of requirements that in the end were best met by an offer from [http://www.dell.com/learn/nl/nl/rc1078544/hpcc Dell]. [http://www.clustervision.com ClusterVision] was responsible for installing the cluster at the Theia server centre of FB-ICT.&lt;br /&gt;
{{-}}&lt;br /&gt;
&lt;br /&gt;
== Architecture of the cluster ==&lt;br /&gt;
[[Architecture_of_the_HPC | Main Article: Architecture of the Agrogenomics HPC]]&lt;br /&gt;
[[File:Cluster_scheme.png|thumb|right|600px|Schematic overview of the cluster.]]&lt;br /&gt;
The new Agrogenomics HPC has a classic cluster architecture: a state-of-the-art Parallel File System (PFS), head nodes, and compute nodes (of varying &#039;size&#039;), all connected by superfast InfiniBand network connections. Implementation of the cluster will be done in stages. The initial stage includes a 600TB PFS, 48 slim nodes with 16 cores and 64GB RAM each, and 2 fat nodes with 64 cores and 1TB RAM each. The overall architecture, which includes two head nodes in fail-over configuration and an InfiniBand network backbone, can easily be expanded by adding nodes and enlarging the PFS. The cluster management software is designed to accommodate a heterogeneous and evolving cluster.&lt;br /&gt;
{{-}}&lt;br /&gt;
&lt;br /&gt;
== Housing at Theia ==&lt;br /&gt;
[[File:Map_Theia.png|thumb|left|200px|Location of Theia, just outside of Wageningen campus]]&lt;br /&gt;
The Agrogenomics Cluster is housed at one of the two main server centres of WUR-FB-IT, near Wageningen Campus. The building (Theia) may not look like much from the outside (it used to serve as potato storage), but inside is a modern server centre that includes, among other things, emergency power backup systems and automated fire extinguishers. Many of the server facilities provided by FB-ICT that are used daily by WUR personnel and students are located there, as is the Agrogenomics Cluster. Access to Theia is highly restricted and can only be granted in the presence of a representative of FB-IT.&lt;br /&gt;
{{-}}&lt;br /&gt;
{| width=&amp;quot;90%&amp;quot;&lt;br /&gt;
|- valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| width=&amp;quot;10%&amp;quot; |&lt;br /&gt;
&lt;br /&gt;
| width=&amp;quot;30%&amp;quot; |&lt;br /&gt;
[[File:Cluster2_pic.png|thumb|left|220px|Some components of the cluster after unpacking.]]&lt;br /&gt;
| width=&amp;quot;70%&amp;quot; |&lt;br /&gt;
[[File:Cluster_pic.png|thumb|right|400px|The final configuration after installation.]]&lt;br /&gt;
|}&lt;br /&gt;
{{-}}&lt;br /&gt;
&lt;br /&gt;
== Management ==&lt;br /&gt;
[[HPC_management | Main Article: HPC management]]&lt;br /&gt;
&lt;br /&gt;
Project Leader of the HPC is Stephen Janssen (Wageningen UR, FB-IT, Service Management). [[User:pollm001 | Koen Pollmann (Wageningen UR, FB-IT, Infrastructure)]] and [[User:dawes001 | Gwen Dawes (Wageningen UR, FB-IT, Infrastructure)]] are responsible for [[Maintenance_and_Management | Maintenance and Management]].&lt;br /&gt;
&lt;br /&gt;
== Access Policy ==&lt;br /&gt;
[[Access_Policy | Main Article: Access Policy]]&lt;br /&gt;
&lt;br /&gt;
Access must be granted explicitly (by FB-IT creating an account on the cluster). Use of resources is limited by the scheduler: priority for the system&#039;s resources is regulated by which queues (&#039;partitions&#039;) have been granted to a user. Note that use of the HPC-Ag is not free of charge. The list price of CPU time and storage, and possible discounts on that list price for your organisation, can be obtained from CAT-AGRO or FB-ICT.&lt;br /&gt;
&lt;br /&gt;
== Users ==&lt;br /&gt;
&lt;br /&gt;
* [[List_of_users | List of users (alphabetical order)]]&lt;br /&gt;
* [[Mailinglist | Electronic mail discussion lists]]&lt;br /&gt;
&lt;br /&gt;
== Using the HPC-Ag ==&lt;br /&gt;
=== Gaining access to the HPC-Ag ===&lt;br /&gt;
Access to the cluster and file transfer are handled via [http://en.wikipedia.org/wiki/Secure_Shell ssh-based protocols].&lt;br /&gt;
* [[log_in_to_B4F_cluster | Logging into cluster using ssh and file transfer]]&lt;br /&gt;
&lt;br /&gt;
=== Cluster Management Software and Scheduler ===&lt;br /&gt;
The HPC-Ag uses Bright Cluster Manager for overall cluster management, and Slurm as the job scheduler.&lt;br /&gt;
* [[BCM_on_B4F_cluster | Monitor cluster status with BCM]]&lt;br /&gt;
* [[SLURM_on_B4F_cluster | Submit jobs with Slurm]]&lt;br /&gt;
* [[node_usage_graph | See how busy the cluster currently is with &#039;node_usage_graph&#039;]]&lt;br /&gt;
* [[SLURM_Compare | Rosetta Stone of Workload Managers]]&lt;br /&gt;
&lt;br /&gt;
=== Installation of software by users ===&lt;br /&gt;
&lt;br /&gt;
* [[Domain_specific_software_on_B4Fcluster_installation_by_users | Installing domain specific software: installation by users]]&lt;br /&gt;
* [[Setting local variables]]&lt;br /&gt;
* [[Installing_R_packages_locally | Installing R packages locally]]&lt;br /&gt;
* [[Setting_up_Python_virtualenv | Setting up and using a virtual environment for Python3 ]]&lt;br /&gt;
* [[Virtual_environment_Python_3.4_or_higher | Setting up and using a virtual environment for Python3.4 or higher ]]&lt;br /&gt;
&lt;br /&gt;
=== Installed software ===&lt;br /&gt;
&lt;br /&gt;
* [[Globally_installed_software | Globally installed software]]&lt;br /&gt;
* [[ABGC_modules | ABGC specific modules]]&lt;br /&gt;
&lt;br /&gt;
=== Being in control of Environment parameters ===&lt;br /&gt;
&lt;br /&gt;
* [[Using_environment_modules | Using environment modules]]&lt;br /&gt;
* [[Setting local variables]]&lt;br /&gt;
* [[Setting_TMPDIR | Set a custom temporary directory location]]&lt;br /&gt;
* [[Installing_R_packages_locally | Installing R packages locally]]&lt;br /&gt;
* [[Setting_up_Python_virtualenv | Setting up and using a virtual environment for Python3 ]]&lt;br /&gt;
&lt;br /&gt;
=== Controlling costs ===&lt;br /&gt;
&lt;br /&gt;
* [[SACCT | using SACCT to see your costs]]&lt;br /&gt;
* [[get_my_bill | using the &amp;quot;get_my_bill&amp;quot; script to estimate costs]]&lt;br /&gt;
&lt;br /&gt;
== Miscellaneous ==&lt;br /&gt;
* [[Bioinformatics_tips_tricks_workflows | Bioinformatics tips, tricks, and workflows]]&lt;br /&gt;
* [[Convert_between_MediaWiki_and_other_formats | Convert between MediaWiki format and other formats]]&lt;br /&gt;
* [[Manual GitLab | GitLab: Create projects and add scripts]]&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
* [[Maintenance_and_Management | Maintenance and Management]]&lt;br /&gt;
* [[BCData | BCData]]&lt;br /&gt;
* [[Mailinglist | Electronic mail discussion lists]]&lt;br /&gt;
* [[About_ABGC | About ABGC]]&lt;br /&gt;
* [[Computer_cluster | High Performance Computing @ABGC]]&lt;br /&gt;
* [[Lustre_PFS_layout | Lustre Parallel File System layout]]&lt;br /&gt;
&lt;br /&gt;
== External links ==&lt;br /&gt;
{| width=&amp;quot;90%&amp;quot;&lt;br /&gt;
|- valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| width=&amp;quot;30%&amp;quot; |&lt;br /&gt;
* [http://www.breed4food.com/en/show/Breed4Food-initiative-reinforces-the-Netherlands-position-as-an-innovative-country-in-animal-breeding-and-genomics.htm Breed4Food programme]&lt;br /&gt;
* [http://www.wageningenur.nl/en/Expertise-Services/Facilities/CATAgroFood-3/CATAgroFood-3/Our-facilities/Show/High-Performance-Computing-Cluster-HPC.htm CATAgroFood offers an HPC facility]&lt;br /&gt;
* [http://www.cobb-vantress.com Cobb-Vantress homepage]&lt;br /&gt;
&lt;br /&gt;
| width=&amp;quot;30%&amp;quot; |&lt;br /&gt;
* [https://www.crv4all.nl CRV homepage]&lt;br /&gt;
* [http://www.hendrix-genetics.com Hendrix Genetics homepage]&lt;br /&gt;
* [http://www.topigs.com TOPIGS homepage]&lt;br /&gt;
| width=&amp;quot;30%&amp;quot; |&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Scientific_Linux Scientific Linux]&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Help:Cheatsheet Help with editing Wiki pages]&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Lith010</name></author>
	</entry>
</feed>