Main Page
The Agrogenomics cluster is a High Performance Computing (HPC) infrastructure hosted by Wageningen University & Research Centre. It is open to all WUR research groups as well as other organizations, including companies, that have collaborative projects with WUR.
The Agrogenomics HPC was an initiative of the Breed4Food (B4F) consortium, consisting of the Animal Breeding and Genomics Centre (WU-Animal Breeding and Genomics and Wageningen Livestock Research) and four major breeding companies: Cobb-Vantress, CRV, Hendrix Genetics, and TOPIGS. Currently, in addition to the original partners, the HPC (HPC-Ag) is used by other groups from Wageningen UR (Bioinformatics, Centre for Crop Systems Analysis, Environmental Sciences Group, and Plant Research International) and by the plant breeding industry (Rijk Zwaan).
Events
- Courses, past and upcoming
- Downtime that will affect all users
- Meetings that may affect the policies of the HPC
Using the HPC-Ag
Gaining access to the HPC-Ag
Access to the cluster and file transfer are handled by SSH-based protocols; a short sketch follows the list below.
- Logging in to the cluster using ssh, and transferring files
- Available features and services on the HPC
- Accessible storage methods on the HPC
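As a minimal sketch of the SSH-based workflow: here <user> and <login-node> are placeholders for your account name and the cluster's login host, which the pages above document.

 # Log in to the cluster
 ssh <user>@<login-node>

 # Copy a single file to your home directory on the cluster
 scp results.tar.gz <user>@<login-node>:~/

 # Synchronise a whole directory over ssh with rsync
 rsync -av --progress mydata/ <user>@<login-node>:~/mydata/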
Access Policy
Access must be granted explicitly: FB-IT creates an account on the cluster for each user. Use of resources is limited by the scheduler; priority to the system's resources is regulated through the queues ('partitions') a user has been granted. Note that use of the HPC-Ag is not free of charge. The list price of CPU time and storage, and possible discounts on that list price for your organisation, can be obtained from CAT-AGRO or FB-ICT.
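Which partitions you have been granted determines where your jobs can run. As a sketch, assuming standard Slurm tooling, you can get a quick overview from the command line:

 # List the partitions visible to you: name, availability, time limit,
 # node count, and node state
 sinfo -o "%P %a %l %D %t"

 # Show your own pending and running jobs
 squeue -u $USER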
Cluster Management Software and Scheduler
The HPC-Ag uses Bright Cluster Manager software for overall cluster management, and Slurm as its job scheduler; a minimal job script is sketched after the list below.
- Monitor cluster status with BCM
- Submit jobs with Slurm
- Check the cluster's current load with 'node_usage_graph'
- Rosetta Stone of Workload Managers
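A minimal Slurm job script, as a sketch only: the partition name, resource values, and program name are placeholders, and the Slurm page above documents the site's actual settings.

 #!/bin/bash
 # Minimal job script; %j in the output file name expands to the job ID
 #SBATCH --job-name=example
 #SBATCH --partition=<partition>
 #SBATCH --ntasks=1
 #SBATCH --cpus-per-task=4
 #SBATCH --mem=8G
 #SBATCH --time=01:00:00
 #SBATCH --output=example_%j.log

 # Replace with your own program; Slurm exports the reserved core count
 srun my_program --threads "$SLURM_CPUS_PER_TASK"

Submit the script with sbatch job.sh and follow its state with squeue -u $USER.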
Installation of software by users
- Installing domain specific software: installation by users
- Setting local variables
- Installing R packages locally
- Setting up and using a virtual environment for Python3
- Setting up and using a virtual environment for Python3.4 or higher
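As a rough sketch of the user-level installation workflows covered by the pages above (package names and paths are examples, not site policy):

 # Install an R package into a personal library
 mkdir -p ~/R/library
 export R_LIBS_USER=~/R/library
 Rscript -e 'install.packages("data.table", repos = "https://cloud.r-project.org")'

 # Create and use a Python 3 virtual environment
 python3 -m venv ~/venvs/myproject
 source ~/venvs/myproject/bin/activate
 pip install numpy
 deactivate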
Installed software
Being in control of Environment parameters
- Using environment modules
- Setting local variables
- Set a custom temporary directory location
- Installing R packages locally
- Setting up and using a virtual environment for Python3
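A sketch of day-to-day environment handling with the standard environment-modules commands; the module name and the temporary directory path are examples.

 # Discover and load software provided as environment modules
 module avail                 # list available modules
 module load R                # load a module
 module list                  # show what is currently loaded
 module unload R              # unload it again

 # Point temporary files at a custom location for this session
 mkdir -p ~/tmp
 export TMPDIR=~/tmp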
Controlling costs
Management
The project leader of the HPC is Stephen Janssen (Wageningen UR, FB-IT, Service Management). Jan van Lith (Wageningen UR, FB-IT, Infrastructure) and Gwen Dawes (Wageningen UR, FB-IT, Infrastructure) are responsible for maintenance and management.
Miscellaneous
- Electronic mail discussion lists
- Historical information on the startup of the HPC
- Bioinformatics tips, tricks, and workflows
- Running parallel R code on SLURM
- Convert between MediaWiki format and other formats
- GitLab: Create projects and add scripts
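The parallel R page above treats the topic in depth; the following is only a minimal sketch of combining a Slurm core reservation with R's parallel package (script contents and core count are illustrative):

 #!/bin/bash
 #SBATCH --job-name=parallel-R
 #SBATCH --ntasks=1
 #SBATCH --cpus-per-task=8
 #SBATCH --time=02:00:00

 # Fan work out over the reserved cores with mclapply
 Rscript -e 'res <- parallel::mclapply(1:100, function(i) i^2,
                 mc.cores = as.integer(Sys.getenv("SLURM_CPUS_PER_TASK")))'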
See also
- Maintenance and Management
- BCData
- Electronic mail discussion lists
- About ABGC
- High Performance Computing @ABGC
- Lustre Parallel File System layout