The Agrogenomics cluster is a High Performance Computing (HPC) infrastructure hosted by Wageningen University & Research Centre. It is open for use by all WUR research groups, as well as by other organisations, including companies, that have collaborative projects with WUR.
The Agrogenomics HPC was an initiative of the Breed4Food (B4F) consortium, consisting of the Animal Breeding and Genomics Centre (WU-Animal Breeding and Genomics and Wageningen Livestock Research) and four major breeding companies: Cobb-Vantress, CRV, Hendrix Genetics, and TOPIGS. Currently, in addition to the original partners, the HPC (HPC-Ag) is used by other groups from Wageningen UR (Bioinformatics, Centre for Crop Systems Analysis, Environmental Sciences Group, and Plant Research International) and by the plant breeding industry (Rijk Zwaan).
Using the HPC-Ag
Gaining access to the HPC-Ag
Access to the cluster and file transfer are traditionally done via SSH and SFTP.
- Logging into the cluster using SSH, and transferring files
- Alternative access methods, and extra features and services on the HPC
- Accessible storage methods on the HPC
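A minimal sketch of a typical session, assuming a hypothetical login node name login.hpcagro.wur.nl and username user001 (use the host name and account details supplied with your account):

    # Log in to the cluster over SSH
    ssh user001@login.hpcagro.wur.nl

    # Transfer files interactively over SFTP
    sftp user001@login.hpcagro.wur.nl

    # Or copy a single file non-interactively with scp
    scp results.tar.gz user001@login.hpcagro.wur.nl:~/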
Access Policy
Access must be granted explicitly, by FB-IT creating an account on the cluster. Use of resources is limited by the scheduler: the queues ('partitions') granted to a user determine that user's priority on the system's resources. Note that use of the HPC-Ag is not free of charge. The list price of CPU time and storage, and possible discounts on that list price for your organisation, can be obtained from CAT-AGRO or FB-ICT.
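Once your account exists, standard Slurm commands show what you have been granted; a brief sketch:

    # Summarised list of the partitions visible to your account
    sinfo -s

    # Jobs currently queued or running under your account
    squeue -u $USER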
Events
- Courses that have been held or are upcoming
- Downtime that will affect all users
- Meetings that may affect the policies of the HPC
Other Software
Cluster Management Software and Scheduler
The HPC-Ag uses Bright Cluster Manager software for overall cluster management, and Slurm as the job scheduler.
- Monitor cluster status with BCM
- Submit jobs with Slurm
- Check the cluster's current workload with 'node_usage_graph'
- Rosetta Stone of Workload Managers
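A minimal sketch of a Slurm batch script; the partition name and resource requests are illustrative, so substitute a partition that has been granted to your account:

    #!/bin/bash
    #SBATCH --job-name=example
    #SBATCH --partition=ABGC_Std    # illustrative; use a partition granted to you
    #SBATCH --ntasks=1
    #SBATCH --cpus-per-task=4
    #SBATCH --mem=8G
    #SBATCH --time=01:00:00
    #SBATCH --output=example_%j.out

    # Replace with your own program
    echo "Running on $(hostname)"

Submit the script with 'sbatch example.sh' and follow its progress with 'squeue -u $USER'.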
Installation of software by users
- Installing domain specific software: installation by users
- Setting local variables
- Installing R packages locally
- Setting up and using a virtual environment for Python3
- Setting up and using a virtual environment for Python3.4 or higher
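The pages above give the full instructions; as a rough orientation (library and environment paths are illustrative), user-local installations typically look like this:

    # Install an R package into a personal library
    mkdir -p ~/R/library
    export R_LIBS_USER=~/R/library
    R -e 'install.packages("data.table", repos="https://cloud.r-project.org")'

    # Create and activate a Python 3 virtual environment, then install into it
    python3 -m venv ~/venvs/myproject
    source ~/venvs/myproject/bin/activate
    pip install numpy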
Installed software

- Globally installed software
- ABGC specific modules
Useful Notes
Controlling environment parameters
- Using environment modules
- Setting local variables
- Set a custom temporary directory location
- Installing R packages locally
- Setting up and using a virtual environment for Python3
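A sketch of a typical session combining these (the module name and scratch path are illustrative):

    # List available environment modules and load one
    module avail
    module load R

    # Redirect temporary files to a custom location
    export TMPDIR=/lustre/scratch/$USER/tmp
    mkdir -p "$TMPDIR"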
Controlling costs

- Using SACCT to see your costs
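sacct is Slurm's standard accounting tool; a sketch of a typical query (the start date and field selection are illustrative):

    # Your jobs since 1 July, with elapsed wall time and consumed CPU time
    sacct -S 2018-07-01 --format=JobID,JobName,Partition,Elapsed,CPUTime,State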
Management
The Project Leader of the HPC is Stephen Janssen (Wageningen UR, FB-IT, Service Management). Jan van Lith (Wageningen UR, FB-IT, Infrastructure) and Gwen Dawes (Wageningen UR, FB-IT, Infrastructure) are responsible for Maintenance and Management.
Miscellaneous
- Electronic mail discussion lists
- Historical information on the startup of the HPC
- Bioinformatics tips, tricks, and workflows
- Running parallel R code on SLURM
- Convert between MediaWiki format and other formats
- GitLab: Create projects and add scripts
See also
- Maintenance and Management
- BCData
- Electronic mail discussion lists
- About ABGC
- High Performance Computing @ABGC
- Lustre Parallel File System layout