Main Page
Anunna is a High Performance Computing (HPC) infrastructure hosted by Wageningen University & Research Centre. It is open for use by all WUR research groups as well as other organizations, including companies, that have collaborative projects with WUR.
Access Policy
Main article: Access Policy
Access must be granted explicitly (FB-IT creates an account on the cluster for you). Use of resources is limited by the scheduler. Note that the use of Anunna is not free of charge.
Using Anunna
Gaining access to Anunna
Access to the cluster and file transfer are traditionally done via SSH and SFTP; a minimal command-line sketch follows the list below.
- Logging into cluster using ssh
- File transfer options
- Alternative access methods, and extra features and services on Anunna
- Data storage methods on Anunna
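As a rough illustration of SSH access and SFTP/SCP file transfer, the commands below assume a hypothetical login node name (login.anunna.wur.nl) and a placeholder user name; the pages linked above document the actual host names and access methods.

```bash
# Log in to the cluster over SSH (host name is an assumption; see the pages above)
ssh your_wur_user@login.anunna.wur.nl

# Interactive file transfer with SFTP
sftp your_wur_user@login.anunna.wur.nl

# ...or copy a single file non-interactively with SCP
scp results.tar.gz your_wur_user@login.anunna.wur.nl:~/
```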
Using Anunna for courses (mainly Jupyter notebooks)
- Steps involved to run a course on Anunna
Events
- Courses that have happened and are happening
- Downtime that will affect all users
- Meetings that may affect the policies of Anunna
Software
- Modules
- Apptainer
- Python
- R
- Julia
Web Apps
- Jupyter
Other Software
Cluster Scheduler
Anunna uses Slurm as its job scheduler; a minimal job script sketch follows the list below.
- Submit jobs with Slurm
- See how busy the cluster currently is with 'node_usage_graph'
- Rosetta Stone of Workload Managers
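As a minimal sketch of submitting work through Slurm, the batch script below requests one CPU and a short time limit. The resource limits are generic assumptions, and Anunna-specific settings such as partition or account names are deliberately omitted; the Slurm pages above document the actual values to use.

```bash
#!/bin/bash
# Minimal Slurm batch script (resource values are assumptions, not Anunna defaults)
#SBATCH --job-name=example
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=1
#SBATCH --mem=1G
#SBATCH --time=00:10:00
#SBATCH --output=example_%j.out

# The actual work: report which compute node the job landed on
echo "Running on $(hostname)"
```

Submit it with `sbatch example.sh`, inspect the queue with `squeue -u $USER`, and cancel a job with `scancel <jobid>`.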
Installation of software by users
- Installing domain specific software: installation by users
- Setting local variables
- Installing R packages locally
- Setting up and using a virtual environment for Python3
- Setting up and using a virtual environment for Python3.4 or higher
- Installing WRF and WPS
- Running scripts on a fixed timeschedule (cron)
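As a rough sketch of user-level installations covered by the pages above, the commands below create a Python 3 virtual environment and install an R package into a personal library. Package names and directory locations are illustrative assumptions; the linked pages describe the recommended setup on Anunna.

```bash
# Create and activate a Python 3 virtual environment in your home directory
python3 -m venv ~/venvs/myproject
source ~/venvs/myproject/bin/activate
pip install numpy pandas

# Install an R package into a personal library (directory is an assumed location)
mkdir -p "$HOME/R/library"
Rscript -e '.libPaths("~/R/library"); install.packages("data.table", repos = "https://cloud.r-project.org")'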
Installed software
Useful Notes
Being in control of environment parameters
- Using environment modules
- Aliases and local variables
- Setting local variables
- Set a custom temporary directory location
- Installing R packages locally
- Setting up and using a virtual environment for Python3
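To illustrate the items in the list above, the snippet below loads software through environment modules, points TMPDIR at a scratch location, and defines an alias and a local variable. The module name and scratch path are assumptions; the linked pages list the modules and file systems actually available on Anunna.

```bash
# Discover and load software through environment modules (module/version is an assumption)
module avail
module load python/3.10

# Point temporary files at a scratch location (path is an assumption)
export TMPDIR=/scratch/$USER/tmp
mkdir -p "$TMPDIR"

# A convenience alias and a local variable, typically placed in ~/.bashrc
alias sq='squeue -u $USER'
export PROJECT_DIR="$HOME/projects/current"
```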
Controlling costs
Management
Product Owner of Anunna is Alexander van Ittersum (Wageningen UR, FB-IT, C&PS). Fons Prinsen (Wageningen UR, FB-IT, C&PS) is responsible for Maintenance and Management of the cluster.
- Ambitions regarding innovation, support and administration of Anunna (Roadmap)
Miscellaneous
- Electronic mail discussion lists
- Historical information on the startup of Anunna
- Bioinformatics tips, tricks, and workflows
- Running parallel R code on SLURM
- Convert between MediaWiki format and other formats
- GitLab: Create projects and add scripts
- Monitoring job execution
- Working with shared folders in the Lustre file system
- Running older binaries on the updated OS
See also
- Maintenance and Management
- BCData
- Electronic mail discussion lists
- About ABGC
- High Performance Computing @ABGC
- Lustre Parallel File System layout