Filesystems

Anunna currently has multiple filesystem mounts that are available cluster-wide:


== Global ==
* /home - This mount uses NFS to mount the home directories over the slower internal network from the active master. Each user has a 200G quota for this filesystem; it is regularly backed up to tape and can reliably be restored from up to a week's history. Use this for programs and configuration files.


* /shared - This mount provides a consistent set of binaries and configuration files for the entire cluster.
 
* /lustre - This large and fast mount uses the Lustre parallel filesystem to provide files from multiple redundant servers over the fast Omnipath network. Access is provided per group (users can only create directories and files in the folders of the organisation they belong to), thus:
/lustre/[level]/[partner]/[unit]
e.g.
/lustre/backup/WUR/ABGC/
It comprises two major parts (and some minor ones); a short sketch of working under these paths follows the list below:
* /lustre/'''nobackup''' - This is the 'normal' Lustre filesystem: data is simply stored on the filesystem, with no backups. Because no backup has to be kept, storing data here costs less than under /lustre/backup, but in case of disaster it cannot be recovered.
* /lustre/'''backup''' - In case of disaster, this data is stored a second time on a separate machine. Whilst this backup is purely in case of complete tragedy (such as some immense filesystem error, or multiple component failure), it can potentially be used to revert mistakes if you are very fast about reporting them. There is however no guarantee of this service.
* /lustre/'''shared''' - Same as /lustre/backup, except publicly available. This is where truly shared data lives that isn't assigned to a specific group.
 
And additionally:
* /lustre/'''scratch''' - Files here may be removed after some time (typically 30 days) if the filesystem gets too full. You should tidy up this data yourself once work is complete.
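As an illustration, here is a minimal sketch of setting up working directories under these paths. It assumes the WUR/ABGC example group used above and an illustrative per-user layout; substitute your own partner, unit and group conventions.
<pre>
# Hypothetical project layout under the example WUR/ABGC paths -- adjust to your own group.
mkdir -p /lustre/nobackup/WUR/ABGC/$USER/myproject   # working data: cheaper, but not recoverable
mkdir -p /lustre/backup/WUR/ABGC/$USER/results       # data that must survive a disaster
mkdir -p /lustre/scratch/WUR/ABGC/$USER/tmp          # short-lived files, cleaned up after ~30 days
</pre>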
 
=== Private shared directories ===
If you are working with a group of users on a similar project, you might consider making a [[Shared_folders|Shared directory]] to coordinate. Information on how to do so is in the linked article.
 
== Local ==
Some other filesystems are available to you only on specific machines:
* '''/archive''' - an archive mount only accessible from the login nodes. The cost of storing data here is lower than on Lustre, but it cannot be used for compute work. This location is only available to WUR users. Files can be restored from backup, but backups are only made at fortnightly (14-day) intervals.
 
* /tmp - On each worker node there is a /tmp mount that can be used for temporary local caching. Be advised that you should clean this up yourself, lest your files become a hindrance to other users. You can request a node with enough free space in your sbatch script like so (a fuller example follows below):
<pre>
#SBATCH --tmp=<required space>
</pre>
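For context, here is a minimal sketch of a complete job script that stages data on node-local /tmp and cleans up afterwards; the paths, file names and sizes are placeholders, not site conventions.
<pre>
#!/bin/bash
#SBATCH --job-name=tmp_example
#SBATCH --tmp=20G                      # request a node with at least 20G free in /tmp

# Stage input onto the node-local disk (placeholder paths).
WORKDIR=/tmp/$USER/$SLURM_JOB_ID
mkdir -p "$WORKDIR"
cp /lustre/nobackup/WUR/ABGC/$USER/input.dat "$WORKDIR"/

# ... run your analysis against "$WORKDIR"/input.dat here ...

# Copy results back to Lustre and clean up so /tmp stays usable for others.
cp "$WORKDIR"/output.dat /lustre/nobackup/WUR/ABGC/$USER/
rm -rf "$WORKDIR"
</pre>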
 
* /dev/shm - On each worker you may also create a virtual filesystem directly into memory, for extremely fast data access. Be advised that this will count against the memory used for your job, but it is also the fastest available filesystem if needed.
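A similar minimal sketch for /dev/shm is shown below; remember that anything placed here counts towards the job's memory request, so size --mem accordingly. The file names are placeholders.
<pre>
#!/bin/bash
#SBATCH --mem=32G                      # the memory request must also cover files kept in /dev/shm

# Keep a frequently-read file entirely in memory (placeholder name).
SHMDIR=/dev/shm/$USER/$SLURM_JOB_ID
mkdir -p "$SHMDIR"
cp /lustre/nobackup/WUR/ABGC/$USER/reference.idx "$SHMDIR"/

# ... run your tool against "$SHMDIR"/reference.idx here ...

# Remove the files again to release the memory.
rm -rf "$SHMDIR"
</pre>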
 
== iRods ==
On Anunna we host our own iRods instance.
 
With that you can push data to the WUR tape storage for archiving at very low cost.
 
For more info on how to use it, please see https://irods.wur.nl/.
 
The best course of action is to loosely follow the course using your own data, and to use your personal space for uploading data and transferring it to tape.
 
Be sure to check whether the data is correctly stored on tape before you remove your data!


On Anunna there are some differences and additions compared to the documentation on that site (a short sketch of the workflow follows the list below):


* The zone is HPC
* With <code>iinit</code> you can initialise the iRODS environment. Use your account password.
* With <code>ils</code> you can see your available iRODS collections. You need one of these as the destination location for <code>itape</code>.
* We provide a helper to ease uploads (use <code>-h</code> for help): <code>itape</code>.
* We provide aliases to ease checking the status of your archive process (it takes a while): <code>itapestat</code> and <code>itapestatnp</code>. The first is for human use and shows a paginated status of all your files; the latter dumps all the info, so you can e.g. use grep to filter.
* If you remove data with <code>irm</code> within iRODS, the data isn't actually removed but moved to a trash bin. The advantage is that you can retrieve it if the removal was a mistake; the disadvantage is that the data will keep costing money. To fix that, either use <code>irm -f</code> or the icommand to empty the trash; see <code>irmtrash -h</code>.
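Putting the above together, a minimal sketch of an archiving session looks like this; only the commands mentioned above are used, and the exact <code>itape</code> arguments should be taken from its built-in help.
<pre>
iinit          # initialise the iRODS environment; use your account password
ils            # list your collections and pick one as the destination for itape
itape -h       # show the upload helper's options, then upload your data with it
itapestat      # later: check the (paginated) status of your archive process
</pre>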


Because of hardware limitations on the backend tape storage, the filesize limit for our tape archive is 5T.


== See also ==
* [[Tariffs | Costs associated with resource usage]]


== External links ==
* [http://wiki.lustre.org/index.php/Main_Page Lustre website]
