<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://wiki.anunna.wur.nl/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Haars001</id>
	<title>HPCwiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://wiki.anunna.wur.nl/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Haars001"/>
	<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php/Special:Contributions/Haars001"/>
	<updated>2026-04-18T17:16:22Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.43.1</generator>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=Matlab&amp;diff=2165</id>
		<title>Matlab</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=Matlab&amp;diff=2165"/>
		<updated>2022-05-10T08:19:27Z</updated>

		<summary type="html">&lt;p&gt;Haars001: Fix markup&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;MATLAB is a proprietary numerical computing language owned by MathWorks. It is installed on the HPC as part of the WUR academic license, so it is only available to WUR users.&lt;br /&gt;
&lt;br /&gt;
== Getting Started with Parallel Computing using MATLAB on the Anunna HPC Cluster ==&lt;br /&gt;
&lt;br /&gt;
This document provides the steps to configure MATLAB to submit jobs to a cluster, retrieve results, and debug errors.&lt;br /&gt;
&lt;br /&gt;
=== CONFIGURATION – MATLAB client on the cluster === &lt;br /&gt;
After logging into the cluster, configure MATLAB to run parallel jobs on the cluster by calling the shell script &#039;&#039;&#039;configCluster.sh&#039;&#039;&#039; (after running &#039;&#039;&#039;module load matlab&#039;&#039;&#039;).&lt;br /&gt;
This only needs to be called once per version of MATLAB.&lt;br /&gt;
 $ module load matlab&lt;br /&gt;
 $ configCluster.sh&lt;br /&gt;
Jobs will now default to the cluster rather than submit to the login node.&lt;br /&gt;
=== INSTALLATION and CONFIGURATION – MATLAB client on the desktop === &lt;br /&gt;
&lt;br /&gt;
The Anunna MATLAB support package can be downloaded from the following locations:&lt;br /&gt;
&lt;br /&gt;
Windows: https://git.wur.nl/WUR-MATLAB-tools/support_packages/-/raw/main/wur.nonshared.R2022a.zip?inline=false&lt;br /&gt;
&lt;br /&gt;
Linux/macOS: https://git.wur.nl/WUR-MATLAB-tools/support_packages/-/raw/main/wur.nonshared.R2022a.tar.gz?inline=false&lt;br /&gt;
&lt;br /&gt;
Download the appropriate archive file and start MATLAB.  The archive file should be untarred/unzipped in the location returned by calling&lt;br /&gt;
 &amp;gt;&amp;gt; userpath&lt;br /&gt;
Configure MATLAB to run parallel jobs on your cluster by calling &#039;&#039;&#039;configCluster&#039;&#039;&#039;.  &#039;&#039;&#039;configCluster&#039;&#039;&#039; only needs to be called once per version of MATLAB.&lt;br /&gt;
 &amp;gt;&amp;gt; configCluster&lt;br /&gt;
Submission to the remote cluster requires SSH credentials.  You will be prompted for your ssh username and password or identity file (private key).  The username and location of the private key will be stored in MATLAB for future sessions.&lt;br /&gt;
Jobs will now default to the cluster rather than submit to the local machine.&lt;br /&gt;
NOTE: If you would like to submit to the local machine instead, run the following command:&lt;br /&gt;
 &amp;gt;&amp;gt; % Get a handle to the local resources&lt;br /&gt;
 &amp;gt;&amp;gt; c = parcluster(&#039;local&#039;);&lt;br /&gt;
==== CONFIGURING JOBS ==== &lt;br /&gt;
Prior to submitting the job, we can specify various parameters to pass to our jobs, such as queue, e-mail, walltime, etc.  The following is a partial list of parameters.  See AdditionalProperties for the complete list.  Only MemUsage and WallTime are required. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt;&amp;gt; % Get a handle to the cluster&lt;br /&gt;
&amp;gt;&amp;gt; c = parcluster;&lt;br /&gt;
[REQUIRED]&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Specify memory to use for MATLAB jobs, per core (default: 4gb)&lt;br /&gt;
&amp;gt;&amp;gt; c.AdditionalProperties.MemUsage = &#039;6gb&#039;;&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Specify the walltime (e.g., 5 hours)&lt;br /&gt;
&amp;gt;&amp;gt; c.AdditionalProperties.WallTime = &#039;05:00:00&#039;;&lt;br /&gt;
&lt;br /&gt;
[OPTIONAL]&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Specify an account to use for MATLAB jobs&lt;br /&gt;
&amp;gt;&amp;gt; c.AdditionalProperties.AccountName = &#039;account-name&#039;;&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Assign a comment to the job&lt;br /&gt;
&amp;gt;&amp;gt; c.AdditionalProperties.Comment = &#039;a-comment&#039;;&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Request a specific GPU flavor (e.g., V100)&lt;br /&gt;
&amp;gt;&amp;gt; c.AdditionalProperties.Constraint = &#039;V100&#039;;&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Specify e-mail address to receive notifications about your job&lt;br /&gt;
&amp;gt;&amp;gt; c.AdditionalProperties.EmailAddress = &#039;user-id@wur.nl&#039;;&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Specify number of GPUs&lt;br /&gt;
&amp;gt;&amp;gt; c.AdditionalProperties.GpusPerNode = 1;&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Specify a QoS (default: std)&lt;br /&gt;
&amp;gt;&amp;gt; c.AdditionalProperties.QoS = &#039;the-qos&#039;;&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Specify a queue to use for MATLAB jobs				&lt;br /&gt;
&amp;gt;&amp;gt; c.AdditionalProperties.QueueName = &#039;queue-name&#039;;&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Require exclusive nodes&lt;br /&gt;
&amp;gt;&amp;gt; c.AdditionalProperties.RequireExclusiveNode = true;&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Specify a reservation&lt;br /&gt;
&amp;gt;&amp;gt; c.AdditionalProperties.Reservation = &#039;a-reservation&#039;;&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Request there be (for example) 20 GB of local disk space in /tmp&lt;br /&gt;
&amp;gt;&amp;gt; c.AdditionalProperties.Tmp = &#039;20g&#039;;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Save the profile after modifying AdditionalProperties so the changes persist between MATLAB sessions.&lt;br /&gt;
 &amp;gt;&amp;gt; c.saveProfile&lt;br /&gt;
&lt;br /&gt;
To see the values of the current configuration options, display AdditionalProperties.&lt;br /&gt;
&lt;br /&gt;
 &amp;gt;&amp;gt; % To view current properties&lt;br /&gt;
 &amp;gt;&amp;gt; c.AdditionalProperties&lt;br /&gt;
&lt;br /&gt;
Unset a value when no longer needed.&lt;br /&gt;
 &amp;gt;&amp;gt; % Turn off email notifications &lt;br /&gt;
 &amp;gt;&amp;gt; c.AdditionalProperties.EmailAddress = &#039;&#039;;&lt;br /&gt;
 &amp;gt;&amp;gt; c.saveProfile&lt;br /&gt;
==== INTERACTIVE JOBS - MATLAB client on the cluster ==== &lt;br /&gt;
To run an interactive pool job on the cluster, continue to use parpool as you’ve done before.&lt;br /&gt;
Be aware that (depending on load on the cluster) it might take a while for your requested resources to be available.&lt;br /&gt;
 &amp;gt;&amp;gt; % Get a handle to the cluster&lt;br /&gt;
 &amp;gt;&amp;gt; c = parcluster;&lt;br /&gt;
&lt;br /&gt;
 &amp;gt;&amp;gt; % Open a pool of 64 workers on the cluster&lt;br /&gt;
 &amp;gt;&amp;gt; pool = c.parpool(64);&lt;br /&gt;
&lt;br /&gt;
Rather than running locally on your machine, the pool can now run across multiple nodes on the cluster.&lt;br /&gt;
&lt;br /&gt;
 &amp;gt;&amp;gt; % Run a parfor over 1000 iterations&lt;br /&gt;
 &amp;gt;&amp;gt; parfor idx = 1:1000&lt;br /&gt;
       a(idx) = …&lt;br /&gt;
    end&lt;br /&gt;
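&lt;br /&gt;
For instance, a complete version of the loop above (with a hypothetical per-iteration computation; any independent loop body will do) could read:&lt;br /&gt;
 &amp;gt;&amp;gt; % Hypothetical example: each iteration computes one array element&lt;br /&gt;
 &amp;gt;&amp;gt; parfor idx = 1:1000&lt;br /&gt;
        a(idx) = sqrt(idx);&lt;br /&gt;
     end&lt;br /&gt;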
&lt;br /&gt;
Once we’re done with the pool, delete it.&lt;br /&gt;
&lt;br /&gt;
 &amp;gt;&amp;gt; % Delete the pool&lt;br /&gt;
 &amp;gt;&amp;gt; pool.delete&lt;br /&gt;
&lt;br /&gt;
==== INDEPENDENT BATCH JOB ====&lt;br /&gt;
&lt;br /&gt;
Use the batch command to submit asynchronous jobs to the cluster.  The batch command will return a job object which is used to access the output of the submitted job.  See the MATLAB documentation for more help on batch.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt;&amp;gt; % Get a handle to the cluster&lt;br /&gt;
&amp;gt;&amp;gt; c = parcluster;&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Submit job to query where MATLAB is running on the cluster&lt;br /&gt;
&amp;gt;&amp;gt; job = c.batch(@pwd, 1, {}, ...&lt;br /&gt;
       &#039;CurrentFolder&#039;,&#039;.&#039;, &#039;AutoAddClientPath&#039;,false);&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Query job for state&lt;br /&gt;
&amp;gt;&amp;gt; job.State&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % If state is finished, fetch the results&lt;br /&gt;
&amp;gt;&amp;gt; job.fetchOutputs{:}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Delete the job after results are no longer needed&lt;br /&gt;
&amp;gt;&amp;gt; job.delete&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
To retrieve a list of currently running or completed jobs, call parcluster to retrieve the cluster object.  The cluster object stores an array of jobs that were run, are running, or are queued to run.  This allows us to fetch the results of completed jobs.  Retrieve and view the list of jobs as shown below.&lt;br /&gt;
 &amp;gt;&amp;gt; c = parcluster;&lt;br /&gt;
 &amp;gt;&amp;gt; jobs = c.Jobs;&lt;br /&gt;
Once we’ve identified the job we want, we can retrieve the results as we’ve done previously. &lt;br /&gt;
fetchOutputs is used to retrieve function output arguments; if calling batch with a script, use load instead.  Data that has been written to files on the cluster needs to be retrieved directly from the file system (e.g. via ftp).&lt;br /&gt;
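&lt;br /&gt;
As a sketch of the script workflow (&#039;&#039;&#039;myscript.m&#039;&#039;&#039; is a hypothetical script name), the workspace variables are loaded into the client with load:&lt;br /&gt;
 &amp;gt;&amp;gt; job = c.batch(&#039;myscript&#039;);  % submit a script rather than a function&lt;br /&gt;
 &amp;gt;&amp;gt; job.wait;                   % block until the job finishes&lt;br /&gt;
 &amp;gt;&amp;gt; load(job);                  % load the script&#039;s workspace variables into the client&lt;br /&gt;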
&lt;br /&gt;
To view results of a previously completed job:&lt;br /&gt;
 &amp;gt;&amp;gt; % Get a handle to the job with ID 2&lt;br /&gt;
 &amp;gt;&amp;gt; job2 = c.Jobs(2);&lt;br /&gt;
&lt;br /&gt;
NOTE: You can view a list of your jobs, as well as their IDs, using the above c.Jobs command.  &lt;br /&gt;
 &amp;gt;&amp;gt; % Fetch results for job with ID 2&lt;br /&gt;
 &amp;gt;&amp;gt; job2.fetchOutputs{:}&lt;br /&gt;
&lt;br /&gt;
==== PARALLEL BATCH JOB ====&lt;br /&gt;
Users can also submit parallel workflows with the batch command.  Let’s use the following example for a parallel job, which is saved as &#039;&#039;&#039;parallel_example.m&#039;&#039;&#039;.   &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
function [t, A] = parallel_example(iter)&lt;br /&gt;
 &lt;br /&gt;
if nargin==0&lt;br /&gt;
    iter = 8;&lt;br /&gt;
end&lt;br /&gt;
 &lt;br /&gt;
disp(&#039;Start sim&#039;)&lt;br /&gt;
 &lt;br /&gt;
t0 = tic;&lt;br /&gt;
parfor idx = 1:iter&lt;br /&gt;
    A(idx) = idx;&lt;br /&gt;
    pause(2)&lt;br /&gt;
    idx&lt;br /&gt;
end&lt;br /&gt;
t = toc(t0);&lt;br /&gt;
 &lt;br /&gt;
disp(&#039;Sim completed&#039;)&lt;br /&gt;
 &lt;br /&gt;
save RESULTS A&lt;br /&gt;
 &lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
This time, when we use the batch command to run a parallel job, we’ll also specify a MATLAB Pool.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt;&amp;gt; % Get a handle to the cluster&lt;br /&gt;
&amp;gt;&amp;gt; c = parcluster;&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Submit a batch pool job using 4 workers for 16 simulations&lt;br /&gt;
&amp;gt;&amp;gt; job = c.batch(@parallel_example, 1, {16}, &#039;Pool&#039;,4, ...&lt;br /&gt;
       &#039;CurrentFolder&#039;,&#039;.&#039;, &#039;AutoAddClientPath&#039;,false);&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % View current job status&lt;br /&gt;
&amp;gt;&amp;gt; job.State&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Fetch the results after a finished state is retrieved&lt;br /&gt;
&amp;gt;&amp;gt; job.fetchOutputs{:}&lt;br /&gt;
ans = &lt;br /&gt;
	8.8872&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The job ran in 8.89 seconds using four workers.  Note that these jobs will always request N+1 CPU cores, since one worker is required to manage the batch job and pool of workers.   For example, a job that needs eight workers will consume nine CPU cores.&lt;br /&gt;
	&lt;br /&gt;
We’ll run the same simulation but increase the Pool size.  This time, to retrieve the results later, we’ll keep track of the job ID.&lt;br /&gt;
&lt;br /&gt;
NOTE: For some applications, there will be a diminishing return when allocating too many workers, as the overhead may exceed computation time.    &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt;&amp;gt; % Get a handle to the cluster&lt;br /&gt;
&amp;gt;&amp;gt; c = parcluster;&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Submit a batch pool job using 8 workers for 16 simulations&lt;br /&gt;
&amp;gt;&amp;gt; job = c.batch(@parallel_example, 1, {16}, &#039;Pool&#039;, 8, ...&lt;br /&gt;
       &#039;CurrentFolder&#039;,&#039;.&#039;, &#039;AutoAddClientPath&#039;,false);&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Get the job ID&lt;br /&gt;
&amp;gt;&amp;gt; id = job.ID&lt;br /&gt;
id =&lt;br /&gt;
	4&lt;br /&gt;
&amp;gt;&amp;gt; % Clear job from workspace (as though we quit MATLAB)&lt;br /&gt;
&amp;gt;&amp;gt; clear job&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Once we have a handle to the cluster, we’ll call the findJob method to search for the job with the specified job ID.   &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt;&amp;gt; % Get a handle to the cluster&lt;br /&gt;
&amp;gt;&amp;gt; c = parcluster;&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Find the old job&lt;br /&gt;
&amp;gt;&amp;gt; job = c.findJob(&#039;ID&#039;, 4);&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Retrieve the state of the job&lt;br /&gt;
&amp;gt;&amp;gt; job.State&lt;br /&gt;
ans = &lt;br /&gt;
finished&lt;br /&gt;
&amp;gt;&amp;gt; % Fetch the results&lt;br /&gt;
&amp;gt;&amp;gt; job.fetchOutputs{:}&lt;br /&gt;
ans = &lt;br /&gt;
4.7270&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The job now runs in 4.73 seconds using eight workers.  Run your code with different numbers of workers to determine the ideal number to use.&lt;br /&gt;
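&lt;br /&gt;
For example, a simple (hypothetical) sweep over pool sizes, reusing &#039;&#039;&#039;parallel_example.m&#039;&#039;&#039; from above, might look like:&lt;br /&gt;
 &amp;gt;&amp;gt; % Time the same workload with 2, 4, and 8 workers&lt;br /&gt;
 &amp;gt;&amp;gt; for w = [2 4 8]&lt;br /&gt;
        job = c.batch(@parallel_example, 1, {16}, &#039;Pool&#039;, w, ...&lt;br /&gt;
            &#039;CurrentFolder&#039;, &#039;.&#039;, &#039;AutoAddClientPath&#039;, false);&lt;br /&gt;
        job.wait;&lt;br /&gt;
        t = job.fetchOutputs{:};&lt;br /&gt;
        fprintf(&#039;%d workers: %.2f s\n&#039;, w, t);&lt;br /&gt;
     end&lt;br /&gt;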
Alternatively, to retrieve job results via a graphical user interface, use the Job Monitor (Parallel &amp;gt; Monitor Jobs).&lt;br /&gt;
 &lt;br /&gt;
==== DEBUGGING ====&lt;br /&gt;
If a serial job produces an error, call the getDebugLog method to view the error log file.  When submitting independent jobs with multiple tasks, specify the task number.&lt;br /&gt;
 &amp;gt;&amp;gt; c.getDebugLog(job.Tasks(3))&lt;br /&gt;
For Pool jobs, only specify the job object.&lt;br /&gt;
 &amp;gt;&amp;gt; c.getDebugLog(job)&lt;br /&gt;
When troubleshooting a job, the cluster admin may request the scheduler ID of the job.  This can be obtained by calling schedID:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt;&amp;gt; schedID(job)&lt;br /&gt;
ans = &lt;br /&gt;
25539&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
==== TO LEARN MORE ====&lt;br /&gt;
To learn more about the MATLAB Parallel Computing Toolbox, check out these resources:&lt;br /&gt;
* [https://www.mathworks.com/help/parallel-computing/examples.html Parallel Computing Coding Examples]&lt;br /&gt;
* [http://www.mathworks.com/help/distcomp/index.html Parallel Computing Documentation]&lt;br /&gt;
* [http://www.mathworks.com/products/parallel-computing/index.html Parallel Computing Overview]&lt;br /&gt;
* [http://www.mathworks.com/products/parallel-computing/tutorials.html Parallel Computing Tutorials]&lt;br /&gt;
* [http://www.mathworks.com/products/parallel-computing/videos.html Parallel Computing Videos]&lt;br /&gt;
* [http://www.mathworks.com/products/parallel-computing/webinars.html Parallel Computing Webinars]&lt;/div&gt;</summary>
		<author><name>Haars001</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=Matlab&amp;diff=2164</id>
		<title>Matlab</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=Matlab&amp;diff=2164"/>
		<updated>2022-05-10T08:18:45Z</updated>

		<summary type="html">&lt;p&gt;Haars001: Added links to support packages.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;MATLAB is a proprietary numerical computing language owned by MathWorks. It is installed on the HPC as part of the WUR academic license, so it is only available to WUR users.&lt;br /&gt;
&lt;br /&gt;
== Getting Started with Parallel Computing using MATLAB on the Anunna HPC Cluster ==&lt;br /&gt;
&lt;br /&gt;
This document provides the steps to configure MATLAB to submit jobs to a cluster, retrieve results, and debug errors.&lt;br /&gt;
&lt;br /&gt;
=== CONFIGURATION – MATLAB client on the cluster === &lt;br /&gt;
After logging into the cluster, configure MATLAB to run parallel jobs on the cluster by calling the shell script &#039;&#039;&#039;configCluster.sh&#039;&#039;&#039; (after running &#039;&#039;&#039;module load matlab&#039;&#039;&#039;).&lt;br /&gt;
This only needs to be called once per version of MATLAB.&lt;br /&gt;
 $ module load matlab&lt;br /&gt;
 $ configCluster.sh&lt;br /&gt;
Jobs will now default to the cluster rather than submit to the login node.&lt;br /&gt;
=== INSTALLATION and CONFIGURATION – MATLAB client on the desktop === &lt;br /&gt;
&lt;br /&gt;
The Anunna MATLAB support package can be downloaded from the following locations:&lt;br /&gt;
&lt;br /&gt;
[https://git.wur.nl/WUR-MATLAB-tools/support_packages/-/raw/main/wur.nonshared.R2022a.zip?inline=false Windows]&lt;br /&gt;
&lt;br /&gt;
[https://git.wur.nl/WUR-MATLAB-tools/support_packages/-/raw/main/wur.nonshared.R2022a.tar.gz?inline=false Linux/macOS]&lt;br /&gt;
&lt;br /&gt;
Download the appropriate archive file and start MATLAB.  The archive file should be untarred/unzipped in the location returned by calling&lt;br /&gt;
 &amp;gt;&amp;gt; userpath&lt;br /&gt;
Configure MATLAB to run parallel jobs on your cluster by calling &#039;&#039;&#039;configCluster&#039;&#039;&#039;.  &#039;&#039;&#039;configCluster&#039;&#039;&#039; only needs to be called once per version of MATLAB.&lt;br /&gt;
 &amp;gt;&amp;gt; configCluster&lt;br /&gt;
Submission to the remote cluster requires SSH credentials.  You will be prompted for your ssh username and password or identity file (private key).  The username and location of the private key will be stored in MATLAB for future sessions.&lt;br /&gt;
Jobs will now default to the cluster rather than submit to the local machine.&lt;br /&gt;
NOTE: If you would like to submit to the local machine instead, run the following command:&lt;br /&gt;
 &amp;gt;&amp;gt; % Get a handle to the local resources&lt;br /&gt;
 &amp;gt;&amp;gt; c = parcluster(&#039;local&#039;);&lt;br /&gt;
==== CONFIGURING JOBS ==== &lt;br /&gt;
Prior to submitting the job, we can specify various parameters to pass to our jobs, such as queue, e-mail, walltime, etc.  The following is a partial list of parameters.  See AdditionalProperties for the complete list.  Only MemUsage and WallTime are required. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt;&amp;gt; % Get a handle to the cluster&lt;br /&gt;
&amp;gt;&amp;gt; c = parcluster;&lt;br /&gt;
[REQUIRED]&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Specify memory to use for MATLAB jobs, per core (default: 4gb)&lt;br /&gt;
&amp;gt;&amp;gt; c.AdditionalProperties.MemUsage = &#039;6gb&#039;;&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Specify the walltime (e.g., 5 hours)&lt;br /&gt;
&amp;gt;&amp;gt; c.AdditionalProperties.WallTime = &#039;05:00:00&#039;;&lt;br /&gt;
&lt;br /&gt;
[OPTIONAL]&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Specify an account to use for MATLAB jobs&lt;br /&gt;
&amp;gt;&amp;gt; c.AdditionalProperties.AccountName = &#039;account-name&#039;;&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Assign a comment to the job&lt;br /&gt;
&amp;gt;&amp;gt; c.AdditionalProperties.Comment = &#039;a-comment&#039;;&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Request a specific GPU flavor (e.g., V100)&lt;br /&gt;
&amp;gt;&amp;gt; c.AdditionalProperties.Constraint = &#039;V100&#039;;&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Specify e-mail address to receive notifications about your job&lt;br /&gt;
&amp;gt;&amp;gt; c.AdditionalProperties.EmailAddress = &#039;user-id@wur.nl&#039;;&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Specify number of GPUs&lt;br /&gt;
&amp;gt;&amp;gt; c.AdditionalProperties.GpusPerNode = 1;&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Specify a QoS (default: std)&lt;br /&gt;
&amp;gt;&amp;gt; c.AdditionalProperties.QoS = &#039;the-qos&#039;;&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Specify a queue to use for MATLAB jobs				&lt;br /&gt;
&amp;gt;&amp;gt; c.AdditionalProperties.QueueName = &#039;queue-name&#039;;&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Require exclusive nodes&lt;br /&gt;
&amp;gt;&amp;gt; c.AdditionalProperties.RequireExclusiveNode = true;&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Specify a reservation&lt;br /&gt;
&amp;gt;&amp;gt; c.AdditionalProperties.Reservation = &#039;a-reservation&#039;;&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Request there be (for example) 20 GB of local disk space in /tmp&lt;br /&gt;
&amp;gt;&amp;gt; c.AdditionalProperties.Tmp = &#039;20g&#039;;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Save the profile after modifying AdditionalProperties so the changes persist between MATLAB sessions.&lt;br /&gt;
 &amp;gt;&amp;gt; c.saveProfile&lt;br /&gt;
&lt;br /&gt;
To see the values of the current configuration options, display AdditionalProperties.&lt;br /&gt;
&lt;br /&gt;
 &amp;gt;&amp;gt; % To view current properties&lt;br /&gt;
 &amp;gt;&amp;gt; c.AdditionalProperties&lt;br /&gt;
&lt;br /&gt;
Unset a value when no longer needed.&lt;br /&gt;
 &amp;gt;&amp;gt; % Turn off email notifications &lt;br /&gt;
 &amp;gt;&amp;gt; c.AdditionalProperties.EmailAddress = &#039;&#039;;&lt;br /&gt;
 &amp;gt;&amp;gt; c.saveProfile&lt;br /&gt;
==== INTERACTIVE JOBS - MATLAB client on the cluster ==== &lt;br /&gt;
To run an interactive pool job on the cluster, continue to use parpool as you’ve done before.&lt;br /&gt;
Be aware that (depending on load on the cluster) it might take a while for your requested resources to be available.&lt;br /&gt;
 &amp;gt;&amp;gt; % Get a handle to the cluster&lt;br /&gt;
 &amp;gt;&amp;gt; c = parcluster;&lt;br /&gt;
&lt;br /&gt;
 &amp;gt;&amp;gt; % Open a pool of 64 workers on the cluster&lt;br /&gt;
 &amp;gt;&amp;gt; pool = c.parpool(64);&lt;br /&gt;
&lt;br /&gt;
Rather than running locally on your machine, the pool can now run across multiple nodes on the cluster.&lt;br /&gt;
&lt;br /&gt;
 &amp;gt;&amp;gt; % Run a parfor over 1000 iterations&lt;br /&gt;
 &amp;gt;&amp;gt; parfor idx = 1:1000&lt;br /&gt;
       a(idx) = …&lt;br /&gt;
    end&lt;br /&gt;
&lt;br /&gt;
Once we’re done with the pool, delete it.&lt;br /&gt;
&lt;br /&gt;
 &amp;gt;&amp;gt; % Delete the pool&lt;br /&gt;
 &amp;gt;&amp;gt; pool.delete&lt;br /&gt;
&lt;br /&gt;
==== INDEPENDENT BATCH JOB ====&lt;br /&gt;
&lt;br /&gt;
Use the batch command to submit asynchronous jobs to the cluster.  The batch command will return a job object which is used to access the output of the submitted job.  See the MATLAB documentation for more help on batch.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt;&amp;gt; % Get a handle to the cluster&lt;br /&gt;
&amp;gt;&amp;gt; c = parcluster;&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Submit job to query where MATLAB is running on the cluster&lt;br /&gt;
&amp;gt;&amp;gt; job = c.batch(@pwd, 1, {}, ...&lt;br /&gt;
       &#039;CurrentFolder&#039;,&#039;.&#039;, &#039;AutoAddClientPath&#039;,false);&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Query job for state&lt;br /&gt;
&amp;gt;&amp;gt; job.State&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % If state is finished, fetch the results&lt;br /&gt;
&amp;gt;&amp;gt; job.fetchOutputs{:}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Delete the job after results are no longer needed&lt;br /&gt;
&amp;gt;&amp;gt; job.delete&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
To retrieve a list of currently running or completed jobs, call parcluster to retrieve the cluster object.  The cluster object stores an array of jobs that were run, are running, or are queued to run.  This allows us to fetch the results of completed jobs.  Retrieve and view the list of jobs as shown below.&lt;br /&gt;
 &amp;gt;&amp;gt; c = parcluster;&lt;br /&gt;
 &amp;gt;&amp;gt; jobs = c.Jobs;&lt;br /&gt;
Once we’ve identified the job we want, we can retrieve the results as we’ve done previously. &lt;br /&gt;
fetchOutputs is used to retrieve function output arguments; if calling batch with a script, use load instead.  Data that has been written to files on the cluster needs to be retrieved directly from the file system (e.g. via ftp).&lt;br /&gt;
&lt;br /&gt;
To view results of a previously completed job:&lt;br /&gt;
 &amp;gt;&amp;gt; % Get a handle to the job with ID 2&lt;br /&gt;
 &amp;gt;&amp;gt; job2 = c.Jobs(2);&lt;br /&gt;
&lt;br /&gt;
NOTE: You can view a list of your jobs, as well as their IDs, using the above c.Jobs command.  &lt;br /&gt;
 &amp;gt;&amp;gt; % Fetch results for job with ID 2&lt;br /&gt;
 &amp;gt;&amp;gt; job2.fetchOutputs{:}&lt;br /&gt;
&lt;br /&gt;
==== PARALLEL BATCH JOB ====&lt;br /&gt;
Users can also submit parallel workflows with the batch command.  Let’s use the following example for a parallel job, which is saved as &#039;&#039;&#039;parallel_example.m&#039;&#039;&#039;.   &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
function [t, A] = parallel_example(iter)&lt;br /&gt;
 &lt;br /&gt;
if nargin==0&lt;br /&gt;
    iter = 8;&lt;br /&gt;
end&lt;br /&gt;
 &lt;br /&gt;
disp(&#039;Start sim&#039;)&lt;br /&gt;
 &lt;br /&gt;
t0 = tic;&lt;br /&gt;
parfor idx = 1:iter&lt;br /&gt;
    A(idx) = idx;&lt;br /&gt;
    pause(2)&lt;br /&gt;
    idx&lt;br /&gt;
end&lt;br /&gt;
t = toc(t0);&lt;br /&gt;
 &lt;br /&gt;
disp(&#039;Sim completed&#039;)&lt;br /&gt;
 &lt;br /&gt;
save RESULTS A&lt;br /&gt;
 &lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
This time, when we use the batch command to run a parallel job, we’ll also specify a MATLAB Pool.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt;&amp;gt; % Get a handle to the cluster&lt;br /&gt;
&amp;gt;&amp;gt; c = parcluster;&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Submit a batch pool job using 4 workers for 16 simulations&lt;br /&gt;
&amp;gt;&amp;gt; job = c.batch(@parallel_example, 1, {16}, &#039;Pool&#039;,4, ...&lt;br /&gt;
       &#039;CurrentFolder&#039;,&#039;.&#039;, &#039;AutoAddClientPath&#039;,false);&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % View current job status&lt;br /&gt;
&amp;gt;&amp;gt; job.State&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Fetch the results after a finished state is retrieved&lt;br /&gt;
&amp;gt;&amp;gt; job.fetchOutputs{:}&lt;br /&gt;
ans = &lt;br /&gt;
	8.8872&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The job ran in 8.89 seconds using four workers.  Note that these jobs will always request N+1 CPU cores, since one worker is required to manage the batch job and pool of workers.   For example, a job that needs eight workers will consume nine CPU cores.&lt;br /&gt;
	&lt;br /&gt;
We’ll run the same simulation but increase the Pool size.  This time, to retrieve the results later, we’ll keep track of the job ID.&lt;br /&gt;
&lt;br /&gt;
NOTE: For some applications, there will be a diminishing return when allocating too many workers, as the overhead may exceed computation time.    &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt;&amp;gt; % Get a handle to the cluster&lt;br /&gt;
&amp;gt;&amp;gt; c = parcluster;&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Submit a batch pool job using 8 workers for 16 simulations&lt;br /&gt;
&amp;gt;&amp;gt; job = c.batch(@parallel_example, 1, {16}, &#039;Pool&#039;, 8, ...&lt;br /&gt;
       &#039;CurrentFolder&#039;,&#039;.&#039;, &#039;AutoAddClientPath&#039;,false);&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Get the job ID&lt;br /&gt;
&amp;gt;&amp;gt; id = job.ID&lt;br /&gt;
id =&lt;br /&gt;
	4&lt;br /&gt;
&amp;gt;&amp;gt; % Clear job from workspace (as though we quit MATLAB)&lt;br /&gt;
&amp;gt;&amp;gt; clear job&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Once we have a handle to the cluster, we’ll call the findJob method to search for the job with the specified job ID.   &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt;&amp;gt; % Get a handle to the cluster&lt;br /&gt;
&amp;gt;&amp;gt; c = parcluster;&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Find the old job&lt;br /&gt;
&amp;gt;&amp;gt; job = c.findJob(&#039;ID&#039;, 4);&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Retrieve the state of the job&lt;br /&gt;
&amp;gt;&amp;gt; job.State&lt;br /&gt;
ans = &lt;br /&gt;
finished&lt;br /&gt;
&amp;gt;&amp;gt; % Fetch the results&lt;br /&gt;
&amp;gt;&amp;gt; job.fetchOutputs{:}&lt;br /&gt;
ans = &lt;br /&gt;
4.7270&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The job now runs in 4.73 seconds using eight workers.  Run your code with different numbers of workers to determine the ideal number to use.&lt;br /&gt;
Alternatively, to retrieve job results via a graphical user interface, use the Job Monitor (Parallel &amp;gt; Monitor Jobs).&lt;br /&gt;
 &lt;br /&gt;
==== DEBUGGING ====&lt;br /&gt;
If a serial job produces an error, call the getDebugLog method to view the error log file.  When submitting independent jobs with multiple tasks, specify the task number.&lt;br /&gt;
 &amp;gt;&amp;gt; c.getDebugLog(job.Tasks(3))&lt;br /&gt;
For Pool jobs, only specify the job object.&lt;br /&gt;
 &amp;gt;&amp;gt; c.getDebugLog(job)&lt;br /&gt;
When troubleshooting a job, the cluster admin may request the scheduler ID of the job.  This can be obtained by calling schedID:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt;&amp;gt; schedID(job)&lt;br /&gt;
ans = &lt;br /&gt;
25539&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
==== TO LEARN MORE ====&lt;br /&gt;
To learn more about the MATLAB Parallel Computing Toolbox, check out these resources:&lt;br /&gt;
* [https://www.mathworks.com/help/parallel-computing/examples.html Parallel Computing Coding Examples]&lt;br /&gt;
* [http://www.mathworks.com/help/distcomp/index.html Parallel Computing Documentation]&lt;br /&gt;
* [http://www.mathworks.com/products/parallel-computing/index.html Parallel Computing Overview]&lt;br /&gt;
* [http://www.mathworks.com/products/parallel-computing/tutorials.html Parallel Computing Tutorials]&lt;br /&gt;
* [http://www.mathworks.com/products/parallel-computing/videos.html Parallel Computing Videos]&lt;br /&gt;
* [http://www.mathworks.com/products/parallel-computing/webinars.html Parallel Computing Webinars]&lt;/div&gt;</summary>
		<author><name>Haars001</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=Matlab&amp;diff=2163</id>
		<title>Matlab</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=Matlab&amp;diff=2163"/>
		<updated>2022-05-09T08:49:42Z</updated>

		<summary type="html">&lt;p&gt;Haars001: /* INTERACTIVE JOBS - MATLAB client on the cluster */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;MATLAB is a proprietary numerical computing language owned by MathWorks. It is installed on the HPC as part of the WUR academic license, so it is only available to WUR users.&lt;br /&gt;
&lt;br /&gt;
== Getting Started with Parallel Computing using MATLAB on the Anunna HPC Cluster ==&lt;br /&gt;
&lt;br /&gt;
This document provides the steps to configure MATLAB to submit jobs to a cluster, retrieve results, and debug errors.&lt;br /&gt;
&lt;br /&gt;
=== CONFIGURATION – MATLAB client on the cluster === &lt;br /&gt;
After logging into the cluster, configure MATLAB to run parallel jobs on the cluster by calling the shell script &#039;&#039;&#039;configCluster.sh&#039;&#039;&#039; (after running &#039;&#039;&#039;module load matlab&#039;&#039;&#039;).&lt;br /&gt;
This only needs to be called once per version of MATLAB.&lt;br /&gt;
 $ module load matlab&lt;br /&gt;
 $ configCluster.sh&lt;br /&gt;
Jobs will now default to the cluster rather than submit to the login node.&lt;br /&gt;
=== INSTALLATION and CONFIGURATION – MATLAB client on the desktop === &lt;br /&gt;
&lt;br /&gt;
The Anunna MATLAB support package can be downloaded from the following locations:&lt;br /&gt;
&lt;br /&gt;
Windows: 	TBD&lt;br /&gt;
&lt;br /&gt;
Linux/macOS: 	TBD&lt;br /&gt;
&lt;br /&gt;
Download the appropriate archive file and start MATLAB.  The archive file should be untarred/unzipped in the location returned by calling&lt;br /&gt;
 &amp;gt;&amp;gt; userpath&lt;br /&gt;
Configure MATLAB to run parallel jobs on your cluster by calling &#039;&#039;&#039;configCluster&#039;&#039;&#039;.  &#039;&#039;&#039;configCluster&#039;&#039;&#039; only needs to be called once per version of MATLAB.&lt;br /&gt;
 &amp;gt;&amp;gt; configCluster&lt;br /&gt;
Submission to the remote cluster requires SSH credentials.  You will be prompted for your ssh username and password or identity file (private key).  The username and location of the private key will be stored in MATLAB for future sessions.&lt;br /&gt;
Jobs will now default to the cluster rather than submit to the local machine.&lt;br /&gt;
NOTE: If you would like to submit to the local machine then run the following command:&lt;br /&gt;
 &amp;gt;&amp;gt; % Get a handle to the local resources&lt;br /&gt;
 &amp;gt;&amp;gt; c = parcluster(&#039;local&#039;);&lt;br /&gt;
==== CONFIGURING JOBS ==== &lt;br /&gt;
Prior to submitting the job, we can specify various parameters to pass to our jobs, such as queue, e-mail, walltime, etc.  The following is a partial list of parameters.  See AdditionalProperties for the complete list.  Only MemUsage and WallTime are required. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt;&amp;gt; % Get a handle to the cluster&lt;br /&gt;
&amp;gt;&amp;gt; c = parcluster;&lt;br /&gt;
[REQUIRED]&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Specify memory to use for MATLAB jobs, per core (default: 4gb)&lt;br /&gt;
&amp;gt;&amp;gt; c.AdditionalProperties.MemUsage = &#039;6gb&#039;;&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Specify the walltime (e.g., 5 hours)&lt;br /&gt;
&amp;gt;&amp;gt; c.AdditionalProperties.WallTime = &#039;05:00:00&#039;;&lt;br /&gt;
&lt;br /&gt;
[OPTIONAL]&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Specify an account to use for MATLAB jobs&lt;br /&gt;
&amp;gt;&amp;gt; c.AdditionalProperties.AccountName = &#039;account-name&#039;;&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Assign a comment to the job&lt;br /&gt;
&amp;gt;&amp;gt; c.AdditionalProperties.Comment = &#039;a-comment&#039;;&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Request a specific GPU flavor (e.g., V100)&lt;br /&gt;
&amp;gt;&amp;gt; c.AdditionalProperties.Constraint = &#039;V100&#039;;&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Specify e-mail address to receive notifications about your job&lt;br /&gt;
&amp;gt;&amp;gt; c.AdditionalProperties.EmailAddress = &#039;user-id@wur.nl&#039;;&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Specify number of GPUs&lt;br /&gt;
&amp;gt;&amp;gt; c.AdditionalProperties.GpusPerNode = 1;&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Specify a QoS (default: std)&lt;br /&gt;
&amp;gt;&amp;gt; c.AdditionalProperties.QoS = &#039;the-qos&#039;;&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Specify a queue to use for MATLAB jobs				&lt;br /&gt;
&amp;gt;&amp;gt; c.AdditionalProperties.QueueName = &#039;queue-name&#039;;&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Require exclusive nodes&lt;br /&gt;
&amp;gt;&amp;gt; c.AdditionalProperties.RequireExclusiveNode = true;&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Specify a reservation&lt;br /&gt;
&amp;gt;&amp;gt; c.AdditionalProperties.Reservation = &#039;a-reservation&#039;;&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Request there be (for example) 20 GB of local disk space in /tmp&lt;br /&gt;
&amp;gt;&amp;gt; c.AdditionalProperties.Tmp = &#039;20g&#039;;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Save the profile after modifying AdditionalProperties so that the changes persist between MATLAB sessions.&lt;br /&gt;
 &amp;gt;&amp;gt; c.saveProfile&lt;br /&gt;
&lt;br /&gt;
To see the values of the current configuration options, display AdditionalProperties.&lt;br /&gt;
&lt;br /&gt;
 &amp;gt;&amp;gt; % To view current properties&lt;br /&gt;
 &amp;gt;&amp;gt; c.AdditionalProperties&lt;br /&gt;
&lt;br /&gt;
Unset a value when no longer needed.&lt;br /&gt;
 &amp;gt;&amp;gt; % Turn off email notifications &lt;br /&gt;
 &amp;gt;&amp;gt; c.AdditionalProperties.EmailAddress = &#039;&#039;;&lt;br /&gt;
 &amp;gt;&amp;gt; c.saveProfile&lt;br /&gt;
==== INTERACTIVE JOBS - MATLAB client on the cluster ==== &lt;br /&gt;
To run an interactive pool job on the cluster, continue to use parpool as you’ve done before.&lt;br /&gt;
Be aware that (depending on load on the cluster) it might take a while for your requested resources to be available.&lt;br /&gt;
 &amp;gt;&amp;gt; % Get a handle to the cluster&lt;br /&gt;
 &amp;gt;&amp;gt; c = parcluster;&lt;br /&gt;
&lt;br /&gt;
 &amp;gt;&amp;gt; % Open a pool of 64 workers on the cluster&lt;br /&gt;
 &amp;gt;&amp;gt; pool = c.parpool(64);&lt;br /&gt;
&lt;br /&gt;
Rather than running on the local machine, the pool can now run across multiple nodes on the cluster.&lt;br /&gt;
&lt;br /&gt;
 &amp;gt;&amp;gt; % Run a parfor over 1000 iterations&lt;br /&gt;
 &amp;gt;&amp;gt; parfor idx = 1:1000&lt;br /&gt;
       a(idx) = …&lt;br /&gt;
    end&lt;br /&gt;
&lt;br /&gt;
Once we’re done with the pool, delete it.&lt;br /&gt;
&lt;br /&gt;
 &amp;gt;&amp;gt; % Delete the pool&lt;br /&gt;
 &amp;gt;&amp;gt; pool.delete&lt;br /&gt;
&lt;br /&gt;
==== INDEPENDENT BATCH JOB ====&lt;br /&gt;
&lt;br /&gt;
Use the batch command to submit asynchronous jobs to the cluster.  The batch command will return a job object which is used to access the output of the submitted job.  See the MATLAB documentation for more help on batch.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt;&amp;gt; % Get a handle to the cluster&lt;br /&gt;
&amp;gt;&amp;gt; c = parcluster;&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Submit job to query where MATLAB is running on the cluster&lt;br /&gt;
&amp;gt;&amp;gt; job = c.batch(@pwd, 1, {}, ...&lt;br /&gt;
       &#039;CurrentFolder&#039;,&#039;.&#039;, &#039;AutoAddClientPath&#039;,false);&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Query job for state&lt;br /&gt;
&amp;gt;&amp;gt; job.State&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % If state is finished, fetch the results&lt;br /&gt;
&amp;gt;&amp;gt; job.fetchOutputs{:}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Delete the job after results are no longer needed&lt;br /&gt;
&amp;gt;&amp;gt; job.delete&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
To retrieve a list of currently running or completed jobs, call parcluster to retrieve the cluster object.  The cluster object stores an array of jobs that were run, are running, or are queued to run.  This allows us to fetch the results of completed jobs.  Retrieve and view the list of jobs as shown below.&lt;br /&gt;
 &amp;gt;&amp;gt; c = parcluster;&lt;br /&gt;
 &amp;gt;&amp;gt; jobs = c.Jobs;&lt;br /&gt;
Once we’ve identified the job we want, we can retrieve the results as we’ve done previously. &lt;br /&gt;
fetchOutputs is used to retrieve function output arguments; if calling batch with a script, use load instead.  Data that has been written to files on the cluster needs to be retrieved directly from the file system (e.g. via ftp).&lt;br /&gt;
&lt;br /&gt;
To view results of a previously completed job:&lt;br /&gt;
 &amp;gt;&amp;gt; % Get a handle to the job with ID 2&lt;br /&gt;
 &amp;gt;&amp;gt; job2 = c.Jobs(2);&lt;br /&gt;
&lt;br /&gt;
NOTE: You can view a list of your jobs, as well as their IDs, using the above c.Jobs command.  &lt;br /&gt;
 &amp;gt;&amp;gt; % Fetch results for job with ID 2&lt;br /&gt;
 &amp;gt;&amp;gt; job2.fetchOutputs{:}&lt;br /&gt;
&lt;br /&gt;
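As a sketch of the script-based variant mentioned above (assuming a script file &#039;&#039;&#039;myscript.m&#039;&#039;&#039; exists in the current folder; the name is only illustrative), use load rather than fetchOutputs to retrieve the variables the script created:&lt;br /&gt;
 &amp;gt;&amp;gt; % Submit the (hypothetical) script as a batch job&lt;br /&gt;
 &amp;gt;&amp;gt; job = c.batch(&#039;myscript&#039;, &#039;CurrentFolder&#039;,&#039;.&#039;, &#039;AutoAddClientPath&#039;,false);&lt;br /&gt;
 &amp;gt;&amp;gt; % Wait for the job to finish&lt;br /&gt;
 &amp;gt;&amp;gt; job.wait&lt;br /&gt;
 &amp;gt;&amp;gt; % Load the script&#039;s workspace variables into the client session&lt;br /&gt;
 &amp;gt;&amp;gt; load(job)&lt;br /&gt;
&lt;br /&gt;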
==== PARALLEL BATCH JOB ====&lt;br /&gt;
Users can also submit parallel workflows with the batch command.  Let’s use the following example for a parallel job, which is saved as &#039;&#039;&#039;parallel_example.m&#039;&#039;&#039;.   &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
function [t, A] = parallel_example(iter)&lt;br /&gt;
 &lt;br /&gt;
if nargin==0&lt;br /&gt;
    iter = 8;&lt;br /&gt;
end&lt;br /&gt;
 &lt;br /&gt;
disp(&#039;Start sim&#039;)&lt;br /&gt;
 &lt;br /&gt;
t0 = tic;&lt;br /&gt;
parfor idx = 1:iter&lt;br /&gt;
    A(idx) = idx;&lt;br /&gt;
    pause(2)&lt;br /&gt;
    idx&lt;br /&gt;
end&lt;br /&gt;
t = toc(t0);&lt;br /&gt;
 &lt;br /&gt;
disp(&#039;Sim completed&#039;)&lt;br /&gt;
 &lt;br /&gt;
save RESULTS A&lt;br /&gt;
 &lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
This time, when we use the batch command to run a parallel job, we’ll also specify a MATLAB Pool.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt;&amp;gt; % Get a handle to the cluster&lt;br /&gt;
&amp;gt;&amp;gt; c = parcluster;&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Submit a batch pool job using 4 workers for 16 simulations&lt;br /&gt;
&amp;gt;&amp;gt; job = c.batch(@parallel_example, 1, {16}, &#039;Pool&#039;,4, ...&lt;br /&gt;
       &#039;CurrentFolder&#039;,&#039;.&#039;, &#039;AutoAddClientPath&#039;,false);&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % View current job status&lt;br /&gt;
&amp;gt;&amp;gt; job.State&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Fetch the results after a finished state is retrieved&lt;br /&gt;
&amp;gt;&amp;gt; job.fetchOutputs{:}&lt;br /&gt;
ans = &lt;br /&gt;
	8.8872&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The job ran in 8.89 seconds using four workers.  Note that these jobs will always request N+1 CPU cores, since one worker is required to manage the batch job and pool of workers.   For example, a job that needs eight workers will consume nine CPU cores.&lt;br /&gt;
	&lt;br /&gt;
We’ll run the same simulation but increase the Pool size.  This time, to retrieve the results later, we’ll keep track of the job ID.&lt;br /&gt;
&lt;br /&gt;
NOTE: For some applications, there will be a diminishing return when allocating too many workers, as the overhead may exceed computation time.    &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt;&amp;gt; % Get a handle to the cluster&lt;br /&gt;
&amp;gt;&amp;gt; c = parcluster;&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Submit a batch pool job using 8 workers for 16 simulations&lt;br /&gt;
&amp;gt;&amp;gt; job = c.batch(@parallel_example, 1, {16}, &#039;Pool&#039;, 8, ...&lt;br /&gt;
       &#039;CurrentFolder&#039;,&#039;.&#039;, &#039;AutoAddClientPath&#039;,false);&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Get the job ID&lt;br /&gt;
&amp;gt;&amp;gt; id = job.ID&lt;br /&gt;
id =&lt;br /&gt;
	4&lt;br /&gt;
&amp;gt;&amp;gt; % Clear job from workspace (as though we quit MATLAB)&lt;br /&gt;
&amp;gt;&amp;gt; clear job&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Once we have a handle to the cluster, we’ll call the findJob method to search for the job with the specified job ID.   &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt;&amp;gt; % Get a handle to the cluster&lt;br /&gt;
&amp;gt;&amp;gt; c = parcluster;&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Find the old job&lt;br /&gt;
&amp;gt;&amp;gt; job = c.findJob(&#039;ID&#039;, 4);&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Retrieve the state of the job&lt;br /&gt;
&amp;gt;&amp;gt; job.State&lt;br /&gt;
ans = &lt;br /&gt;
finished&lt;br /&gt;
&amp;gt;&amp;gt; % Fetch the results&lt;br /&gt;
&amp;gt;&amp;gt; job.fetchOutputs{:}&lt;br /&gt;
ans = &lt;br /&gt;
4.7270&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The job now runs in 4.73 seconds using eight workers.  Run the code with different numbers of workers to determine the ideal number to use.&lt;br /&gt;
Alternatively, to retrieve job results via a graphical user interface, use the Job Monitor (Parallel &amp;gt; Monitor Jobs).&lt;br /&gt;
 &lt;br /&gt;
==== DEBUGGING ====&lt;br /&gt;
If a serial job produces an error, call the getDebugLog method to view the error log file.  When submitting independent jobs, with multiple tasks, specify the task number.  &lt;br /&gt;
 &amp;gt;&amp;gt; c.getDebugLog(job.Tasks(3))&lt;br /&gt;
For Pool jobs, only specify the job object.&lt;br /&gt;
 &amp;gt;&amp;gt; c.getDebugLog(job)&lt;br /&gt;
When troubleshooting a job, the cluster admin may request the scheduler ID of the job.  This can be obtained by calling schedID:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt;&amp;gt; schedID(job)&lt;br /&gt;
ans = &lt;br /&gt;
25539&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
==== TO LEARN MORE ====&lt;br /&gt;
To learn more about the MATLAB Parallel Computing Toolbox, check out these resources:&lt;br /&gt;
* [https://www.mathworks.com/help/parallel-computing/examples.html Parallel Computing Coding Examples]&lt;br /&gt;
* [http://www.mathworks.com/help/distcomp/index.html Parallel Computing Documentation]&lt;br /&gt;
* [http://www.mathworks.com/products/parallel-computing/index.html Parallel Computing Overview]&lt;br /&gt;
* [http://www.mathworks.com/products/parallel-computing/tutorials.html Parallel Computing Tutorials]&lt;br /&gt;
* [http://www.mathworks.com/products/parallel-computing/videos.html Parallel Computing Videos]&lt;br /&gt;
* [http://www.mathworks.com/products/parallel-computing/webinars.html Parallel Computing Webinars]&lt;/div&gt;</summary>
		<author><name>Haars001</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=Matlab&amp;diff=2162</id>
		<title>Matlab</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=Matlab&amp;diff=2162"/>
		<updated>2022-05-09T08:45:54Z</updated>

		<summary type="html">&lt;p&gt;Haars001: /* TO LEARN MORE */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;MATLAB is a non-free calculation language owned by Mathworks. The HPC has this installed as part of the WUR academic license, and so this is only available to WUR users.&lt;br /&gt;
&lt;br /&gt;
== Getting Started with Parallel Computing using MATLAB on the Anunna HPC Cluster ==&lt;br /&gt;
&lt;br /&gt;
This document provides the steps to configure MATLAB to submit jobs to a cluster, retrieve results, and debug errors.&lt;br /&gt;
&lt;br /&gt;
=== CONFIGURATION – MATLAB client on the cluster === &lt;br /&gt;
After logging into the cluster, configure MATLAB to run parallel jobs on your cluster by calling the shell script &#039;&#039;&#039;configCluster.sh&#039;&#039;&#039; (after &#039;&#039;&#039;module load matlab&#039;&#039;&#039;).&lt;br /&gt;
This only needs to be called once per version of MATLAB.&lt;br /&gt;
 $ module load matlab&lt;br /&gt;
 $ configCluster.sh&lt;br /&gt;
Jobs will now default to the cluster rather than submit to the login node.&lt;br /&gt;
=== INSTALLATION and CONFIGURATION – MATLAB client on the desktop === &lt;br /&gt;
&lt;br /&gt;
The Anunna MATLAB support package can be found as follows:&lt;br /&gt;
&lt;br /&gt;
Windows: 	TBD&lt;br /&gt;
&lt;br /&gt;
Linux/macOS: 	TBD&lt;br /&gt;
&lt;br /&gt;
Download the appropriate archive file and start MATLAB.  The archive file should be untarred/unzipped in the location returned by calling&lt;br /&gt;
 &amp;gt;&amp;gt; userpath&lt;br /&gt;
Configure MATLAB to run parallel jobs on your cluster by calling &#039;&#039;&#039;configCluster&#039;&#039;&#039;.  &#039;&#039;&#039;configCluster&#039;&#039;&#039; only needs to be called once per version of MATLAB.&lt;br /&gt;
 &amp;gt;&amp;gt; configCluster&lt;br /&gt;
Submission to the remote cluster requires SSH credentials.  You will be prompted for your ssh username and password or identity file (private key).  The username and location of the private key will be stored in MATLAB for future sessions.&lt;br /&gt;
Jobs will now default to the cluster rather than submit to the local machine.&lt;br /&gt;
NOTE: If you would like to submit to the local machine then run the following command:&lt;br /&gt;
 &amp;gt;&amp;gt; % Get a handle to the local resources&lt;br /&gt;
 &amp;gt;&amp;gt; c = parcluster(&#039;local&#039;);&lt;br /&gt;
==== CONFIGURING JOBS ==== &lt;br /&gt;
Prior to submitting the job, we can specify various parameters to pass to our jobs, such as queue, e-mail, walltime, etc.  The following is a partial list of parameters.  See AdditionalProperties for the complete list.  Only MemUsage and WallTime are required. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt;&amp;gt; % Get a handle to the cluster&lt;br /&gt;
&amp;gt;&amp;gt; c = parcluster;&lt;br /&gt;
[REQUIRED]&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Specify memory to use for MATLAB jobs, per core (default: 4gb)&lt;br /&gt;
&amp;gt;&amp;gt; c.AdditionalProperties.MemUsage = &#039;6gb&#039;;&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Specify the walltime (e.g., 5 hours)&lt;br /&gt;
&amp;gt;&amp;gt; c.AdditionalProperties.WallTime = &#039;05:00:00&#039;;&lt;br /&gt;
&lt;br /&gt;
[OPTIONAL]&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Specify an account to use for MATLAB jobs&lt;br /&gt;
&amp;gt;&amp;gt; c.AdditionalProperties.AccountName = &#039;account-name&#039;;&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Assign a comment to the job&lt;br /&gt;
&amp;gt;&amp;gt; c.AdditionalProperties.Comment = &#039;a-comment&#039;;&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Request a specific GPU flavor (e.g., V100)&lt;br /&gt;
&amp;gt;&amp;gt; c.AdditionalProperties.Constraint = &#039;V100&#039;;&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Specify e-mail address to receive notifications about your job&lt;br /&gt;
&amp;gt;&amp;gt; c.AdditionalProperties.EmailAddress = &#039;user-id@wur.nl&#039;;&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Specify number of GPUs&lt;br /&gt;
&amp;gt;&amp;gt; c.AdditionalProperties.GpusPerNode = 1;&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Specify a QoS (default: std)&lt;br /&gt;
&amp;gt;&amp;gt; c.AdditionalProperties.QoS = &#039;the-qos&#039;;&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Specify a queue to use for MATLAB jobs				&lt;br /&gt;
&amp;gt;&amp;gt; c.AdditionalProperties.QueueName = &#039;queue-name&#039;;&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Require exclusive nodes&lt;br /&gt;
&amp;gt;&amp;gt; c.AdditionalProperties.RequireExclusiveNode = true;&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Specify a reservation&lt;br /&gt;
&amp;gt;&amp;gt; c.AdditionalProperties.Reservation = &#039;a-reservation&#039;;&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Request there be (for example) 20 GB of local disk space in /tmp&lt;br /&gt;
&amp;gt;&amp;gt; c.AdditionalProperties.Tmp = &#039;20g&#039;;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Save the profile after modifying AdditionalProperties so that the changes persist between MATLAB sessions.&lt;br /&gt;
 &amp;gt;&amp;gt; c.saveProfile&lt;br /&gt;
&lt;br /&gt;
To see the values of the current configuration options, display AdditionalProperties.&lt;br /&gt;
&lt;br /&gt;
 &amp;gt;&amp;gt; % To view current properties&lt;br /&gt;
 &amp;gt;&amp;gt; c.AdditionalProperties&lt;br /&gt;
&lt;br /&gt;
Unset a value when no longer needed.&lt;br /&gt;
 &amp;gt;&amp;gt; % Turn off email notifications &lt;br /&gt;
 &amp;gt;&amp;gt; c.AdditionalProperties.EmailAddress = &#039;&#039;;&lt;br /&gt;
 &amp;gt;&amp;gt; c.saveProfile&lt;br /&gt;
==== INTERACTIVE JOBS - MATLAB client on the cluster ==== &lt;br /&gt;
To run an interactive pool job on the cluster, continue to use parpool as you’ve done before.&lt;br /&gt;
 &amp;gt;&amp;gt; % Get a handle to the cluster&lt;br /&gt;
 &amp;gt;&amp;gt; c = parcluster;&lt;br /&gt;
&lt;br /&gt;
 &amp;gt;&amp;gt; % Open a pool of 64 workers on the cluster&lt;br /&gt;
 &amp;gt;&amp;gt; pool = c.parpool(64);&lt;br /&gt;
&lt;br /&gt;
Rather than running on the local machine, the pool can now run across multiple nodes on the cluster.&lt;br /&gt;
&lt;br /&gt;
 &amp;gt;&amp;gt; % Run a parfor over 1000 iterations&lt;br /&gt;
 &amp;gt;&amp;gt; parfor idx = 1:1000&lt;br /&gt;
       a(idx) = …&lt;br /&gt;
    end&lt;br /&gt;
&lt;br /&gt;
Once we’re done with the pool, delete it.&lt;br /&gt;
&lt;br /&gt;
 &amp;gt;&amp;gt; % Delete the pool&lt;br /&gt;
 &amp;gt;&amp;gt; pool.delete&lt;br /&gt;
==== INDEPENDENT BATCH JOB ====&lt;br /&gt;
&lt;br /&gt;
Use the batch command to submit asynchronous jobs to the cluster.  The batch command will return a job object which is used to access the output of the submitted job.  See the MATLAB documentation for more help on batch.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt;&amp;gt; % Get a handle to the cluster&lt;br /&gt;
&amp;gt;&amp;gt; c = parcluster;&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Submit job to query where MATLAB is running on the cluster&lt;br /&gt;
&amp;gt;&amp;gt; job = c.batch(@pwd, 1, {}, ...&lt;br /&gt;
       &#039;CurrentFolder&#039;,&#039;.&#039;, &#039;AutoAddClientPath&#039;,false);&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Query job for state&lt;br /&gt;
&amp;gt;&amp;gt; job.State&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % If state is finished, fetch the results&lt;br /&gt;
&amp;gt;&amp;gt; job.fetchOutputs{:}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Delete the job after results are no longer needed&lt;br /&gt;
&amp;gt;&amp;gt; job.delete&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
To retrieve a list of currently running or completed jobs, call parcluster to retrieve the cluster object.  The cluster object stores an array of jobs that were run, are running, or are queued to run.  This allows us to fetch the results of completed jobs.  Retrieve and view the list of jobs as shown below.&lt;br /&gt;
 &amp;gt;&amp;gt; c = parcluster;&lt;br /&gt;
 &amp;gt;&amp;gt; jobs = c.Jobs;&lt;br /&gt;
Once we’ve identified the job we want, we can retrieve the results as we’ve done previously. &lt;br /&gt;
fetchOutputs is used to retrieve function output arguments; if calling batch with a script, use load instead.  Data that has been written to files on the cluster needs to be retrieved directly from the file system (e.g. via ftp).&lt;br /&gt;
&lt;br /&gt;
To view results of a previously completed job:&lt;br /&gt;
 &amp;gt;&amp;gt; % Get a handle to the job with ID 2&lt;br /&gt;
 &amp;gt;&amp;gt; job2 = c.Jobs(2);&lt;br /&gt;
&lt;br /&gt;
NOTE: You can view a list of your jobs, as well as their IDs, using the above c.Jobs command.  &lt;br /&gt;
 &amp;gt;&amp;gt; % Fetch results for job with ID 2&lt;br /&gt;
 &amp;gt;&amp;gt; job2.fetchOutputs{:}&lt;br /&gt;
&lt;br /&gt;
==== PARALLEL BATCH JOB ====&lt;br /&gt;
Users can also submit parallel workflows with the batch command.  Let’s use the following example for a parallel job, which is saved as &#039;&#039;&#039;parallel_example.m&#039;&#039;&#039;.   &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
function [t, A] = parallel_example(iter)&lt;br /&gt;
 &lt;br /&gt;
if nargin==0&lt;br /&gt;
    iter = 8;&lt;br /&gt;
end&lt;br /&gt;
 &lt;br /&gt;
disp(&#039;Start sim&#039;)&lt;br /&gt;
 &lt;br /&gt;
t0 = tic;&lt;br /&gt;
parfor idx = 1:iter&lt;br /&gt;
    A(idx) = idx;&lt;br /&gt;
    pause(2)&lt;br /&gt;
    idx&lt;br /&gt;
end&lt;br /&gt;
t = toc(t0);&lt;br /&gt;
 &lt;br /&gt;
disp(&#039;Sim completed&#039;)&lt;br /&gt;
 &lt;br /&gt;
save RESULTS A&lt;br /&gt;
 &lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
This time, when we use the batch command to run a parallel job, we’ll also specify a MATLAB Pool.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt;&amp;gt; % Get a handle to the cluster&lt;br /&gt;
&amp;gt;&amp;gt; c = parcluster;&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Submit a batch pool job using 4 workers for 16 simulations&lt;br /&gt;
&amp;gt;&amp;gt; job = c.batch(@parallel_example, 1, {16}, &#039;Pool&#039;,4, ...&lt;br /&gt;
       &#039;CurrentFolder&#039;,&#039;.&#039;, &#039;AutoAddClientPath&#039;,false);&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % View current job status&lt;br /&gt;
&amp;gt;&amp;gt; job.State&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Fetch the results after a finished state is retrieved&lt;br /&gt;
&amp;gt;&amp;gt; job.fetchOutputs{:}&lt;br /&gt;
ans = &lt;br /&gt;
	8.8872&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The job ran in 8.89 seconds using four workers.  Note that these jobs will always request N+1 CPU cores, since one worker is required to manage the batch job and pool of workers.   For example, a job that needs eight workers will consume nine CPU cores.&lt;br /&gt;
	&lt;br /&gt;
We’ll run the same simulation but increase the Pool size.  This time, to retrieve the results later, we’ll keep track of the job ID.&lt;br /&gt;
&lt;br /&gt;
NOTE: For some applications, there will be a diminishing return when allocating too many workers, as the overhead may exceed computation time.    &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt;&amp;gt; % Get a handle to the cluster&lt;br /&gt;
&amp;gt;&amp;gt; c = parcluster;&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Submit a batch pool job using 8 workers for 16 simulations&lt;br /&gt;
&amp;gt;&amp;gt; job = c.batch(@parallel_example, 1, {16}, &#039;Pool&#039;, 8, ...&lt;br /&gt;
       &#039;CurrentFolder&#039;,&#039;.&#039;, &#039;AutoAddClientPath&#039;,false);&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Get the job ID&lt;br /&gt;
&amp;gt;&amp;gt; id = job.ID&lt;br /&gt;
id =&lt;br /&gt;
	4&lt;br /&gt;
&amp;gt;&amp;gt; % Clear job from workspace (as though we quit MATLAB)&lt;br /&gt;
&amp;gt;&amp;gt; clear job&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Once we have a handle to the cluster, we’ll call the findJob method to search for the job with the specified job ID.   &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt;&amp;gt; % Get a handle to the cluster&lt;br /&gt;
&amp;gt;&amp;gt; c = parcluster;&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Find the old job&lt;br /&gt;
&amp;gt;&amp;gt; job = c.findJob(&#039;ID&#039;, 4);&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Retrieve the state of the job&lt;br /&gt;
&amp;gt;&amp;gt; job.State&lt;br /&gt;
ans = &lt;br /&gt;
finished&lt;br /&gt;
&amp;gt;&amp;gt; % Fetch the results&lt;br /&gt;
&amp;gt;&amp;gt; job.fetchOutputs{:}&lt;br /&gt;
ans = &lt;br /&gt;
4.7270&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The job now runs in 4.73 seconds using eight workers.  Run the code with different numbers of workers to determine the ideal number to use.&lt;br /&gt;
Alternatively, to retrieve job results via a graphical user interface, use the Job Monitor (Parallel &amp;gt; Monitor Jobs).&lt;br /&gt;
 &lt;br /&gt;
==== DEBUGGING ====&lt;br /&gt;
If a serial job produces an error, call the getDebugLog method to view the error log file.  When submitting independent jobs, with multiple tasks, specify the task number.  &lt;br /&gt;
 &amp;gt;&amp;gt; c.getDebugLog(job.Tasks(3))&lt;br /&gt;
For Pool jobs, only specify the job object.&lt;br /&gt;
 &amp;gt;&amp;gt; c.getDebugLog(job)&lt;br /&gt;
When troubleshooting a job, the cluster admin may request the scheduler ID of the job.  This can be obtained by calling schedID:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt;&amp;gt; schedID(job)&lt;br /&gt;
ans = &lt;br /&gt;
25539&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
==== TO LEARN MORE ====&lt;br /&gt;
To learn more about the MATLAB Parallel Computing Toolbox, check out these resources:&lt;br /&gt;
* [https://www.mathworks.com/help/parallel-computing/examples.html Parallel Computing Coding Examples]&lt;br /&gt;
* [http://www.mathworks.com/help/distcomp/index.html Parallel Computing Documentation]&lt;br /&gt;
* [http://www.mathworks.com/products/parallel-computing/index.html Parallel Computing Overview]&lt;br /&gt;
* [http://www.mathworks.com/products/parallel-computing/tutorials.html Parallel Computing Tutorials]&lt;br /&gt;
* [http://www.mathworks.com/products/parallel-computing/videos.html Parallel Computing Videos]&lt;br /&gt;
* [http://www.mathworks.com/products/parallel-computing/webinars.html Parallel Computing Webinars]&lt;/div&gt;</summary>
		<author><name>Haars001</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=Matlab&amp;diff=2161</id>
		<title>Matlab</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=Matlab&amp;diff=2161"/>
		<updated>2022-05-09T08:44:50Z</updated>

		<summary type="html">&lt;p&gt;Haars001: Markup improved&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;MATLAB is a non-free calculation language owned by Mathworks. The HPC has this installed as part of the WUR academic license, and so this is only available to WUR users.&lt;br /&gt;
&lt;br /&gt;
== Getting Started with Parallel Computing using MATLAB on the Anunna HPC Cluster ==&lt;br /&gt;
&lt;br /&gt;
This document provides the steps to configure MATLAB to submit jobs to a cluster, retrieve results, and debug errors.&lt;br /&gt;
&lt;br /&gt;
=== CONFIGURATION – MATLAB client on the cluster === &lt;br /&gt;
After logging into the cluster, configure MATLAB to run parallel jobs on your cluster by calling the shell script &#039;&#039;&#039;configCluster.sh&#039;&#039;&#039; (after &#039;&#039;&#039;module load matlab&#039;&#039;&#039;).&lt;br /&gt;
This only needs to be called once per version of MATLAB.&lt;br /&gt;
 $ module load matlab&lt;br /&gt;
 $ configCluster.sh&lt;br /&gt;
Jobs will now default to the cluster rather than submit to the login node.&lt;br /&gt;
=== INSTALLATION and CONFIGURATION – MATLAB client on the desktop === &lt;br /&gt;
&lt;br /&gt;
The Anunna MATLAB support package can be found as follows:&lt;br /&gt;
&lt;br /&gt;
Windows: 	TBD&lt;br /&gt;
&lt;br /&gt;
Linux/macOS: 	TBD&lt;br /&gt;
&lt;br /&gt;
Download the appropriate archive file and start MATLAB.  The archive file should be untarred/unzipped in the location returned by calling&lt;br /&gt;
 &amp;gt;&amp;gt; userpath&lt;br /&gt;
Configure MATLAB to run parallel jobs on your cluster by calling &#039;&#039;&#039;configCluster&#039;&#039;&#039;.  &#039;&#039;&#039;configCluster&#039;&#039;&#039; only needs to be called once per version of MATLAB.&lt;br /&gt;
 &amp;gt;&amp;gt; configCluster&lt;br /&gt;
Submission to the remote cluster requires SSH credentials.  You will be prompted for your ssh username and password or identity file (private key).  The username and location of the private key will be stored in MATLAB for future sessions.&lt;br /&gt;
Jobs will now default to the cluster rather than submit to the local machine.&lt;br /&gt;
NOTE: If you would like to submit to the local machine then run the following command:&lt;br /&gt;
 &amp;gt;&amp;gt; % Get a handle to the local resources&lt;br /&gt;
 &amp;gt;&amp;gt; c = parcluster(&#039;local&#039;);&lt;br /&gt;
==== CONFIGURING JOBS ==== &lt;br /&gt;
Prior to submitting the job, we can specify various parameters to pass to our jobs, such as queue, e-mail, walltime, etc.  The following is a partial list of parameters.  See AdditionalProperties for the complete list.  Only MemUsage and WallTime are required. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt;&amp;gt; % Get a handle to the cluster&lt;br /&gt;
&amp;gt;&amp;gt; c = parcluster;&lt;br /&gt;
[REQUIRED]&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Specify memory to use for MATLAB jobs, per core (default: 4gb)&lt;br /&gt;
&amp;gt;&amp;gt; c.AdditionalProperties.MemUsage = &#039;6gb&#039;;&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Specify the walltime (e.g., 5 hours)&lt;br /&gt;
&amp;gt;&amp;gt; c.AdditionalProperties.WallTime = &#039;05:00:00&#039;;&lt;br /&gt;
&lt;br /&gt;
[OPTIONAL]&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Specify an account to use for MATLAB jobs&lt;br /&gt;
&amp;gt;&amp;gt; c.AdditionalProperties.AccountName = &#039;account-name&#039;;&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Assign a comment to the job&lt;br /&gt;
&amp;gt;&amp;gt; c.AdditionalProperties.Comment = &#039;a-comment&#039;;&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Request a specific GPU flavor (e.g., V100)&lt;br /&gt;
&amp;gt;&amp;gt; c.AdditionalProperties.Constraint = &#039;V100&#039;;&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Specify e-mail address to receive notifications about your job&lt;br /&gt;
&amp;gt;&amp;gt; c.AdditionalProperties.EmailAddress = &#039;user-id@wur.nl&#039;;&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Specify number of GPUs&lt;br /&gt;
&amp;gt;&amp;gt; c.AdditionalProperties.GpusPerNode = 1;&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Specify a QoS (default: std)&lt;br /&gt;
&amp;gt;&amp;gt; c.AdditionalProperties.QoS = &#039;the-qos&#039;;&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Specify a queue to use for MATLAB jobs				&lt;br /&gt;
&amp;gt;&amp;gt; c.AdditionalProperties.QueueName = &#039;queue-name&#039;;&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Require exclusive nodes&lt;br /&gt;
&amp;gt;&amp;gt; c.AdditionalProperties.RequireExclusiveNode = true;&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Specify a reservation&lt;br /&gt;
&amp;gt;&amp;gt; c.AdditionalProperties.Reservation = &#039;a-reservation&#039;;&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Request there be (for example) 20 GB of local disk space in /tmp&lt;br /&gt;
&amp;gt;&amp;gt; c.AdditionalProperties.Tmp = &#039;20gb&#039;;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Save changes after modifying AdditionalProperties for the above changes to persist between MATLAB sessions.&lt;br /&gt;
 &amp;gt;&amp;gt; c.saveProfile&lt;br /&gt;
&lt;br /&gt;
To see the values of the current configuration options, display AdditionalProperties.&lt;br /&gt;
&lt;br /&gt;
 &amp;gt;&amp;gt; % To view current properties&lt;br /&gt;
 &amp;gt;&amp;gt; c.AdditionalProperties&lt;br /&gt;
&lt;br /&gt;
Unset a value when no longer needed.&lt;br /&gt;
 &amp;gt;&amp;gt; % Turn off email notifications &lt;br /&gt;
 &amp;gt;&amp;gt; c.AdditionalProperties.EmailAddress = &#039;&#039;;&lt;br /&gt;
 &amp;gt;&amp;gt; c.saveProfile&lt;br /&gt;
==== INTERACTIVE JOBS - MATLAB client on the cluster ==== &lt;br /&gt;
To run an interactive pool job on the cluster, continue to use parpool as you’ve done before.&lt;br /&gt;
 &amp;gt;&amp;gt; % Get a handle to the cluster&lt;br /&gt;
 &amp;gt;&amp;gt; c = parcluster;&lt;br /&gt;
&lt;br /&gt;
 &amp;gt;&amp;gt; % Open a pool of 64 workers on the cluster&lt;br /&gt;
 &amp;gt;&amp;gt; pool = c.parpool(64);&lt;br /&gt;
&lt;br /&gt;
Rather than running on the local machine, the pool can now run across multiple nodes on the cluster.&lt;br /&gt;
&lt;br /&gt;
 &amp;gt;&amp;gt; % Run a parfor over 1000 iterations&lt;br /&gt;
 &amp;gt;&amp;gt; parfor idx = 1:1000&lt;br /&gt;
       a(idx) = …&lt;br /&gt;
    end&lt;br /&gt;
&lt;br /&gt;
Once we’re done with the pool, delete it.&lt;br /&gt;
&lt;br /&gt;
 &amp;gt;&amp;gt; % Delete the pool&lt;br /&gt;
 &amp;gt;&amp;gt; pool.delete&lt;br /&gt;
==== INDEPENDENT BATCH JOB ====&lt;br /&gt;
&lt;br /&gt;
Use the batch command to submit asynchronous jobs to the cluster.  The batch command will return a job object which is used to access the output of the submitted job.  See the MATLAB documentation for more help on batch.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt;&amp;gt; % Get a handle to the cluster&lt;br /&gt;
&amp;gt;&amp;gt; c = parcluster;&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Submit job to query where MATLAB is running on the cluster&lt;br /&gt;
&amp;gt;&amp;gt; job = c.batch(@pwd, 1, {}, …&lt;br /&gt;
       &#039;CurrentFolder&#039;,&#039;.&#039;, &#039;AutoAddClientPath&#039;,false);&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Query job for state&lt;br /&gt;
&amp;gt;&amp;gt; job.State&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % If state is finished, fetch the results&lt;br /&gt;
&amp;gt;&amp;gt; job.fetchOutputs{:}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Delete the job after results are no longer needed&lt;br /&gt;
&amp;gt;&amp;gt; job.delete&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
To retrieve a list of currently running or completed jobs, call parcluster to retrieve the cluster object.  The cluster object stores an array of jobs that were run, are running, or are queued to run.  This allows us to fetch the results of completed jobs.  Retrieve and view the list of jobs as shown below.&lt;br /&gt;
 &amp;gt;&amp;gt; c = parcluster;&lt;br /&gt;
 &amp;gt;&amp;gt; jobs = c.Jobs;&lt;br /&gt;
Once we’ve identified the job we want, we can retrieve the results as we’ve done previously. &lt;br /&gt;
fetchOutputs is used to retrieve function output arguments; if calling batch with a script, use load instead.   Data that has been written to files on the cluster needs to be retrieved directly from the file system (e.g., via sftp).&lt;br /&gt;
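If you called batch with a script instead of a function handle, a minimal sketch of the retrieval pattern looks like this (the script name my_script is illustrative):&lt;br /&gt;

```matlab
% Sketch: submit a script-based batch job and load its workspace variables.
% 'my_script' is an illustrative name for a script on the MATLAB path.
c = parcluster;
job = c.batch('my_script', 'CurrentFolder', '.', 'AutoAddClientPath', false);
wait(job)    % block until the job reaches the finished state
load(job)    % copy the script's workspace variables into the client session
```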
&lt;br /&gt;
To view results of a previously completed job:&lt;br /&gt;
 &amp;gt;&amp;gt; % Get a handle to the job with ID 2&lt;br /&gt;
 &amp;gt;&amp;gt; job2 = c.Jobs(2);&lt;br /&gt;
&lt;br /&gt;
NOTE: You can view a list of your jobs, as well as their IDs, using the above c.Jobs command.  &lt;br /&gt;
 &amp;gt;&amp;gt; % Fetch results for job with ID 2&lt;br /&gt;
 &amp;gt;&amp;gt; job2.fetchOutputs{:}&lt;br /&gt;
&lt;br /&gt;
==== PARALLEL BATCH JOB ====&lt;br /&gt;
Users can also submit parallel workflows with the batch command.  Let’s use the following example for a parallel job, which is saved as &#039;&#039;&#039;parallel_example.m&#039;&#039;&#039;.   &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
function [t, A] = parallel_example(iter)&lt;br /&gt;
 &lt;br /&gt;
if nargin==0&lt;br /&gt;
    iter = 8;&lt;br /&gt;
end&lt;br /&gt;
 &lt;br /&gt;
disp(&#039;Start sim&#039;)&lt;br /&gt;
 &lt;br /&gt;
t0 = tic;&lt;br /&gt;
parfor idx = 1:iter&lt;br /&gt;
    A(idx) = idx;&lt;br /&gt;
    pause(2)&lt;br /&gt;
    idx&lt;br /&gt;
end&lt;br /&gt;
t = toc(t0);&lt;br /&gt;
 &lt;br /&gt;
disp(&#039;Sim completed&#039;)&lt;br /&gt;
 &lt;br /&gt;
save RESULTS A&lt;br /&gt;
 &lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
This time, when we use the batch command to run a parallel job, we’ll also specify a MATLAB Pool.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt;&amp;gt; % Get a handle to the cluster&lt;br /&gt;
&amp;gt;&amp;gt; c = parcluster;&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Submit a batch pool job using 4 workers for 16 simulations&lt;br /&gt;
&amp;gt;&amp;gt; job = c.batch(@parallel_example, 1, {16}, &#039;Pool&#039;,4, …&lt;br /&gt;
       &#039;CurrentFolder&#039;,&#039;.&#039;, &#039;AutoAddClientPath&#039;,false);&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % View current job status&lt;br /&gt;
&amp;gt;&amp;gt; job.State&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Fetch the results after a finished state is retrieved&lt;br /&gt;
&amp;gt;&amp;gt; job.fetchOutputs{:}&lt;br /&gt;
ans = &lt;br /&gt;
	8.8872&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The job ran in 8.89 seconds using four workers.  Note that these jobs will always request N+1 CPU cores, since one worker is required to manage the batch job and pool of workers.   For example, a job that needs eight workers will consume nine CPU cores.&lt;br /&gt;
	&lt;br /&gt;
We’ll run the same simulation but increase the Pool size.  This time, to retrieve the results later, we’ll keep track of the job ID.&lt;br /&gt;
&lt;br /&gt;
NOTE: For some applications, there are diminishing returns when allocating too many workers, as the communication overhead may exceed the computation time.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt;&amp;gt; % Get a handle to the cluster&lt;br /&gt;
&amp;gt;&amp;gt; c = parcluster;&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Submit a batch pool job using 8 workers for 16 simulations&lt;br /&gt;
&amp;gt;&amp;gt; job = c.batch(@parallel_example, 1, {16}, &#039;Pool&#039;, 8, …&lt;br /&gt;
       &#039;CurrentFolder&#039;,&#039;.&#039;, &#039;AutoAddClientPath&#039;,false);&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Get the job ID&lt;br /&gt;
&amp;gt;&amp;gt; id = job.ID&lt;br /&gt;
id =&lt;br /&gt;
	4&lt;br /&gt;
&amp;gt;&amp;gt; % Clear job from workspace (as though we quit MATLAB)&lt;br /&gt;
&amp;gt;&amp;gt; clear job&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Once we have a handle to the cluster, we’ll call the findJob method to search for the job with the specified job ID.   &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt;&amp;gt; % Get a handle to the cluster&lt;br /&gt;
&amp;gt;&amp;gt; c = parcluster;&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Find the old job&lt;br /&gt;
&amp;gt;&amp;gt; job = c.findJob(&#039;ID&#039;, 4);&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Retrieve the state of the job&lt;br /&gt;
&amp;gt;&amp;gt; job.State&lt;br /&gt;
ans = &lt;br /&gt;
finished&lt;br /&gt;
&amp;gt;&amp;gt; % Fetch the results&lt;br /&gt;
&amp;gt;&amp;gt; job.fetchOutputs{:};&lt;br /&gt;
ans = &lt;br /&gt;
4.7270&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The job now runs in 4.73 seconds using eight workers.  Run the code with different numbers of workers to determine the ideal number to use.&lt;br /&gt;
Alternatively, to retrieve job results via a graphical user interface, use the Job Monitor (Parallel &amp;gt; Monitor Jobs).&lt;br /&gt;
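To act on the advice above, a small sweep over pool sizes can time the same job at each size (a sketch only; runtimes and sensible pool sizes depend on the cluster and the workload):&lt;br /&gt;

```matlab
% Sketch: run parallel_example with several pool sizes and report the runtime.
c = parcluster;
for w = [2 4 8]
    job = c.batch(@parallel_example, 1, {16}, 'Pool', w, ...
        'CurrentFolder', '.', 'AutoAddClientPath', false);
    wait(job)                    % block until this job finishes
    out = job.fetchOutputs;      % one requested output: the elapsed time t
    fprintf('%2d workers: %.2f s\n', w, out{1})
    job.delete                   % clean up before the next submission
end
```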
 &lt;br /&gt;
==== DEBUGGING ====&lt;br /&gt;
If a serial job produces an error, call the getDebugLog method to view the error log file.  When submitting independent jobs with multiple tasks, specify the task number.  &lt;br /&gt;
 &amp;gt;&amp;gt; c.getDebugLog(job.Tasks(3))&lt;br /&gt;
For Pool jobs, only specify the job object.&lt;br /&gt;
 &amp;gt;&amp;gt; c.getDebugLog(job)&lt;br /&gt;
When troubleshooting a job, the cluster admin may request the scheduler ID of the job.  This can be obtained by calling schedID:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt;&amp;gt; schedID(job)&lt;br /&gt;
ans = &lt;br /&gt;
25539&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
==== TO LEARN MORE ====&lt;br /&gt;
To learn more about the MATLAB Parallel Computing Toolbox, check out these resources:&lt;br /&gt;
•	[https://www.mathworks.com/help/parallel-computing/examples.html Parallel Computing Coding Examples]&lt;br /&gt;
•	[http://www.mathworks.com/help/distcomp/index.html Parallel Computing Documentation]&lt;br /&gt;
•	[http://www.mathworks.com/products/parallel-computing/index.html Parallel Computing Overview]&lt;br /&gt;
•	[http://www.mathworks.com/products/parallel-computing/tutorials.html Parallel Computing Tutorials]&lt;br /&gt;
•	[http://www.mathworks.com/products/parallel-computing/videos.html Parallel Computing Videos]&lt;br /&gt;
•	[http://www.mathworks.com/products/parallel-computing/webinars.html Parallel Computing Webinars]&lt;/div&gt;</summary>
		<author><name>Haars001</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=Matlab&amp;diff=2160</id>
		<title>Matlab</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=Matlab&amp;diff=2160"/>
		<updated>2022-05-09T08:30:45Z</updated>

		<summary type="html">&lt;p&gt;Haars001: Replaced with content provided by Mathworks&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;MATLAB is a non-free calculation language owned by Mathworks. The HPC has this installed as part of the WUR academic license, and so this is only available to WUR users.&lt;br /&gt;
&lt;br /&gt;
== Getting Started with Parallel Computing using MATLAB on the Anunna HPC Cluster ==&lt;br /&gt;
&lt;br /&gt;
This document provides the steps to configure MATLAB to submit jobs to a cluster, retrieve results, and debug errors.&lt;br /&gt;
CONFIGURATION – MATLAB client on the cluster&lt;br /&gt;
After logging into the cluster, configure MATLAB to run parallel jobs on your cluster by calling the shell script configCluster.sh   This only needs to be called once per version of MATLAB.&lt;br /&gt;
$ module load matlab&lt;br /&gt;
$ configCluster.sh&lt;br /&gt;
Jobs will now default to the cluster rather than submit to the local machine.&lt;br /&gt;
INSTALLATION and CONFIGURATION – MATLAB client on the desktop&lt;br /&gt;
The Anunna MATLAB support package can be found as follows&lt;br /&gt;
Windows: 	TBD&lt;br /&gt;
Linux/macOS: 	TBD&lt;br /&gt;
&lt;br /&gt;
Download the appropriate archive file and start MATLAB.  The archive file should be untarred/unzipped in the location returned by calling&lt;br /&gt;
&amp;gt;&amp;gt; userpath&lt;br /&gt;
Configure MATLAB to run parallel jobs on your cluster by calling configCluster.  configCluster only needs to be called once per version of MATLAB.&lt;br /&gt;
&amp;gt;&amp;gt; configCluster&lt;br /&gt;
Submission to the remote cluster requires SSH credentials.  You will be prompted for your ssh username and password or identity file (private key).  The username and location of the private key will be stored in MATLAB for future sessions.&lt;br /&gt;
Jobs will now default to the cluster rather than submit to the local machine.&lt;br /&gt;
NOTE: If you would like to submit to the local machine, run the following command:&lt;br /&gt;
&amp;gt;&amp;gt; % Get a handle to the local resources&lt;br /&gt;
&amp;gt;&amp;gt; c = parcluster(&#039;local&#039;);&lt;br /&gt;
CONFIGURING JOBS&lt;br /&gt;
Prior to submitting the job, we can specify various parameters to pass to our jobs, such as queue, e-mail, walltime, etc.  The following is a partial list of parameters.  See AdditionalProperties for the complete list.  Only MemUsage and WallTime are required. &lt;br /&gt;
&amp;gt;&amp;gt; % Get a handle to the cluster&lt;br /&gt;
&amp;gt;&amp;gt; c = parcluster;&lt;br /&gt;
[REQUIRED]&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Specify memory to use for MATLAB jobs, per core (default: 4gb)&lt;br /&gt;
&amp;gt;&amp;gt; c.AdditionalProperties.MemUsage = &#039;6gb&#039;;&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Specify the walltime (e.g., 5 hours)&lt;br /&gt;
&amp;gt;&amp;gt; c.AdditionalProperties.WallTime = &#039;05:00:00&#039;;&lt;br /&gt;
&lt;br /&gt;
[OPTIONAL]&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Specify an account to use for MATLAB jobs&lt;br /&gt;
&amp;gt;&amp;gt; c.AdditionalProperties.AccountName = &#039;account-name&#039;;&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Assign a comment to the job&lt;br /&gt;
&amp;gt;&amp;gt; c.AdditionalProperties.Comment = &#039;a-comment&#039;;&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Request a specific GPU flavor (e.g., V100)&lt;br /&gt;
&amp;gt;&amp;gt; c.AdditionalProperties.Constraint = &#039;V100&#039;;&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Specify e-mail address to receive notifications about your job&lt;br /&gt;
&amp;gt;&amp;gt; c.AdditionalProperties.EmailAddress = &#039;user-id@wur.nl&#039;;&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Specify number of GPUs&lt;br /&gt;
&amp;gt;&amp;gt; c.AdditionalProperties.GpusPerNode = 1;&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Specify a QoS (default: std)&lt;br /&gt;
&amp;gt;&amp;gt; c.AdditionalProperties.QoS = &#039;the-qos&#039;;&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Specify a queue to use for MATLAB jobs				&lt;br /&gt;
&amp;gt;&amp;gt; c.AdditionalProperties.QueueName = &#039;queue-name&#039;;&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Require exclusive nodes&lt;br /&gt;
&amp;gt;&amp;gt; c.AdditionalProperties.RequireExclusiveNode = true;&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Specify a reservation&lt;br /&gt;
&amp;gt;&amp;gt; c.AdditionalProperties.Reservation = &#039;a-reservation&#039;;&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Request there be (for example) 20 GB of local disk space in /tmp&lt;br /&gt;
&amp;gt;&amp;gt; c.AdditionalProperties.Tmp = &#039;20gb&#039;;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Save changes after modifying AdditionalProperties for the above changes to persist between MATLAB sessions.&lt;br /&gt;
&amp;gt;&amp;gt; c.saveProfile&lt;br /&gt;
&lt;br /&gt;
To see the values of the current configuration options, display AdditionalProperties.&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % To view current properties&lt;br /&gt;
&amp;gt;&amp;gt; c.AdditionalProperties&lt;br /&gt;
&lt;br /&gt;
Unset a value when no longer needed.&lt;br /&gt;
&amp;gt;&amp;gt; % Turn off email notifications &lt;br /&gt;
&amp;gt;&amp;gt; c.AdditionalProperties.EmailAddress = &#039;&#039;;&lt;br /&gt;
&amp;gt;&amp;gt; c.saveProfile&lt;br /&gt;
INTERACTIVE JOBS - MATLAB client on the cluster&lt;br /&gt;
To run an interactive pool job on the cluster, continue to use parpool as you’ve done before.&lt;br /&gt;
&amp;gt;&amp;gt; % Get a handle to the cluster&lt;br /&gt;
&amp;gt;&amp;gt; c = parcluster;&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Open a pool of 64 workers on the cluster&lt;br /&gt;
&amp;gt;&amp;gt; pool = c.parpool(64);&lt;br /&gt;
&lt;br /&gt;
Rather than running on the local machine, the pool can now run across multiple nodes on the cluster.&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Run a parfor over 1000 iterations&lt;br /&gt;
&amp;gt;&amp;gt; parfor idx = 1:1000&lt;br /&gt;
      a(idx) = …&lt;br /&gt;
   end&lt;br /&gt;
&lt;br /&gt;
Once we’re done with the pool, delete it.&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Delete the pool&lt;br /&gt;
&amp;gt;&amp;gt; pool.delete&lt;br /&gt;
INDEPENDENT BATCH JOB&lt;br /&gt;
Use the batch command to submit asynchronous jobs to the cluster.  The batch command will return a job object which is used to access the output of the submitted job.  See the MATLAB documentation for more help on batch.&lt;br /&gt;
&amp;gt;&amp;gt; % Get a handle to the cluster&lt;br /&gt;
&amp;gt;&amp;gt; c = parcluster;&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Submit job to query where MATLAB is running on the cluster&lt;br /&gt;
&amp;gt;&amp;gt; job = c.batch(@pwd, 1, {}, …&lt;br /&gt;
       &#039;CurrentFolder&#039;,&#039;.&#039;, &#039;AutoAddClientPath&#039;,false);&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Query job for state&lt;br /&gt;
&amp;gt;&amp;gt; job.State&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % If state is finished, fetch the results&lt;br /&gt;
&amp;gt;&amp;gt; job.fetchOutputs{:}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Delete the job after results are no longer needed&lt;br /&gt;
&amp;gt;&amp;gt; job.delete&lt;br /&gt;
&lt;br /&gt;
To retrieve a list of currently running or completed jobs, call parcluster to retrieve the cluster object.  The cluster object stores an array of jobs that were run, are running, or are queued to run.  This allows us to fetch the results of completed jobs.  Retrieve and view the list of jobs as shown below.&lt;br /&gt;
&amp;gt;&amp;gt; c = parcluster;&lt;br /&gt;
&amp;gt;&amp;gt; jobs = c.Jobs;&lt;br /&gt;
Once we’ve identified the job we want, we can retrieve the results as we’ve done previously. &lt;br /&gt;
fetchOutputs is used to retrieve function output arguments; if calling batch with a script, use load instead.   Data that has been written to files on the cluster needs to be retrieved directly from the file system (e.g., via sftp).&lt;br /&gt;
To view results of a previously completed job:&lt;br /&gt;
&amp;gt;&amp;gt; % Get a handle to the job with ID 2&lt;br /&gt;
&amp;gt;&amp;gt; job2 = c.Jobs(2);&lt;br /&gt;
&lt;br /&gt;
NOTE: You can view a list of your jobs, as well as their IDs, using the above c.Jobs command.  &lt;br /&gt;
&amp;gt;&amp;gt; % Fetch results for job with ID 2&lt;br /&gt;
&amp;gt;&amp;gt; job2.fetchOutputs{:}&lt;br /&gt;
PARALLEL BATCH JOB&lt;br /&gt;
Users can also submit parallel workflows with the batch command.  Let’s use the following example for a parallel job, which is saved as parallel_example.m.   &lt;br /&gt;
function [t, A] = parallel_example(iter)&lt;br /&gt;
 &lt;br /&gt;
if nargin==0&lt;br /&gt;
    iter = 8;&lt;br /&gt;
end&lt;br /&gt;
 &lt;br /&gt;
disp(&#039;Start sim&#039;)&lt;br /&gt;
 &lt;br /&gt;
t0 = tic;&lt;br /&gt;
parfor idx = 1:iter&lt;br /&gt;
    A(idx) = idx;&lt;br /&gt;
    pause(2)&lt;br /&gt;
    idx&lt;br /&gt;
end&lt;br /&gt;
t = toc(t0);&lt;br /&gt;
 &lt;br /&gt;
disp(&#039;Sim completed&#039;)&lt;br /&gt;
 &lt;br /&gt;
save RESULTS A&lt;br /&gt;
 &lt;br /&gt;
end&lt;br /&gt;
&lt;br /&gt;
This time, when we use the batch command to run a parallel job, we’ll also specify a MATLAB Pool.&lt;br /&gt;
&amp;gt;&amp;gt; % Get a handle to the cluster&lt;br /&gt;
&amp;gt;&amp;gt; c = parcluster;&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Submit a batch pool job using 4 workers for 16 simulations&lt;br /&gt;
&amp;gt;&amp;gt; job = c.batch(@parallel_example, 1, {16}, &#039;Pool&#039;,4, …&lt;br /&gt;
       &#039;CurrentFolder&#039;,&#039;.&#039;, &#039;AutoAddClientPath&#039;,false);&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % View current job status&lt;br /&gt;
&amp;gt;&amp;gt; job.State&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Fetch the results after a finished state is retrieved&lt;br /&gt;
&amp;gt;&amp;gt; job.fetchOutputs{:}&lt;br /&gt;
ans = &lt;br /&gt;
	8.8872&lt;br /&gt;
The job ran in 8.89 seconds using four workers.  Note that these jobs will always request N+1 CPU cores, since one worker is required to manage the batch job and pool of workers.   For example, a job that needs eight workers will consume nine CPU cores.  	&lt;br /&gt;
We’ll run the same simulation but increase the Pool size.  This time, to retrieve the results later, we’ll keep track of the job ID.&lt;br /&gt;
NOTE: For some applications, there are diminishing returns when allocating too many workers, as the communication overhead may exceed the computation time.&lt;br /&gt;
&amp;gt;&amp;gt; % Get a handle to the cluster&lt;br /&gt;
&amp;gt;&amp;gt; c = parcluster;&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Submit a batch pool job using 8 workers for 16 simulations&lt;br /&gt;
&amp;gt;&amp;gt; job = c.batch(@parallel_example, 1, {16}, &#039;Pool&#039;, 8, …&lt;br /&gt;
       &#039;CurrentFolder&#039;,&#039;.&#039;, &#039;AutoAddClientPath&#039;,false);&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Get the job ID&lt;br /&gt;
&amp;gt;&amp;gt; id = job.ID&lt;br /&gt;
id =&lt;br /&gt;
	4&lt;br /&gt;
&amp;gt;&amp;gt; % Clear job from workspace (as though we quit MATLAB)&lt;br /&gt;
&amp;gt;&amp;gt; clear job&lt;br /&gt;
Once we have a handle to the cluster, we’ll call the findJob method to search for the job with the specified job ID.   &lt;br /&gt;
&amp;gt;&amp;gt; % Get a handle to the cluster&lt;br /&gt;
&amp;gt;&amp;gt; c = parcluster;&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Find the old job&lt;br /&gt;
&amp;gt;&amp;gt; job = c.findJob(&#039;ID&#039;, 4);&lt;br /&gt;
&lt;br /&gt;
&amp;gt;&amp;gt; % Retrieve the state of the job&lt;br /&gt;
&amp;gt;&amp;gt; job.State&lt;br /&gt;
ans = &lt;br /&gt;
finished&lt;br /&gt;
&amp;gt;&amp;gt; % Fetch the results&lt;br /&gt;
&amp;gt;&amp;gt; job.fetchOutputs{:};&lt;br /&gt;
ans = &lt;br /&gt;
4.7270&lt;br /&gt;
The job now runs in 4.73 seconds using eight workers.  Run the code with different numbers of workers to determine the ideal number to use.&lt;br /&gt;
Alternatively, to retrieve job results via a graphical user interface, use the Job Monitor (Parallel &amp;gt; Monitor Jobs).&lt;br /&gt;
 &lt;br /&gt;
DEBUGGING&lt;br /&gt;
If a serial job produces an error, call the getDebugLog method to view the error log file.  When submitting independent jobs with multiple tasks, specify the task number.  &lt;br /&gt;
&amp;gt;&amp;gt; c.getDebugLog(job.Tasks(3))&lt;br /&gt;
For Pool jobs, only specify the job object.&lt;br /&gt;
&amp;gt;&amp;gt; c.getDebugLog(job)&lt;br /&gt;
When troubleshooting a job, the cluster admin may request the scheduler ID of the job.  This can be obtained by calling schedID:&lt;br /&gt;
&amp;gt;&amp;gt; schedID(job)&lt;br /&gt;
ans = &lt;br /&gt;
25539&lt;br /&gt;
TO LEARN MORE&lt;br /&gt;
To learn more about the MATLAB Parallel Computing Toolbox, check out these resources:&lt;br /&gt;
•	Parallel Computing Coding Examples&lt;br /&gt;
•	Parallel Computing Documentation&lt;br /&gt;
•	Parallel Computing Overview&lt;br /&gt;
•	Parallel Computing Tutorials&lt;br /&gt;
•	Parallel Computing Videos&lt;br /&gt;
•	Parallel Computing Webinars&lt;/div&gt;</summary>
		<author><name>Haars001</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=Main_Page&amp;diff=2152</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=Main_Page&amp;diff=2152"/>
		<updated>2021-12-09T13:54:58Z</updated>

		<summary type="html">&lt;p&gt;Haars001: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Anunna is a [http://en.wikipedia.org/wiki/High-performance_computing High Performance Computing] (HPC) infrastructure hosted by [http://www.wageningenur.nl/nl/activiteit/Opening-High-Performance-Computing-cluster-HPC.htm Wageningen University &amp;amp; Research Centre]. It is open to all WUR research groups as well as other organizations, including companies, that have collaborative projects with WUR.&lt;br /&gt;
&lt;br /&gt;
= Using Anunna =&lt;br /&gt;
* [[Tariffs | Costs associated with resource usage]]&lt;br /&gt;
&lt;br /&gt;
== Gaining access to Anunna==&lt;br /&gt;
Access to the cluster and file transfer are traditionally done via [http://en.wikipedia.org/wiki/Secure_Shell SSH and SFTP].&lt;br /&gt;
* [[log_in_to_B4F_cluster | Logging into cluster using ssh]]&lt;br /&gt;
* [[file_transfer | File transfer options]]&lt;br /&gt;
* [[Services | Alternative access methods, and extra features and services on Anunna]]&lt;br /&gt;
* [[Filesystems | Data storage methods on Anunna]]&lt;br /&gt;
&lt;br /&gt;
== Access Policy ==&lt;br /&gt;
[[Access_Policy | Main Article: Access Policy]]&lt;br /&gt;
&lt;br /&gt;
Access needs to be granted actively (by creation of an account on the cluster by FB-IT). Use of resources is limited by the scheduler: priority access to the system&#039;s resources is regulated by the queues (&#039;partitions&#039;) granted to a user. Note that the use of Anunna is not free of charge. The list price of CPU time and storage, and possible discounts on that list price for your organisation, can be obtained from Shared Research Facilities or FB-IT.&lt;br /&gt;
&lt;br /&gt;
= Events =&lt;br /&gt;
* [[Courses]] that have happened and are happening&lt;br /&gt;
* [[Downtime]] that will affect all users&lt;br /&gt;
* [[Meetings]] that may affect the policies of Anunna&lt;br /&gt;
&lt;br /&gt;
= Other Software =&lt;br /&gt;
&lt;br /&gt;
== Cluster Management Software and Scheduler ==&lt;br /&gt;
Anunna uses Bright Cluster Manager software for overall cluster management, and Slurm as job scheduler.&lt;br /&gt;
* [[BCM_on_B4F_cluster | Monitor cluster status with BCM]]&lt;br /&gt;
* [[Using_Slurm | Submit jobs with Slurm]]&lt;br /&gt;
* [[node_usage_graph | See the cluster&#039;s current load with &#039;node_usage_graph&#039;]]&lt;br /&gt;
* [[SLURM_Compare | Rosetta Stone of Workload Managers]]&lt;br /&gt;
&lt;br /&gt;
== Installation of software by users ==&lt;br /&gt;
&lt;br /&gt;
* [[Domain_specific_software_on_B4Fcluster_installation_by_users | Installing domain specific software: installation by users]]&lt;br /&gt;
* [[Setting local variables]]&lt;br /&gt;
* [[Installing_R_packages_locally | Installing R packages locally]]&lt;br /&gt;
* [[Setting_up_Python_virtualenv | Setting up and using a virtual environment for Python3 ]]&lt;br /&gt;
* [[Virtual_environment_Python_3.4_or_higher | Setting up and using a virtual environment for Python3.4 or higher ]]&lt;br /&gt;
* [[Installing WRF and WPS]]&lt;br /&gt;
* [[Running scripts on a fixed timeschedule (cron)]]&lt;br /&gt;
&lt;br /&gt;
== Installed software ==&lt;br /&gt;
&lt;br /&gt;
* [[Globally_installed_software | Globally installed software]]&lt;br /&gt;
* [[ABGC_modules | ABGC specific modules]]&lt;br /&gt;
&lt;br /&gt;
= Useful Notes = &lt;br /&gt;
&lt;br /&gt;
== Being in control of Environment parameters ==&lt;br /&gt;
&lt;br /&gt;
* [[Using_environment_modules | Using environment modules]]&lt;br /&gt;
* [[Setting local variables]]&lt;br /&gt;
* [[Setting_TMPDIR | Set a custom temporary directory location]]&lt;br /&gt;
* [[Installing_R_packages_locally | Installing R packages locally]]&lt;br /&gt;
* [[Setting_up_Python_virtualenv | Setting up and using a virtual environment for Python3 ]]&lt;br /&gt;
&lt;br /&gt;
== Controlling costs ==&lt;br /&gt;
&lt;br /&gt;
* [[SACCT | using SACCT to see your costs]]&lt;br /&gt;
* [[get_my_bill | using the &amp;quot;get_my_bill&amp;quot; script to estimate costs]]&lt;br /&gt;
&lt;br /&gt;
== Management ==&lt;br /&gt;
Product Owner of Anunna is Alexander van Ittersum (Wageningen UR, FB-IT, C&amp;amp;PS). [[User:dawes001 | Gwen Dawes (Wageningen UR, FB-IT, C&amp;amp;PS)]] and [[User:haars001 | Jan van Haarst (Wageningen UR, FB-IT, C&amp;amp;PS)]] are responsible for [[Maintenance_and_Management | Maintenance and Management]] of the cluster.&lt;br /&gt;
&lt;br /&gt;
* [[Roadmap | Ambitions regarding innovation, support and administration of Anunna ]]&lt;br /&gt;
&lt;br /&gt;
= Miscellaneous =&lt;br /&gt;
* [[Mailinglist | Electronic mail discussion lists]]&lt;br /&gt;
* [[History_of_the_Cluster | Historical information on the startup of Anunna]]&lt;br /&gt;
* [[Bioinformatics_tips_tricks_workflows | Bioinformatics tips, tricks, and workflows]]&lt;br /&gt;
* [[Parallel_R_code_on_SLURM | Running parallel R code on SLURM]]&lt;br /&gt;
* [[Convert_between_MediaWiki_and_other_formats | Convert between MediaWiki format and other formats]]&lt;br /&gt;
* [[Manual GitLab | GitLab: Create projects and add scripts]]&lt;br /&gt;
* [[Monitoring_executions | Monitoring job execution]]&lt;br /&gt;
* [[Shared_folders | Working with shared folders in the Lustre file system]]&lt;br /&gt;
&lt;br /&gt;
= See also =&lt;br /&gt;
* [[Maintenance_and_Management | Maintenance and Management]]&lt;br /&gt;
* [[BCData | BCData]]&lt;br /&gt;
* [[Mailinglist | Electronic mail discussion lists]]&lt;br /&gt;
* [[About_ABGC | About ABGC]]&lt;br /&gt;
* [[Computer_cluster | High Performance Computing @ABGC]]&lt;br /&gt;
* [[Lustre_PFS_layout | Lustre Parallel File System layout]]&lt;br /&gt;
&lt;br /&gt;
= External links =&lt;br /&gt;
{| width=&amp;quot;90%&amp;quot;&lt;br /&gt;
|- valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| width=&amp;quot;30%&amp;quot; |&lt;br /&gt;
* [https://www.wur.nl/en/Value-Creation-Cooperation/Facilities/Wageningen-Shared-Research-Facilities/Our-facilities/Show/High-Performance-Computing-Cluster-HPC-Anunna.htm SRF offers an HPC facility]&lt;br /&gt;
| width=&amp;quot;30%&amp;quot; |&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Scientific_Linux Scientific Linux]&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Help:Cheatsheet Help with editing Wiki pages]&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Haars001</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=Shared_folders&amp;diff=2151</id>
		<title>Shared folders</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=Shared_folders&amp;diff=2151"/>
		<updated>2021-11-25T17:37:27Z</updated>

		<summary type="html">&lt;p&gt;Haars001: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Working with shared folders on Anunna =&lt;br /&gt;
&lt;br /&gt;
If you work in a group or team, it is sometimes useful to work within a shared space. Users can then share inputs to their models and easily make their outputs available to each other. This article explains how to do so within the Lustre file system and in home or archive folders (NFS).&lt;br /&gt;
&lt;br /&gt;
There are two main methods available to you: Access Control List (ACL) access, which you can administer yourself, and group access (either via AD groups or via Anunna-local groups), which is centrally administered.&lt;br /&gt;
&lt;br /&gt;
Below we will split out the options for each method.&lt;br /&gt;
&lt;br /&gt;
== ACL shared directories ==&lt;br /&gt;
=== ACL shared directories on Lustre ===&lt;br /&gt;
You may create a folder that can be accessed by yourself and someone else in the following manner:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
cd /lustre/shared&lt;br /&gt;
mkdir shared_folder&lt;br /&gt;
chmod 700 shared_folder&lt;br /&gt;
setfacl -R -m u:my_id:rwx shared_folder&lt;br /&gt;
setfacl -R -d -m u:my_id:rwx shared_folder&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then, for each person who you want to have access to this:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
setfacl -R -m u:my_friend:rwx shared_folder&lt;br /&gt;
setfacl -R -d -m u:my_friend:rwx shared_folder&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Adding users later can be done using the same method, but it can be cumbersome. &lt;br /&gt;
You may have trouble updating ACLs on files that aren&#039;t yours, and you cannot change ownership of files to yourself. &lt;br /&gt;
Each user with files in the folder will need to update their ACLs appropriately themselves, or you can contact your sysadmins to assist.&lt;br /&gt;
&lt;br /&gt;
=== ACL shared directories on NFS folders ===&lt;br /&gt;
&lt;br /&gt;
If you want to share, e.g., your home folder with another user, follow these steps:&lt;br /&gt;
&lt;br /&gt;
==== Set access rights on folder ====&lt;br /&gt;
&lt;br /&gt;
If you want, for example, to allow somebody (as identified by their user ID) read access to your home folder, run this:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
nfs4_setfacl -a A::haars001@wurnet.nl:RX $HOME&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Group shared directories ==&lt;br /&gt;
&lt;br /&gt;
Users access the Anunna cluster with their WUR-wide (Active Directory) or Anunna-only account. This means that all the membership information of the AD is also available on Anunna. To check which groups your user is a member of, use the following command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;groups &amp;lt;username&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This can result in a rather long list, reflecting permissions in the system. Within these groups you must then identify the one that most closely matches the team or group with which you wish to collaborate.&lt;br /&gt;
&lt;br /&gt;
For instance, if I wish to work together with colleagues at ISRIC, I can search within my groups for an appropriate match:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;groups duque004 | grep isric&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In my case the group des-isric-users looked appropriate. The next step is to confirm that the other users in my team are also members of the group.&lt;br /&gt;
&lt;br /&gt;
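To confirm who belongs to a candidate group, you can list its members. A minimal sketch; `des-isric-users` is the example group from this page, so substitute your own:

```shell
# List the members of a group, one per line, to confirm your
# colleagues are in it. "des-isric-users" is the example group name.
grp=des-isric-users
getent group "$grp" | cut -d: -f4 | tr ',' '\n'
```

If the group does not exist, `getent` prints nothing, which is itself a useful signal that you need to ask the administrators.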
If a group isn&#039;t available (e.g. for cooperation with people outside WUR), please ask the administrators for help; they can set up a group for you.&lt;br /&gt;
&lt;br /&gt;
=== Creating a shared Lustre folder with correct permissions ===&lt;br /&gt;
&lt;br /&gt;
The Lustre file system is accessible in the &amp;lt;code&amp;gt;/lustre&amp;lt;/code&amp;gt; folder and then divided into the &amp;lt;code&amp;gt;/backup&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;/nobackup&amp;lt;/code&amp;gt; sections (corresponding to the different usage plans). Inside each of these folders there is a sub-folder named &amp;lt;code&amp;gt;SHARED&amp;lt;/code&amp;gt; in which users are to create their own assets.&lt;br /&gt;
&lt;br /&gt;
You start by creating a folder in this space; it is probably better if it matches the name of your group or team, e.g.:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;mkdir /lustre/nobackup/SHARED/myTeamWorkspace&amp;lt;/code&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Or, alternatively:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;cd /lustre/nobackup/SHARED&lt;br /&gt;
&lt;br /&gt;
mkdir myTeamWorkspace&amp;lt;/code&amp;gt; &lt;br /&gt;
&lt;br /&gt;
=== Setting permissions ===&lt;br /&gt;
&lt;br /&gt;
Three basic steps are involved in setting permissions correctly:&lt;br /&gt;
&lt;br /&gt;
1. Pass group ownership of the folder to the team. In the example below this is applied recursively to all sub-folders and files that may exist:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;chgrp -R my-team-group myTeamWorkspace&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
2. Grant read/write permissions to the group. This allows other members of the group to read and write in the shared folder. If you wish other team members to only read from the folder, then remove the &amp;lt;code&amp;gt;w&amp;lt;/code&amp;gt; character from the &amp;lt;code&amp;gt;+rw&amp;lt;/code&amp;gt; bit:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;chmod -R g+rw myTeamWorkspace&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
3. Set default group ownership. This guarantees that any new files or folders created within the shared folder are by default owned by your team group:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;chmod -R g+s myTeamWorkspace&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In case the contents of the shared folder are sensitive or private and should only be accessed by your team, you can block access by any other users with the following command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;chmod -R o-rw myTeamWorkspace&amp;lt;/code&amp;gt;&lt;br /&gt;
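The whole sequence can be tried end to end on a scratch directory. In this sketch the current user's primary group (`$(id -gn)`) stands in for the real team group so that the example works anywhere:

```shell
# Runnable sketch of the three permission steps on a scratch directory.
# "$(id -gn)" stands in for the real team group name.
ws=$(mktemp -d)/myTeamWorkspace
mkdir "$ws"
chmod 750 "$ws"                 # private baseline for the demo
chgrp -R "$(id -gn)" "$ws"      # 1. hand group ownership to the team
chmod -R g+rw "$ws"             # 2. group members may read and write
chmod -R g+s "$ws"              # 3. new files inherit the team group
chmod -R o-rw "$ws"             # optional: lock out everyone else
stat -c '%A' "$ws"              # -> drwxrws---
```

The `s` in the group triple confirms the setgid bit from step 3, and the trailing `---` confirms that other users are shut out.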
&lt;br /&gt;
&lt;br /&gt;
== Further reading ==&lt;br /&gt;
&lt;br /&gt;
[https://www.digitalocean.com/community/tutorials/an-introduction-to-linux-permissions An Introduction to Linux Permissions]&lt;br /&gt;
&lt;br /&gt;
[https://www.linode.com/docs/tools-reference/linux-users-and-groups/ Linux Users and Groups]&lt;br /&gt;
&lt;br /&gt;
[https://www.digitalocean.com/community/tutorials/linux-permissions-basics-and-how-to-use-umask-on-a-vps#types-of-permissions Linux Permissions Basics and How to Use Umask on a VPS]&lt;br /&gt;
&lt;br /&gt;
[http://www.yolinux.com/TUTORIALS/LinuxTutorialManagingGroups.html Linux Tutorial - Managing Group Access on Linux and UNIX]&lt;/div&gt;</summary>
		<author><name>Haars001</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=Shared_folders&amp;diff=2150</id>
		<title>Shared folders</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=Shared_folders&amp;diff=2150"/>
		<updated>2021-11-25T17:29:47Z</updated>

		<summary type="html">&lt;p&gt;Haars001: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Working with shared folders on Anunna =&lt;br /&gt;
&lt;br /&gt;
If you work in a group or team, it is sometimes useful to work within a shared space. Users can thus share inputs to their models and also make their outputs easily available to each other. This article explains how to do so within the Lustre file system and the home or archive folder (NFS).&lt;br /&gt;
&lt;br /&gt;
There are two main methods available to you: Access Control List (ACL) access, which you can administer yourself, and group access, either via AD rights or via Anunna-only groups (both of which are centrally administered).&lt;br /&gt;
&lt;br /&gt;
Below we will split out the options for each method.&lt;br /&gt;
&lt;br /&gt;
== ACL shared directories ==&lt;br /&gt;
=== ACL shared directories on Lustre ===&lt;br /&gt;
You may create a folder that can be accessed by yourself and someone else in the following manner:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
cd /lustre/shared&lt;br /&gt;
mkdir shared_folder&lt;br /&gt;
chmod 700 shared_folder&lt;br /&gt;
setfacl -R -m u:my_id:rwx shared_folder&lt;br /&gt;
setfacl -R -d -m u:my_id:rwx shared_folder&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then, for each person you want to give access:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
setfacl -R -m u:my_friend:rwx shared_folder&lt;br /&gt;
setfacl -R -d -m u:my_friend:rwx shared_folder&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Adding users later can be done using the same method, but it can be cumbersome. &lt;br /&gt;
You may have trouble updating ACLs on files that aren&#039;t yours, and you cannot change ownership of files to yourself. &lt;br /&gt;
Each user with files in the folder will need to update their ACLs appropriately themselves, or you can contact your sysadmins to assist.&lt;br /&gt;
&lt;br /&gt;
=== ACL shared directories on NFS folders ===&lt;br /&gt;
&lt;br /&gt;
If you want to share, e.g., your home folder with another user, follow these steps:&lt;br /&gt;
==== Get user information ====&lt;br /&gt;
To be able to share data with a single user, you will need that person&#039;s user ID. &lt;br /&gt;
You can retrieve it with:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
id my_friend | cut -f 1 -d &#039; &#039;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Set access rights on folder ====&lt;br /&gt;
&lt;br /&gt;
If you want, for example, to allow somebody (as identified by their user ID) read access to your home folder, run this:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
nfs4_setfacl -a A::haars001@wurnet.nl:RX $HOME&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Group shared directories ==&lt;br /&gt;
&lt;br /&gt;
Users access the Anunna cluster with their WUR-wide (Active Directory) or Anunna-only account. This means that all the membership information of the AD is also available on Anunna. To check which groups your user is a member of, use the following command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;groups &amp;lt;username&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This can result in a rather long list, reflecting permissions in the system. Within these groups you must then identify the one that most closely matches the team or group with which you wish to collaborate.&lt;br /&gt;
&lt;br /&gt;
For instance, if I wish to work together with colleagues at ISRIC, I can search within my groups for an appropriate match:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;groups duque004 | grep isric&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In my case the group des-isric-users looked appropriate. The next step is to confirm that the other users in my team are also members of the group.&lt;br /&gt;
&lt;br /&gt;
If a group isn&#039;t available (e.g. for cooperation with people outside WUR), please ask the administrators for help; they can set up a group for you.&lt;br /&gt;
&lt;br /&gt;
=== Creating a shared Lustre folder with correct permissions ===&lt;br /&gt;
&lt;br /&gt;
The Lustre file system is accessible in the &amp;lt;code&amp;gt;/lustre&amp;lt;/code&amp;gt; folder and then divided into the &amp;lt;code&amp;gt;/backup&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;/nobackup&amp;lt;/code&amp;gt; sections (corresponding to the different usage plans). Inside each of these folders there is a sub-folder named &amp;lt;code&amp;gt;SHARED&amp;lt;/code&amp;gt; in which users are to create their own assets.&lt;br /&gt;
&lt;br /&gt;
You start by creating a folder in this space; it is probably better if it matches the name of your group or team, e.g.:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;mkdir /lustre/nobackup/SHARED/myTeamWorkspace&amp;lt;/code&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Or, alternatively:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;cd /lustre/nobackup/SHARED&lt;br /&gt;
&lt;br /&gt;
mkdir myTeamWorkspace&amp;lt;/code&amp;gt; &lt;br /&gt;
&lt;br /&gt;
=== Setting permissions ===&lt;br /&gt;
&lt;br /&gt;
Three basic steps are involved in setting permissions correctly:&lt;br /&gt;
&lt;br /&gt;
1. Pass group ownership of the folder to the team. In the example below this is applied recursively to all sub-folders and files that may exist:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;chgrp -R my-team-group myTeamWorkspace&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
2. Grant read/write permissions to the group. This allows other members of the group to read and write in the shared folder. If you wish other team members to only read from the folder, then remove the &amp;lt;code&amp;gt;w&amp;lt;/code&amp;gt; character from the &amp;lt;code&amp;gt;+rw&amp;lt;/code&amp;gt; bit:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;chmod -R g+rw myTeamWorkspace&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
3. Set default group ownership. This guarantees that any new files or folders created within the shared folder are by default owned by your team group:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;chmod -R g+s myTeamWorkspace&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In case the contents of the shared folder are sensitive or private and should only be accessed by your team, you can block access by any other users with the following command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;chmod -R o-rw myTeamWorkspace&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Further reading ==&lt;br /&gt;
&lt;br /&gt;
[https://www.digitalocean.com/community/tutorials/an-introduction-to-linux-permissions An Introduction to Linux Permissions]&lt;br /&gt;
&lt;br /&gt;
[https://www.linode.com/docs/tools-reference/linux-users-and-groups/ Linux Users and Groups]&lt;br /&gt;
&lt;br /&gt;
[https://www.digitalocean.com/community/tutorials/linux-permissions-basics-and-how-to-use-umask-on-a-vps#types-of-permissions Linux Permissions Basics and How to Use Umask on a VPS]&lt;br /&gt;
&lt;br /&gt;
[http://www.yolinux.com/TUTORIALS/LinuxTutorialManagingGroups.html Linux Tutorial - Managing Group Access on Linux and UNIX]&lt;/div&gt;</summary>
		<author><name>Haars001</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=Shared_folders&amp;diff=2148</id>
		<title>Shared folders</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=Shared_folders&amp;diff=2148"/>
		<updated>2021-11-22T11:00:45Z</updated>

		<summary type="html">&lt;p&gt;Haars001: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Working with shared folders on Anunna =&lt;br /&gt;
&lt;br /&gt;
If you work in a group or team, it is sometimes useful to work within a shared space. Users can thus share inputs to their models and also make their outputs easily available to each other. This article explains how to do so within the Lustre file system and the home or archive folder (NFS).&lt;br /&gt;
&lt;br /&gt;
There are two main methods available to you: Access Control List (ACL) access, which you can administer yourself, and group access, either via AD rights or via Anunna-only groups (both of which are centrally administered).&lt;br /&gt;
&lt;br /&gt;
Below we will split out the options for each method.&lt;br /&gt;
&lt;br /&gt;
== ACL shared directories ==&lt;br /&gt;
=== ACL shared directories on Lustre ===&lt;br /&gt;
You may create a folder that can be accessed by yourself and someone else in the following manner:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
cd /lustre/shared&lt;br /&gt;
mkdir shared_folder&lt;br /&gt;
chmod 700 shared_folder&lt;br /&gt;
setfacl -R -m u:my_id:rwx shared_folder&lt;br /&gt;
setfacl -R -d -m u:my_id:rwx shared_folder&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then, for each person you want to give access:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
setfacl -R -m u:my_friend:rwx shared_folder&lt;br /&gt;
setfacl -R -d -m u:my_friend:rwx shared_folder&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Adding users later can be done using the same method, but it can be cumbersome. &lt;br /&gt;
You may have trouble updating ACLs on files that aren&#039;t yours, and you cannot change ownership of files to yourself. &lt;br /&gt;
Each user with files in the folder will need to update their ACLs appropriately themselves, or you can contact your sysadmins to assist.&lt;br /&gt;
&lt;br /&gt;
=== ACL shared directories on NFS folders ===&lt;br /&gt;
&lt;br /&gt;
If you want to share, e.g., your home folder with another user, follow these steps:&lt;br /&gt;
==== Get user information ====&lt;br /&gt;
To be able to share data with a single user, you will need that person&#039;s user ID. &lt;br /&gt;
You can retrieve it with:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
id my_friend | cut -f 1 -d &#039; &#039;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Set access rights on folder ====&lt;br /&gt;
&lt;br /&gt;
If you want, for example, to allow somebody (as identified by their user ID) read access to your home folder, run this:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
nfs4_setfacl -R -a A:df:16825946:RX $HOME&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Group shared directories ==&lt;br /&gt;
&lt;br /&gt;
Users access the Anunna cluster with their WUR-wide (Active Directory) or Anunna-only account. This means that all the membership information of the AD is also available on Anunna. To check which groups your user is a member of, use the following command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;groups &amp;lt;username&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This can result in a rather long list, reflecting permissions in the system. Within these groups you must then identify the one that most closely matches the team or group with which you wish to collaborate.&lt;br /&gt;
&lt;br /&gt;
For instance, if I wish to work together with colleagues at ISRIC, I can search within my groups for an appropriate match:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;groups duque004 | grep isric&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In my case the group des-isric-users looked appropriate. The next step is to confirm that the other users in my team are also members of the group.&lt;br /&gt;
&lt;br /&gt;
If a group isn&#039;t available (e.g. for cooperation with people outside WUR), please ask the administrators for help; they can set up a group for you.&lt;br /&gt;
&lt;br /&gt;
=== Creating a shared Lustre folder with correct permissions ===&lt;br /&gt;
&lt;br /&gt;
The Lustre file system is accessible in the &amp;lt;code&amp;gt;/lustre&amp;lt;/code&amp;gt; folder and then divided into the &amp;lt;code&amp;gt;/backup&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;/nobackup&amp;lt;/code&amp;gt; sections (corresponding to the different usage plans). Inside each of these folders there is a sub-folder named &amp;lt;code&amp;gt;SHARED&amp;lt;/code&amp;gt; in which users are to create their own assets.&lt;br /&gt;
&lt;br /&gt;
You start by creating a folder in this space; it is probably better if it matches the name of your group or team, e.g.:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;mkdir /lustre/nobackup/SHARED/myTeamWorkspace&amp;lt;/code&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Or, alternatively:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;cd /lustre/nobackup/SHARED&lt;br /&gt;
&lt;br /&gt;
mkdir myTeamWorkspace&amp;lt;/code&amp;gt; &lt;br /&gt;
&lt;br /&gt;
=== Setting permissions ===&lt;br /&gt;
&lt;br /&gt;
Three basic steps are involved in setting permissions correctly:&lt;br /&gt;
&lt;br /&gt;
1. Pass group ownership of the folder to the team. In the example below this is applied recursively to all sub-folders and files that may exist:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;chgrp -R my-team-group myTeamWorkspace&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
2. Grant read/write permissions to the group. This allows other members of the group to read and write in the shared folder. If you wish other team members to only read from the folder, then remove the &amp;lt;code&amp;gt;w&amp;lt;/code&amp;gt; character from the &amp;lt;code&amp;gt;+rw&amp;lt;/code&amp;gt; bit:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;chmod -R g+rw myTeamWorkspace&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
3. Set default group ownership. This guarantees that any new files or folders created within the shared folder are by default owned by your team group:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;chmod -R g+s myTeamWorkspace&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In case the contents of the shared folder are sensitive or private and should only be accessed by your team, you can block access by any other users with the following command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;chmod -R o-rw myTeamWorkspace&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Further reading ==&lt;br /&gt;
&lt;br /&gt;
[https://www.digitalocean.com/community/tutorials/an-introduction-to-linux-permissions An Introduction to Linux Permissions]&lt;br /&gt;
&lt;br /&gt;
[https://www.linode.com/docs/tools-reference/linux-users-and-groups/ Linux Users and Groups]&lt;br /&gt;
&lt;br /&gt;
[https://www.digitalocean.com/community/tutorials/linux-permissions-basics-and-how-to-use-umask-on-a-vps#types-of-permissions Linux Permissions Basics and How to Use Umask on a VPS]&lt;br /&gt;
&lt;br /&gt;
[http://www.yolinux.com/TUTORIALS/LinuxTutorialManagingGroups.html Linux Tutorial - Managing Group Access on Linux and UNIX]&lt;/div&gt;</summary>
		<author><name>Haars001</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=Shared_folders&amp;diff=2147</id>
		<title>Shared folders</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=Shared_folders&amp;diff=2147"/>
		<updated>2021-11-22T10:38:40Z</updated>

		<summary type="html">&lt;p&gt;Haars001: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Working with shared folders on Anunna =&lt;br /&gt;
&lt;br /&gt;
If you work in a group or team, it is sometimes useful to work within a shared space. Users can thus share inputs to their models and also make their outputs easily available to each other. This article explains how to do so within the Lustre file system and the home or archive folder (NFS).&lt;br /&gt;
&lt;br /&gt;
There are three main methods available to you: Access Control List (ACL) access, which you can administer yourself, group access via AD rights, or group access within Anunna (the latter two are centrally administered).&lt;br /&gt;
&lt;br /&gt;
Below we will split out the options for each type of storage (Lustre first, and then the home folder).&lt;br /&gt;
&lt;br /&gt;
== Working with shared folders in the Lustre file system ==&lt;br /&gt;
&lt;br /&gt;
=== ACL shared directories ===&lt;br /&gt;
You may create a folder that can be accessed by yourself and someone else in the following manner:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
cd /lustre/shared&lt;br /&gt;
mkdir shared_folder&lt;br /&gt;
chmod 700 shared_folder&lt;br /&gt;
setfacl -R -m u:my_id:rwx shared_folder&lt;br /&gt;
setfacl -R -d -m u:my_id:rwx shared_folder&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then, for each person you want to give access:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
setfacl -R -m u:my_friend:rwx shared_folder&lt;br /&gt;
setfacl -R -d -m u:my_friend:rwx shared_folder&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Adding users later can be done using the same method, but it can be cumbersome. &lt;br /&gt;
You may have trouble updating ACLs on files that aren&#039;t yours, and you cannot change ownership of files to yourself. &lt;br /&gt;
Each user with files in the folder will need to update their ACLs appropriately themselves, or you can contact your sysadmins to assist.&lt;br /&gt;
&lt;br /&gt;
=== Group shared directories ===&lt;br /&gt;
&lt;br /&gt;
Users access the Anunna cluster with their WUR-wide account. This means that all the membership information is also available on Anunna. To check which groups your user is a member of, use the following command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;groups &amp;lt;username&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This can result in a rather long list, reflecting permissions in the overall WUR systems. Within these groups you must then identify the one that most closely matches the team or group with which you wish to collaborate.&lt;br /&gt;
&lt;br /&gt;
For instance, if I wish to work together with colleagues at ISRIC, I can search within my groups for an appropriate match:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;groups duque004 | grep isric&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In my case the group des-isric-users looked appropriate. The next step is to confirm that the other users in my team are also members of the group.&lt;br /&gt;
&lt;br /&gt;
=== Creating a shared folder with correct permissions ===&lt;br /&gt;
&lt;br /&gt;
The Lustre file system is accessible in the &amp;lt;code&amp;gt;/lustre&amp;lt;/code&amp;gt; folder and then divided into the &amp;lt;code&amp;gt;/backup&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;/nobackup&amp;lt;/code&amp;gt; sections (corresponding to the different usage plans). Inside each of these folders there is a sub-folder named &amp;lt;code&amp;gt;SHARED&amp;lt;/code&amp;gt; in which users are to create their own assets.&lt;br /&gt;
&lt;br /&gt;
You start by creating a folder in this space; it is probably better if it matches the name of your group or team, e.g.:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;mkdir /lustre/nobackup/SHARED/myTeamWorkspace&amp;lt;/code&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Or, alternatively:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;cd /lustre/nobackup/SHARED&lt;br /&gt;
&lt;br /&gt;
mkdir myTeamWorkspace&amp;lt;/code&amp;gt; &lt;br /&gt;
&lt;br /&gt;
=== Setting permissions ===&lt;br /&gt;
&lt;br /&gt;
Three basic steps are involved in setting permissions correctly:&lt;br /&gt;
&lt;br /&gt;
1. Pass group ownership of the folder to the team. In the example below this is applied recursively to all sub-folders and files that may exist:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;chgrp -R my-team-group myTeamWorkspace&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
2. Grant read/write permissions to the group. This allows other members of the group to read and write in the shared folder. If you wish other team members to only read from the folder, then remove the &amp;lt;code&amp;gt;w&amp;lt;/code&amp;gt; character from the &amp;lt;code&amp;gt;+rw&amp;lt;/code&amp;gt; bit:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;chmod -R g+rw myTeamWorkspace&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
3. Set default group ownership. This guarantees that any new files or folders created within the shared folder are by default owned by your team group:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;chmod -R g+s myTeamWorkspace&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In case the contents of the shared folder are sensitive or private and should only be accessed by your team, you can block access by any other users with the following command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;chmod -R o-rw myTeamWorkspace&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Further reading ==&lt;br /&gt;
&lt;br /&gt;
[https://www.digitalocean.com/community/tutorials/an-introduction-to-linux-permissions An Introduction to Linux Permissions]&lt;br /&gt;
&lt;br /&gt;
[https://www.linode.com/docs/tools-reference/linux-users-and-groups/ Linux Users and Groups]&lt;br /&gt;
&lt;br /&gt;
[https://www.digitalocean.com/community/tutorials/linux-permissions-basics-and-how-to-use-umask-on-a-vps#types-of-permissions Linux Permissions Basics and How to Use Umask on a VPS]&lt;br /&gt;
&lt;br /&gt;
[http://www.yolinux.com/TUTORIALS/LinuxTutorialManagingGroups.html Linux Tutorial - Managing Group Access on Linux and UNIX]&lt;/div&gt;</summary>
		<author><name>Haars001</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=Shared_folders&amp;diff=2146</id>
		<title>Shared folders</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=Shared_folders&amp;diff=2146"/>
		<updated>2021-11-22T08:47:19Z</updated>

		<summary type="html">&lt;p&gt;Haars001: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
= Working with shared folders in the Lustre file system =&lt;br /&gt;
&lt;br /&gt;
If you work in a group or team and use large volumes of data, it is useful to work within a shared space. Users can thus share inputs to their models and also make their outputs easily available. This article explains how to do so within the Lustre file system, which presently supports Anunna.&lt;br /&gt;
&lt;br /&gt;
There are two main methods available to you: Access Control List (ACL) access (that you can administer yourself) or group access (that is centrally administered).&lt;br /&gt;
&lt;br /&gt;
== ACL shared directories ==&lt;br /&gt;
You may create a folder that can be accessed by yourself and someone else in the following manner:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
cd /lustre/shared&lt;br /&gt;
mkdir shared_folder&lt;br /&gt;
chmod 700 shared_folder&lt;br /&gt;
setfacl -R -m u:my_id:rwx shared_folder&lt;br /&gt;
setfacl -R -d -m u:my_id:rwx shared_folder&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then, for each person you want to give access:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
setfacl -R -m u:my_friend:rwx shared_folder&lt;br /&gt;
setfacl -R -d -m u:my_friend:rwx shared_folder&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Adding users later can be done using the same method, but it can be cumbersome. &lt;br /&gt;
You may have trouble updating ACLs on files that aren&#039;t yours, and you cannot change ownership of files to yourself. &lt;br /&gt;
Each user with files in the folder will need to update their ACLs appropriately themselves, or you can contact your sysadmins to assist.&lt;br /&gt;
&lt;br /&gt;
== Group shared directories ==&lt;br /&gt;
&lt;br /&gt;
Users access the Anunna cluster with their WUR-wide account. This means that all the membership information is also available on Anunna. To check which groups your user is a member of, use the following command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;groups &amp;lt;username&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This can result in a rather long list, reflecting permissions in the overall WUR systems. Within these groups you must then identify the one that most closely matches the team or group with which you wish to collaborate.&lt;br /&gt;
&lt;br /&gt;
For instance, if I wish to work together with colleagues at ISRIC, I can search within my groups for an appropriate match:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;groups duque004 | grep isric&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In my case the group des-isric-users looked appropriate. The next step is to confirm that the other users in my team are also members of the group.&lt;br /&gt;
&lt;br /&gt;
=== Creating a shared folder with correct permissions ===&lt;br /&gt;
&lt;br /&gt;
The Lustre file system is accessible in the &amp;lt;code&amp;gt;/lustre&amp;lt;/code&amp;gt; folder and then divided into the &amp;lt;code&amp;gt;/backup&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;/nobackup&amp;lt;/code&amp;gt; sections (corresponding to the different usage plans). Inside each of these folders there is a sub-folder named &amp;lt;code&amp;gt;SHARED&amp;lt;/code&amp;gt; in which users are to create their own assets.&lt;br /&gt;
&lt;br /&gt;
You start by creating a folder in this space; it is probably better if it matches the name of your group or team, e.g.:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;mkdir /lustre/nobackup/SHARED/myTeamWorkspace&amp;lt;/code&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Or, alternatively:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;cd /lustre/nobackup/SHARED&lt;br /&gt;
&lt;br /&gt;
mkdir myTeamWorkspace&amp;lt;/code&amp;gt; &lt;br /&gt;
&lt;br /&gt;
=== Setting permissions ===&lt;br /&gt;
&lt;br /&gt;
Three basic steps are involved in stepping permissions correctly:&lt;br /&gt;
&lt;br /&gt;
1. Pass the ownership of the group to the team. In the example below it is applied recursively to all sub-folder and files that may exist:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;chgrp -R my-team-group myTeamWorkspace&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
2. Concede read/write permissions to the group. This allows other members of the group to read and write in the shared folder. If you wish other team members to only read from the folder then remove the &amp;lt;code&amp;gt;w&amp;lt;/code&amp;gt; character from the &amp;lt;code&amp;gt;+rw&amp;lt;/code&amp;gt; bit:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;chmod -R g+rw myTeamWorkspace&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
3. Set the setgid bit on the folder. This guarantees that any new files or folders created within the shared folder are owned by your team group by default:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;chmod -R g+s myTeamWorkspace&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In case the contents of the shared folder are sensitive or private and should only be accessed by your team, you can block access for all other users with the following command (the &amp;lt;code&amp;gt;x&amp;lt;/code&amp;gt; bit is removed as well, since execute permission alone would still let other users traverse into the folder):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;chmod -R o-rwx myTeamWorkspace&amp;lt;/code&amp;gt;&lt;br /&gt;
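Putting the three steps together, the sequence can be rehearsed on a throw-away directory before touching &amp;lt;code&amp;gt;/lustre&amp;lt;/code&amp;gt;. The workspace name is the example from this page; the chgrp step is commented out because it requires membership of a real team group:&lt;br /&gt;

```shell
# Rehearse the permission steps from this page on a scratch directory.
dir="$(mktemp -d)/myTeamWorkspace"
mkdir -p "$dir"

# chgrp -R my-team-group "$dir"   # step 1: needs a real team group
chmod -R g+rw "$dir"              # step 2: group may read and write
chmod -R g+s  "$dir"              # step 3: new content inherits the group
chmod -R o-rwx "$dir"             # optional: lock out all other users

stat -c '%A' "$dir"               # with the default umask: drwxrws---
```

The &amp;lt;code&amp;gt;s&amp;lt;/code&amp;gt; in the group triplet of the output confirms the setgid bit is in place.&lt;br /&gt;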
&lt;br /&gt;
== Further reading ==&lt;br /&gt;
&lt;br /&gt;
[https://www.digitalocean.com/community/tutorials/an-introduction-to-linux-permissions An Introduction to Linux Permissions]&lt;br /&gt;
&lt;br /&gt;
[https://www.linode.com/docs/tools-reference/linux-users-and-groups/ Linux Users and Groups]&lt;br /&gt;
&lt;br /&gt;
[https://www.digitalocean.com/community/tutorials/linux-permissions-basics-and-how-to-use-umask-on-a-vps#types-of-permissions Linux Permissions Basics and How to Use Umask on a VPS]&lt;br /&gt;
&lt;br /&gt;
[http://www.yolinux.com/TUTORIALS/LinuxTutorialManagingGroups.html Linux Tutorial - Managing Group Access on Linux and UNIX]&lt;/div&gt;</summary>
		<author><name>Haars001</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=Main_Page&amp;diff=2109</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=Main_Page&amp;diff=2109"/>
		<updated>2021-06-29T07:08:08Z</updated>

		<summary type="html">&lt;p&gt;Haars001: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Anunna is a [http://en.wikipedia.org/wiki/High-performance_computing High Performance Computer] (HPC) infrastructure hosted by [http://www.wageningenur.nl/nl/activiteit/Opening-High-Performance-Computing-cluster-HPC.htm Wageningen University &amp;amp; Research Centre]. It is open for use by all WUR research groups as well as other organizations, including companies, that have collaborative projects with WUR. &lt;br /&gt;
&lt;br /&gt;
= Using Anunna =&lt;br /&gt;
* [[Tariffs | Costs associated with resource usage]]&lt;br /&gt;
&lt;br /&gt;
== Gaining access to Anunna==&lt;br /&gt;
Access to the cluster and file transfer are traditionally done via [http://en.wikipedia.org/wiki/Secure_Shell SSH and SFTP].&lt;br /&gt;
* [[log_in_to_B4F_cluster | Logging into cluster using ssh]]&lt;br /&gt;
* [[file_transfer | File transfer options]]&lt;br /&gt;
* [[Services | Alternative access methods, and extra features and services on Anunna]]&lt;br /&gt;
* [[Filesystems | Data storage methods on Anunna]]&lt;br /&gt;
&lt;br /&gt;
== Access Policy ==&lt;br /&gt;
[[Access_Policy | Main Article: Access Policy]]&lt;br /&gt;
&lt;br /&gt;
Access needs to be granted actively (by creation of an account on the cluster by FB-IT). Use of resources is limited by the scheduler: a user&#039;s priority to the system&#039;s resources depends on which queues (&#039;partitions&#039;) have been granted to that user. Note that the use of Anunna is not free of charge. The list price of CPU time and storage, and possible discounts on that list price for your organisation, can be obtained from Shared Research Facilities or FB-IT.&lt;br /&gt;
&lt;br /&gt;
= Events =&lt;br /&gt;
* [[Courses]] that have happened and are happening&lt;br /&gt;
* [[Downtime]] that will affect all users&lt;br /&gt;
* [[Meetings]] that may affect the policies of Anunna&lt;br /&gt;
&lt;br /&gt;
= Other Software =&lt;br /&gt;
&lt;br /&gt;
== Cluster Management Software and Scheduler ==&lt;br /&gt;
Anunna uses Bright Cluster Manager software for overall cluster management, and Slurm as job scheduler.&lt;br /&gt;
* [[BCM_on_B4F_cluster | Monitor cluster status with BCM]]&lt;br /&gt;
* [[Using_Slurm | Submit jobs with Slurm]]&lt;br /&gt;
* [[node_usage_graph | Be aware of how much work the cluster is under right now with &#039;node_usage_graph&#039;]]&lt;br /&gt;
* [[SLURM_Compare | Rosetta Stone of Workload Managers]]&lt;br /&gt;
&lt;br /&gt;
== Installation of software by users ==&lt;br /&gt;
&lt;br /&gt;
* [[Domain_specific_software_on_B4Fcluster_installation_by_users | Installing domain specific software: installation by users]]&lt;br /&gt;
* [[Setting local variables]]&lt;br /&gt;
* [[Installing_R_packages_locally | Installing R packages locally]]&lt;br /&gt;
* [[Setting_up_Python_virtualenv | Setting up and using a virtual environment for Python3 ]]&lt;br /&gt;
* [[Virtual_environment_Python_3.4_or_higher | Setting up and using a virtual environment for Python3.4 or higher ]]&lt;br /&gt;
* [[Installing WRF and WPS]]&lt;br /&gt;
* [[Running scripts on a fixed timeschedule (cron)]]&lt;br /&gt;
&lt;br /&gt;
== Installed software ==&lt;br /&gt;
&lt;br /&gt;
* [[Globally_installed_software | Globally installed software]]&lt;br /&gt;
* [[ABGC_modules | ABGC specific modules]]&lt;br /&gt;
&lt;br /&gt;
= Useful Notes = &lt;br /&gt;
&lt;br /&gt;
== Being in control of Environment parameters ==&lt;br /&gt;
&lt;br /&gt;
* [[Using_environment_modules | Using environment modules]]&lt;br /&gt;
* [[Setting local variables]]&lt;br /&gt;
* [[Setting_TMPDIR | Set a custom temporary directory location]]&lt;br /&gt;
* [[Installing_R_packages_locally | Installing R packages locally]]&lt;br /&gt;
* [[Setting_up_Python_virtualenv | Setting up and using a virtual environment for Python3 ]]&lt;br /&gt;
&lt;br /&gt;
== Controlling costs ==&lt;br /&gt;
&lt;br /&gt;
* [[SACCT | using SACCT to see your costs]]&lt;br /&gt;
* [[get_my_bill | using the &amp;quot;get_my_bill&amp;quot; script to estimate costs]]&lt;br /&gt;
&lt;br /&gt;
== Management ==&lt;br /&gt;
Project Leader of Anunna is Stephen Janssen (Wageningen UR, FB-IT, Service Management). [[User:dawes001 | Gwen Dawes (Wageningen UR, FB-IT, Infrastructure)]] and [[User:bexke002 | Stefan Bexkens (Wageningen UR, FB-IT, Infrastructure)]] are responsible for [[Maintenance_and_Management | Maintenance and Management]] of the cluster.&lt;br /&gt;
&lt;br /&gt;
* [[Roadmap | Ambitions regarding innovation, support and administration of Anunna ]]&lt;br /&gt;
&lt;br /&gt;
= Miscellaneous =&lt;br /&gt;
* [[Mailinglist | Electronic mail discussion lists]]&lt;br /&gt;
* [[History_of_the_Cluster | Historical information on the startup of Anunna]]&lt;br /&gt;
* [[Bioinformatics_tips_tricks_workflows | Bioinformatics tips, tricks, and workflows]]&lt;br /&gt;
* [[Parallel_R_code_on_SLURM | Running parallel R code on SLURM]]&lt;br /&gt;
* [[Convert_between_MediaWiki_and_other_formats | Convert between MediaWiki format and other formats]]&lt;br /&gt;
* [[Manual GitLab | GitLab: Create projects and add scripts]]&lt;br /&gt;
* [[Monitoring_executions | Monitoring job execution]]&lt;br /&gt;
* [[Shared_folders | Working with shared folders in the Lustre file system]]&lt;br /&gt;
&lt;br /&gt;
= See also =&lt;br /&gt;
* [[Maintenance_and_Management | Maintenance and Management]]&lt;br /&gt;
* [[BCData | BCData]]&lt;br /&gt;
* [[Mailinglist | Electronic mail discussion lists]]&lt;br /&gt;
* [[About_ABGC | About ABGC]]&lt;br /&gt;
* [[Computer_cluster | High Performance Computing @ABGC]]&lt;br /&gt;
* [[Lustre_PFS_layout | Lustre Parallel File System layout]]&lt;br /&gt;
&lt;br /&gt;
= External links =&lt;br /&gt;
{| width=&amp;quot;90%&amp;quot;&lt;br /&gt;
|- valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| width=&amp;quot;30%&amp;quot; |&lt;br /&gt;
* [https://www.wur.nl/en/Value-Creation-Cooperation/Facilities/Wageningen-Shared-Research-Facilities/Our-facilities/Show/High-Performance-Computing-Cluster-HPC-Anunna.htm SRF offers an HPC facility]&lt;br /&gt;
| width=&amp;quot;30%&amp;quot; |&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Scientific_Linux Scientific Linux]&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Help:Cheatsheet Help with editing Wiki pages]&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Haars001</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=Services&amp;diff=2108</id>
		<title>Services</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=Services&amp;diff=2108"/>
		<updated>2021-06-25T12:05:58Z</updated>

		<summary type="html">&lt;p&gt;Haars001: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Several additional services are also attached to Anunna&#039;s environment:&lt;br /&gt;
&lt;br /&gt;
* [https://wiki.anunna.wur.nl This wiki] is what you&#039;re looking at now.&lt;br /&gt;
* [https://mail.anunna.wur.nl Mailman] manages the mailing lists organised for Anunna.&lt;br /&gt;
* [https://galaxy.anunna.wur.nl Galaxy] is a point-and-click friendly way of running genomics pipelines, which offloads the workload onto Anunna.&lt;br /&gt;
* &amp;lt;s&amp;gt;[https://rstudio.anunna.wur.nl R Studio Server] is a web-accessible R interpreter.&amp;lt;/s&amp;gt;&lt;br /&gt;
* [https://notebook.anunna.wur.nl Jupyterhub], the Jupyter notebook server, is a web-accessible environment for many languages: primarily Python, but also R, Octave, Julia, etc.&lt;/div&gt;</summary>
		<author><name>Haars001</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=File_transfer&amp;diff=2105</id>
		<title>File transfer</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=File_transfer&amp;diff=2105"/>
		<updated>2021-02-10T19:10:32Z</updated>

		<summary type="html">&lt;p&gt;Haars001: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== File transfer using ssh-based file transfer protocols ==&lt;br /&gt;
=== Copying files to/from the cluster: scp ===&lt;br /&gt;
&lt;br /&gt;
From any Posix-compliant system (Linux/MacOSX) terminal, files and folders can be transferred to and from the cluster using an ssh-based file copying protocol called scp ([http://en.wikipedia.org/wiki/Secure_copy secure copy]). For instance, copying a folder containing several files from scomp1090/lx6 can be achieved like this:&lt;br /&gt;
&lt;br /&gt;
Syntax of the scp command requires from-to order:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
scp &amp;lt;source&amp;gt; &amp;lt;destination&amp;gt;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
scp -pr /home/WUR/[username]/folder_to_transfer [username]@login.anunna.wur.nl:/lustre/scratch/WUR/ABGC/&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This example assumes a user that is part of the ABGC user group. See the [[Lustre_PFS_layout | Lustre Parallel File System layout]] page for further details. The -p flag will preserve the file metadata such as timestamps. The -r flag allows for recursive copying. Further options can be found in the [http://en.wikipedia.org/wiki/Man_page man pages].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
man scp&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== rsync ===&lt;br /&gt;
The [http://en.wikipedia.org/wiki/Rsync rsync protocol], like the scp protocol, allows CLI-based copying of files. rsync, however, will only transfer the files that have changed between the systems, i.e. it synchronises them, hence the name. This makes rsync very well suited for regular backups and file syncs between file systems. Like the scp command, the syntax is in from-to order.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
rsync &amp;lt;source&amp;gt; &amp;lt;destination&amp;gt;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
e.g.:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
rsync -av /home/WUR/[username]/folder_to_transfer [username]@login.anunna.wur.nl:/lustre/scratch/WUR/ABGC/&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
The -a flag will preserve file metadata and allows for recursive copying, amongst others. The -v flag provides verbose output. Further options can be found in the [http://en.wikipedia.org/wiki/Man_page man pages].&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
man rsync&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== WinSCP ===&lt;br /&gt;
[http://en.wikipedia.org/wiki/WinSCP WinSCP] is a free and open source (S)FTP client for Microsoft Windows. By providing the hostname (login.anunna.wur.nl), your username, and your password, using the SFTP protocol and port 22, you can log in. After login, files can be transferred between a local system (PC) and the cluster.&lt;br /&gt;
&lt;br /&gt;
=== FileZilla ===&lt;br /&gt;
[http://en.wikipedia.org/wiki/Filezilla FileZilla] is a free and open source graphical (S)FTP client. It is available for Linux, MacOSX, and Windows. By providing the address, username, password and server type (Unix, see Site Manager; Advanced), files can be transferred between a local system and the cluster. Furthermore, the graphical interface allows for easy browsing of files on Anunna. Detailed instructions can be found on the [https://wiki.filezilla-project.org/Using FileZilla Wiki].&lt;br /&gt;
&lt;br /&gt;
== Samba/CIFS based protocols ==&lt;br /&gt;
The Common Internet File System ([http://en.wikipedia.org/wiki/Cifs CIFS]) is commonly used in and between Windows systems for file sharing. It is only available to clients within WURnet. &lt;br /&gt;
&lt;br /&gt;
There are two mount points available: &lt;br /&gt;
# your home folder ( &#039;&#039;&#039;\\cifs.anunna.wur.nl\[username]&#039;&#039;&#039; ) &lt;br /&gt;
# the Lustre mount ( &#039;&#039;&#039;\\cifs.anunna.wur.nl\lustre&#039;&#039;&#039; )&lt;br /&gt;
&lt;br /&gt;
You can enter these in the location bar of File Explorer.&lt;br /&gt;
&lt;br /&gt;
== rclone to OneDrive ==&lt;br /&gt;
&lt;br /&gt;
To easily transfer data to OneDrive, one can use &#039;&#039;&#039;&#039;&#039;rclone&#039;&#039;&#039;&#039;&#039;.&lt;br /&gt;
It is usable through the modules system, so you can access it through&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
module load rclone&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To access your own OneDrive space, you will need to configure rclone.&lt;br /&gt;
During configuration, rclone starts a local webserver; we therefore use SSH to create a tunnel to the login server, so that the webserver can be reached from the browser on your own computer.&lt;br /&gt;
&lt;br /&gt;
First, connect to Anunna with the tunnel in place:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
ssh user@login.anunna.wur.nl -L53682:127.0.0.1:53682&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then load the rclone module (see above), and start the configure process by entering&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
rclone config&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You will then have to create a new config (the examples here use remote as the name, but you can choose whatever you want), and select &#039;&#039;&#039;&#039;&#039;onedrive&#039;&#039;&#039;&#039;&#039; as the type.&lt;br /&gt;
Accept the defaults for the next few steps, and choose &#039;&#039;&#039;&#039;&#039;global&#039;&#039;&#039;&#039;&#039; for the region.&lt;br /&gt;
Do not start the advanced config, and do use auto config.&lt;br /&gt;
Copy the URL to your local web browser, and enter your WUR credentials when asked.&lt;br /&gt;
In the next steps you will have to select the OneDrive account type; WUR uses the Business version.&lt;br /&gt;
If all is well, you now have a working config.&lt;br /&gt;
&lt;br /&gt;
To test, do this:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
rclone tree remote:&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If all is well, you see the content of your own OneDrive.&lt;br /&gt;
Now you can create folders, and copy data to and from your OneDrive:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
rclone copy --create-empty-src-dirs --copy-links --progress bin/ remote:test&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
As soon as you are done, please remove your config:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
rclone config delete remote&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Anunna is relatively safe, but better safe than sorry.&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
== See also ==&lt;br /&gt;
* [[Log_in_to_Anunna | Log in to Anunna]]&lt;br /&gt;
* [[ssh_without_password | ssh without password]]&lt;br /&gt;
&lt;br /&gt;
== External links ==&lt;br /&gt;
* [http://winscp.net/eng/index.php WinSCP homepage]&lt;br /&gt;
* [https://filezilla-project.org FileZilla homepage]&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Cifs The Common Internet File System (CIFS) on Wikipedia]&lt;br /&gt;
* [https://rclone.org/onedrive/ Info on adding Microsoft OneDrive from the rclone website]&lt;/div&gt;</summary>
		<author><name>Haars001</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=File_transfer&amp;diff=2104</id>
		<title>File transfer</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=File_transfer&amp;diff=2104"/>
		<updated>2021-02-10T18:14:39Z</updated>

		<summary type="html">&lt;p&gt;Haars001: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== File transfer using ssh-based file transfer protocols ==&lt;br /&gt;
=== Copying files to/from the cluster: scp ===&lt;br /&gt;
&lt;br /&gt;
From any Posix-compliant system (Linux/MacOSX) terminal, files and folders can be transferred to and from the cluster using an ssh-based file copying protocol called scp ([http://en.wikipedia.org/wiki/Secure_copy secure copy]). For instance, copying a folder containing several files from scomp1090/lx6 can be achieved like this:&lt;br /&gt;
&lt;br /&gt;
Syntax of the scp command requires from-to order:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
scp &amp;lt;source&amp;gt; &amp;lt;destination&amp;gt;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
scp -pr /home/WUR/[username]/folder_to_transfer [username]@login.anunna.wur.nl:/lustre/scratch/WUR/ABGC/&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This example assumes a user that is part of the ABGC user group. See the [[Lustre_PFS_layout | Lustre Parallel File System layout]] page for further details. The -p flag will preserve the file metadata such as timestamps. The -r flag allows for recursive copying. Further options can be found in the [http://en.wikipedia.org/wiki/Man_page man pages].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
man scp&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== rsync ===&lt;br /&gt;
The [http://en.wikipedia.org/wiki/Rsync rsync protocol], like the scp protocol, allows CLI-based copying of files. rsync, however, will only transfer the files that have changed between the systems, i.e. it synchronises them, hence the name. This makes rsync very well suited for regular backups and file syncs between file systems. Like the scp command, the syntax is in from-to order.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
rsync &amp;lt;source&amp;gt; &amp;lt;destination&amp;gt;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
e.g.:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
rsync -av /home/WUR/[username]/folder_to_transfer [username]@login.anunna.wur.nl:/lustre/scratch/WUR/ABGC/&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
The -a flag will preserve file metadata and allows for recursive copying, amongst others. The -v flag provides verbose output. Further options can be found in the [http://en.wikipedia.org/wiki/Man_page man pages].&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
man rsync&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== WinSCP ===&lt;br /&gt;
[http://en.wikipedia.org/wiki/WinSCP WinSCP] is a free and open source (S)FTP client for Microsoft Windows. By providing the hostname (login.anunna.wur.nl), your username, and your password, using the SFTP protocol and port 22, you can log in. After login, files can be transferred between a local system (PC) and the cluster.&lt;br /&gt;
&lt;br /&gt;
=== FileZilla ===&lt;br /&gt;
[http://en.wikipedia.org/wiki/Filezilla FileZilla] is a free and open source graphical (S)FTP client. It is available for Linux, MacOSX, and Windows. By providing the address, username, password and server type (Unix, see Site Manager; Advanced), files can be transferred between a local system and the cluster. Furthermore, the graphical interface allows for easy browsing of files on Anunna. Detailed instructions can be found on the [https://wiki.filezilla-project.org/Using FileZilla Wiki].&lt;br /&gt;
&lt;br /&gt;
== Samba/CIFS based protocols ==&lt;br /&gt;
The Common Internet File System ([http://en.wikipedia.org/wiki/Cifs CIFS]) is commonly used in and between Windows systems for file sharing. It is only available to clients within WURnet. &lt;br /&gt;
&lt;br /&gt;
There are two mount points available: &lt;br /&gt;
# your home folder ( &#039;&#039;&#039;\\cifs.anunna.wur.nl\[username]&#039;&#039;&#039; ) &lt;br /&gt;
# the Lustre mount ( &#039;&#039;&#039;\\cifs.anunna.wur.nl\lustre&#039;&#039;&#039; )&lt;br /&gt;
&lt;br /&gt;
You can enter these in the location bar of File Explorer.&lt;br /&gt;
&lt;br /&gt;
== rclone to OneDrive ==&lt;br /&gt;
&lt;br /&gt;
To easily transfer data to OneDrive, one can use &#039;&#039;&#039;&#039;&#039;rclone&#039;&#039;&#039;&#039;&#039;.&lt;br /&gt;
It is usable through the modules system, so you can access it through&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
module load rclone&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To access your own OneDrive space, you will need to configure rclone.&lt;br /&gt;
During configuration, rclone starts a local webserver; we therefore use SSH to create a tunnel to the login server, so that the webserver can be reached from the browser on your own computer.&lt;br /&gt;
&lt;br /&gt;
First, connect to Anunna with the tunnel in place:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
ssh user@login.anunna.wur.nl -L53682:127.0.0.1:53682&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then load the rclone module (see above), and start the configure process by entering&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
rclone config&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You will then have to create a new config (the examples here use remote as the name, but you can choose whatever you want), and select &#039;&#039;&#039;&#039;&#039;onedrive&#039;&#039;&#039;&#039;&#039; as the type.&lt;br /&gt;
Accept the defaults for the next few steps, and choose &#039;&#039;&#039;&#039;&#039;global&#039;&#039;&#039;&#039;&#039; for the region.&lt;br /&gt;
Do not start the advanced config, and do use auto config.&lt;br /&gt;
Copy the URL to your local web browser, and enter your WUR credentials when asked.&lt;br /&gt;
In the next steps you will have to select the OneDrive account type; WUR uses the Business version.&lt;br /&gt;
If all is well, you now have a working config.&lt;br /&gt;
&lt;br /&gt;
To test, do this:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
rclone tree remote:&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If all is well, you see the content of your own OneDrive.&lt;br /&gt;
Now you can create folders, and copy data to and from your OneDrive:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
rclone copy --create-empty-src-dirs --copy-links --progress bin/ remote:test&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
As soon as you are done, please remove your config:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
rclone config delete remote&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Anunna is relatively safe, but better safe than sorry.&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
== See also ==&lt;br /&gt;
* [[Log_in_to_Anunna | Log in to Anunna]]&lt;br /&gt;
* [[ssh_without_password | ssh without password]]&lt;br /&gt;
&lt;br /&gt;
== External links ==&lt;br /&gt;
* [http://winscp.net/eng/index.php WinSCP homepage]&lt;br /&gt;
* [https://filezilla-project.org FileZilla homepage]&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Cifs The Common Internet File System (CIFS) on Wikipedia]&lt;/div&gt;</summary>
		<author><name>Haars001</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=Log_in_to_Anunna&amp;diff=2103</id>
		<title>Log in to Anunna</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=Log_in_to_Anunna&amp;diff=2103"/>
		<updated>2021-02-10T17:55:57Z</updated>

		<summary type="html">&lt;p&gt;Haars001: /* See also */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Log on using ssh ==&lt;br /&gt;
One can log in to [[Anunna | Anunna]] using ssh (default port tcp 22). The address of the login server is:&lt;br /&gt;
  login.anunna.wur.nl&lt;br /&gt;
&lt;br /&gt;
You will be automatically redirected to the currently valid login server. To log on, one has to use an ssh ([http://en.wikipedia.org/wiki/Secure_Shell secure shell]) client. Such a client is available by default on Linux and MacOS systems. For Windows, an ssh client may need to be installed; the most popular ssh client for Windows is [http://www.chiark.greenend.org.uk/~sgtatham/putty/ PuTTY].&lt;br /&gt;
&lt;br /&gt;
Note that access may currently be restricted to certain IP ranges. Furthermore, ssh connections may fail on networks where port 22 is blocked by a firewall.&lt;br /&gt;
&lt;br /&gt;
The ssh-connection can also be configured to work [[ssh_without_password | without password]], which means that no password needs to be provided at each log-in or secure copy attempt.&lt;br /&gt;
&lt;br /&gt;
  &#039;&#039;&#039;IMPORTANT: the Login server can only act as access point and is not to be used for any serious CPU or RAM intensive work.&#039;&#039;&#039; &lt;br /&gt;
  &#039;&#039;&#039;Anything requiring even moderate resources should be [[Using_Slurm  |scheduled using SLURM!]]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== CLI from a Linux/MacOSX terminal ===&lt;br /&gt;
A Command Line Interface ([http://en.wikipedia.org/wiki/Command-line_interface CLI]) ssh client is available from any Linux or MacOSX terminal. Secure shell (ssh) protocols require port 22 to be open. Should a connection be refused, the firewall settings of the system should be checked. Alternatively, local ICT regulations may prohibit the use of port 22. Wageningen UR FB-ICT for instance does not allow traffic through port 22 over WiFi to certain systems.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
ssh [user name]@login.anunna.wur.nl&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== PuTTY on Windows ===&lt;br /&gt;
Putty is a free, powerful, and widely used SSH client that runs on Windows.&lt;br /&gt;
It is extremely useful for those people who have a computer running Windows&lt;br /&gt;
on their desk but must remotely connect to a computer running UNIX/Linux.&lt;br /&gt;
Putty is one of a set of utilities that all work together to provide&lt;br /&gt;
convenient connectivity between Windows and UNIX/Linux environments.&lt;br /&gt;
Some of these utilities include:&lt;br /&gt;
&lt;br /&gt;
* Putty -- the SSH client&lt;br /&gt;
* Pageant -- the authentication agent used with Putty&lt;br /&gt;
* Puttygen -- the RSA key generation utility&lt;br /&gt;
* Pscp -- the SCP secure file copy utility&lt;br /&gt;
&lt;br /&gt;
Depending on your tasks, the above utilities are probably your minimum&lt;br /&gt;
set of tools to make convenient connections and file transfers between a&lt;br /&gt;
computer running Windows and a computer running UNIX/Linux.&lt;br /&gt;
&lt;br /&gt;
==== Putty Configuration ====&lt;br /&gt;
&lt;br /&gt;
Putty is able to store the configuration or connection profiles for a&lt;br /&gt;
number of remote UNIX/Linux clients.  Each profile can be created&lt;br /&gt;
and later edited by right-clicking on a Putty window header and choosing&lt;br /&gt;
&amp;quot;New Session...&amp;quot;.  The minimum set of items that need to be configured for&lt;br /&gt;
a given connection are:&lt;br /&gt;
&lt;br /&gt;
* Session&lt;br /&gt;
** Host Name [login.anunna.wur.nl]&lt;br /&gt;
** Saved Session name [your name for this connection]&lt;br /&gt;
* Terminal&lt;br /&gt;
** Keyboard&lt;br /&gt;
*** Backspace key -&amp;gt; Control-H&lt;br /&gt;
* Connection&lt;br /&gt;
** Data&lt;br /&gt;
*** Auto-login username [your remote username]&lt;br /&gt;
** SSH&lt;br /&gt;
*** Auth&lt;br /&gt;
**** Private key file for authentication [pathname to your .ppk file]&lt;br /&gt;
&lt;br /&gt;
Obviously, there are many other useful things that can be configured and&lt;br /&gt;
customized in Putty but the above list should be considered a minimum.&lt;br /&gt;
Please note that after making any change to a putty session you must&lt;br /&gt;
explicitly save your changes.&lt;br /&gt;
&lt;br /&gt;
==== Creating an SSH Key Pair ====&lt;br /&gt;
&lt;br /&gt;
Puttygen is the utility used for creating both a .ppk file (private&lt;br /&gt;
key) and the public authorized key information.  Briefly, here are&lt;br /&gt;
the steps needed to create a key pair:&lt;br /&gt;
&lt;br /&gt;
* Run (double-click) the Puttygen application&lt;br /&gt;
* Click on &amp;quot;Generate&amp;quot;&lt;br /&gt;
* Replace the comment with something meaningful -- maybe your name&lt;br /&gt;
* Type in your passphrase (password) twice&lt;br /&gt;
* Save the .ppk file in a secure location on your Windows computer&lt;br /&gt;
* Use your mouse to copy the public key string then paste it into the ~/.ssh/authorized_keys file on the remote computer&lt;br /&gt;
&lt;br /&gt;
Note: The full pathname of this .ppk file is used in the last step of Putty&lt;br /&gt;
configuration as described above.&lt;br /&gt;
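The last bullet, installing the public key on the remote side, amounts to appending one line to ~/.ssh/authorized_keys and keeping the permissions strict, since sshd ignores the file when it is group- or world-writable. The key string below is a placeholder, not a real key:&lt;br /&gt;

```shell
# Append a public key exported by Puttygen to the authorized_keys file.
# The key string here is a placeholder; paste your own public key instead.
mkdir -p "$HOME/.ssh"
chmod 700 "$HOME/.ssh"
echo 'ssh-rsa AAAA...placeholder comment-for-this-key' >> "$HOME/.ssh/authorized_keys"
chmod 600 "$HOME/.ssh/authorized_keys"
```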
&lt;br /&gt;
==== Using Pageant as an Interface for Putty ====&lt;br /&gt;
&lt;br /&gt;
Pageant is a Putty helper program that is used for two main purposes:&lt;br /&gt;
&lt;br /&gt;
* Pageant is used to hold the passphrase to your key pair&lt;br /&gt;
* Pageant is used as a convenience application to run a Putty session from any of your saved profiles&lt;br /&gt;
&lt;br /&gt;
There is no configuration needed in Pageant.  You simply need to&lt;br /&gt;
run this program at login.  An easy way to do this is to create a&lt;br /&gt;
shortcut in your startup folder that points to the Pageant executable.&lt;br /&gt;
Once this has been done, every time you log in you will see a little&lt;br /&gt;
icon of a computer with a hat in your taskbar.  The first step in using&lt;br /&gt;
this is to right-click on it and select &amp;quot;Add Key&amp;quot;.  Navigate to your&lt;br /&gt;
.ppk file and select &amp;quot;Open&amp;quot;.  It will prompt you for your passphrase.&lt;br /&gt;
At this point your passphrase has been conveniently stored for you so&lt;br /&gt;
that when you use Putty to connect to your various remote computers,&lt;br /&gt;
you won&#039;t have to type in your passphrase over and over again.&lt;br /&gt;
The next step is to right-click on the Pageant icon again and select&lt;br /&gt;
one of your saved sessions.  If you have done everything correctly&lt;br /&gt;
you will be logged right in so that you no longer have to type your&lt;br /&gt;
passphrase.&lt;br /&gt;
&lt;br /&gt;
== Log on to worker nodes ==&lt;br /&gt;
&lt;br /&gt;
In a complete emergency, it is possible to log on to any of the worker nodes via the login node. Logging on to the worker nodes does not require password authentication, so you should not be prompted for a password. This is not normally allowed: be aware that running tasks outside of SLURM is prohibited (although so far there has not been any serious abuse of this). It is provided only to give you a little more insight into what your job is doing.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
ssh [user name]@[node name]&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For instance:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
ssh dummy001@node049&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Again, it is not permitted to run jobs outside the scheduling software (Slurm), so logging on to a worker node is for analysis of running jobs only.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
* [[Using_Slurm | Submit jobs with Slurm]]&lt;br /&gt;
* [[ssh_without_password | ssh without password]]&lt;br /&gt;
* [[File_transfer | File transfer options]]&lt;br /&gt;
&lt;br /&gt;
== External links ==&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Secure_Shell secure shell on Wikipedia]&lt;br /&gt;
* [http://www.chiark.greenend.org.uk/~sgtatham/putty/ PuTTY homepage]&lt;/div&gt;</summary>
		<author><name>Haars001</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=Log_in_to_Anunna&amp;diff=2102</id>
		<title>Log in to Anunna</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=Log_in_to_Anunna&amp;diff=2102"/>
		<updated>2021-02-10T17:18:09Z</updated>

		<summary type="html">&lt;p&gt;Haars001: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Log on using ssh ==&lt;br /&gt;
One can log in to [[Anunna | Anunna]] using ssh (default TCP port 22). The address of the login server is:&lt;br /&gt;
  login.anunna.wur.nl&lt;br /&gt;
&lt;br /&gt;
You will be automatically redirected to the currently valid login server. To log on one has to use an ssh ([http://en.wikipedia.org/wiki/Secure_Shell secure shell]) client. Such client systems are always available from Linux or MacOS systems. For Windows an ssh-client may need to be installed. The most popular ssh-client for Windows is [http://www.chiark.greenend.org.uk/~sgtatham/putty/ PuTTY].&lt;br /&gt;
&lt;br /&gt;
Note that access may currently be restricted to certain IP-ranges. Furthermore, ssh connections may be impossible from systems where port 22 is blocked by a firewall.&lt;br /&gt;
&lt;br /&gt;
The ssh-connection can also be configured to work [[ssh_without_password | without password]], which means that no password needs to be provided at each log-in or secure copy attempt.&lt;br /&gt;
&lt;br /&gt;
  &#039;&#039;&#039;IMPORTANT: the Login server can only act as access point and is not to be used for any serious CPU or RAM intensive work.&#039;&#039;&#039; &lt;br /&gt;
  &#039;&#039;&#039;Anything requiring even moderate resources should be [[Using_Slurm  |scheduled using SLURM!]]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== CLI from a Linux/MacOSX terminal ===&lt;br /&gt;
A Command Line Interface ([http://en.wikipedia.org/wiki/Command-line_interface CLI]) ssh client is available from any Linux or MacOSX terminal. Secure shell (ssh) protocols require port 22 to be open. Should a connection be refused, the firewall settings of the system should be checked. Alternatively, local ICT regulations may prohibit the use of port 22. Wageningen UR FB-ICT for instance does not allow traffic through port 22 over WiFi to certain systems.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
ssh [user name]@login.anunna.wur.nl&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== PuTTY on Windows ===&lt;br /&gt;
Putty is a free, powerful, and widely used SSH client that runs on Windows.&lt;br /&gt;
It is extremely useful for those people who have a computer running Windows&lt;br /&gt;
on their desk but must remotely connect to a computer running UNIX/Linux.&lt;br /&gt;
Putty is one of a set of utilities that all work together to provide&lt;br /&gt;
convenient connectivity between Windows and UNIX/Linux environments.&lt;br /&gt;
Some of these utilities include:&lt;br /&gt;
&lt;br /&gt;
* Putty -- the SSH client&lt;br /&gt;
* Pageant -- the authentication agent used with Putty&lt;br /&gt;
* Puttygen -- the RSA key generation utility&lt;br /&gt;
* Pscp -- the SCP secure file copy utility&lt;br /&gt;
&lt;br /&gt;
Depending on your tasks, the above utilities are probably your minimum&lt;br /&gt;
set of tools to make convenient connections and file transfers between a&lt;br /&gt;
computer running Windows and a computer running UNIX/Linux.&lt;br /&gt;
&lt;br /&gt;
==== Putty Configuration ====&lt;br /&gt;
&lt;br /&gt;
Putty is able to store the configuration or connection profiles for a&lt;br /&gt;
number of remote UNIX/Linux clients.  Each profile can be created&lt;br /&gt;
and later edited by right-clicking on a Putty window header and choosing&lt;br /&gt;
&amp;quot;New Session...&amp;quot;.  The minimum set of items that need to be configured for&lt;br /&gt;
a given connection are:&lt;br /&gt;
&lt;br /&gt;
* Session&lt;br /&gt;
** Host Name [login.anunna.wur.nl]&lt;br /&gt;
** Saved Session name [your name for this connection]&lt;br /&gt;
* Terminal&lt;br /&gt;
** Keyboard&lt;br /&gt;
*** Backspace key -&amp;gt; Control-H&lt;br /&gt;
* Connection&lt;br /&gt;
** Data&lt;br /&gt;
*** Auto-login username [your remote username]&lt;br /&gt;
** SSH&lt;br /&gt;
*** Auth&lt;br /&gt;
**** Private key file for authentication [pathname to your .ppk file]&lt;br /&gt;
&lt;br /&gt;
Obviously, there are many other useful things that can be configured and&lt;br /&gt;
customized in Putty but the above list should be considered a minimum.&lt;br /&gt;
Please note that after making any change to a putty session you must&lt;br /&gt;
explicitly save your changes.&lt;br /&gt;
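For comparison, the same minimum settings can be expressed as an OpenSSH client configuration (a sketch only; the &amp;quot;anunna&amp;quot; host alias, the username, and the key path are placeholders you would replace with your own values):&lt;br /&gt;

```
Host anunna
    HostName     login.anunna.wur.nl
    User         your_remote_username
    IdentityFile ~/.ssh/id_ed25519
```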
&lt;br /&gt;
==== Creating an SSH Key Pair ====&lt;br /&gt;
&lt;br /&gt;
Puttygen is the utility used for creating both a .ppk file (private&lt;br /&gt;
key) and the public authorized key information.  Briefly, here are&lt;br /&gt;
the steps needed to create a key pair:&lt;br /&gt;
&lt;br /&gt;
* Run (double-click) the Puttygen application&lt;br /&gt;
* Click on &amp;quot;Generate&amp;quot;&lt;br /&gt;
* Replace the comment with something meaningful -- maybe your name&lt;br /&gt;
* Type in your passphrase (password) twice&lt;br /&gt;
* Save the .ppk file in a secure location on your Windows computer&lt;br /&gt;
* Use your mouse to copy the public key string then paste it into the ~/.ssh/authorized_keys file on the remote computer&lt;br /&gt;
&lt;br /&gt;
Note: The full pathname of this .ppk file is used in the last step of Putty&lt;br /&gt;
configuration as described above.&lt;br /&gt;
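The last step of the list above can be sketched in shell form. This is an illustrative sketch only: it uses a scratch directory and a placeholder key string; on the remote computer you would operate on your real home directory and paste your actual public key string.&lt;br /&gt;

```shell
# Illustrative sketch: append a (placeholder) public key line to
# authorized_keys and restrict permissions, using a scratch directory.
DEMO_HOME=$(mktemp -d)                       # stand-in for the remote $HOME
PUBKEY='ssh-rsa AAAAB3...example your-name'  # placeholder, not a real key
mkdir -p "$DEMO_HOME/.ssh"
printf '%s\n' "$PUBKEY" >> "$DEMO_HOME/.ssh/authorized_keys"
chmod 700 "$DEMO_HOME/.ssh"                  # drwx------
chmod 600 "$DEMO_HOME/.ssh/authorized_keys"  # -rw-------
```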
&lt;br /&gt;
==== Using Pageant as an Interface for Putty ====&lt;br /&gt;
&lt;br /&gt;
Pageant is a Putty helper program that is used for two main purposes:&lt;br /&gt;
&lt;br /&gt;
* Pageant is used to hold the passphrase to your key pair&lt;br /&gt;
* Pageant is used as a convenience application to run a Putty session from any of your saved profiles&lt;br /&gt;
&lt;br /&gt;
There is no configuration needed in Pageant.  You simply need to&lt;br /&gt;
run this program at login.  An easy way to do this is to create a&lt;br /&gt;
shortcut in your startup folder that points to the Pageant executable.&lt;br /&gt;
Once this has been done, every time you log in you will see a little&lt;br /&gt;
icon of a computer with a hat in your taskbar.  The first step in using&lt;br /&gt;
this is to right-click on it and select &amp;quot;Add Key&amp;quot;.  Navigate to your&lt;br /&gt;
.ppk file and select &amp;quot;Open&amp;quot;.  It will prompt you for your passphrase.&lt;br /&gt;
At this point your passphrase has been conveniently stored for you so&lt;br /&gt;
that when you use Putty to connect to your various remote computers,&lt;br /&gt;
you won&#039;t have to type in your passphrase over and over again.&lt;br /&gt;
The next step is to right-click on the Pageant icon again and select&lt;br /&gt;
one of your saved sessions.  If you have done everything correctly&lt;br /&gt;
you will be logged right in so that you no longer have to type your&lt;br /&gt;
passphrase.&lt;br /&gt;
&lt;br /&gt;
== Log on to worker nodes ==&lt;br /&gt;
&lt;br /&gt;
In an emergency it is possible to log on to any of the worker nodes via the login node. Logging on to the worker nodes does not require password authentication, so you should not be prompted for a password. This is not normally allowed: running tasks outside of SLURM is prohibited, though so far this has not been seriously abused. This access is provided only to give you a little more insight into what your job is doing.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
ssh [user name]@[node name]&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For instance:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
ssh dummy001@node049&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Again, it is not permitted to run jobs outside the scheduling software (Slurm), so logging on to a worker node is for analysis of running jobs only.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
* [[Using_Slurm | Submit jobs with Slurm]]&lt;br /&gt;
* [[ssh_without_password | ssh without password]]&lt;br /&gt;
&lt;br /&gt;
== External links ==&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Secure_Shell secure shell on Wikipedia]&lt;br /&gt;
* [http://www.chiark.greenend.org.uk/~sgtatham/putty/ PuTTY homepage]&lt;/div&gt;</summary>
		<author><name>Haars001</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=File_transfer&amp;diff=2101</id>
		<title>File transfer</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=File_transfer&amp;diff=2101"/>
		<updated>2021-02-10T17:18:01Z</updated>

		<summary type="html">&lt;p&gt;Haars001: /* File transfer using ssh-based file transfer protocols */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== File transfer using ssh-based file transfer protocols ==&lt;br /&gt;
=== Copying files to/from the cluster: scp ===&lt;br /&gt;
&lt;br /&gt;
From any POSIX-compliant (Linux/MacOSX) terminal, files and folders can be transferred to and from the cluster using an ssh-based file copying protocol called scp ([http://en.wikipedia.org/wiki/Secure_copy secure copy]). For instance, copying a folder containing several files from scomp1090/lx6 can be achieved like this:&lt;br /&gt;
&lt;br /&gt;
The syntax of the scp command is in from-to order:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
scp &amp;lt;source&amp;gt; &amp;lt;destination&amp;gt;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
scp -pr /home/WUR/[username]/folder_to_transfer [username]@login.anunna.wur.nl:/lustre/scratch/WUR/ABGC/&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This example assumes a user that is part of the ABGC user group. See the [[Lustre_PFS_layout | Lustre Parallel File System layout]] page for further details. The -p flag will preserve the file metadata such as timestamps. The -r flag allows for recursive copying. Further options can be found in the [http://en.wikipedia.org/wiki/Man_page man pages].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
man scp&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== rsync ===&lt;br /&gt;
The [http://en.wikipedia.org/wiki/Rsync rsync protocol], like the scp protocol, allows CLI-based copying of files. The rsync protocol, however, will only transfer those files that have changed between the systems, i.e. it synchronises the files, hence the name. This makes rsync very well suited for regular backups and file syncs between file systems. Like the scp command, the syntax is in from-to order.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
rsync &amp;lt;source&amp;gt; &amp;lt;destination&amp;gt;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
e.g.:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
rsync -av /home/WUR/[username]/folder_to_transfer [username]@login.anunna.wur.nl:/lustre/scratch/WUR/ABGC/&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
The -a flag will preserve file metadata and allows for recursive copying, amongst others. The -v flag provides verbose output. Further options can be found in the [http://en.wikipedia.org/wiki/Man_page man pages].&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
man rsync&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== WinSCP ===&lt;br /&gt;
[http://en.wikipedia.org/wiki/WinSCP WinSCP] is a free and open source (S)FTP client for Microsoft Windows. By providing the hostname (login.anunna.wur.nl), your username, and password, using the SFTP protocol and port 22, you can log in. After logging in, files can be transferred between a local system (PC) and the cluster.&lt;br /&gt;
&lt;br /&gt;
=== FileZilla ===&lt;br /&gt;
[http://en.wikipedia.org/wiki/Filezilla FileZilla] is a free and open source graphical (S)FTP client. It is available for Linux, MacOSX, and Windows. By providing the address, username, password and server type (Unix, see Site Manager &amp;gt; Advanced), files can be transferred between a local system and the cluster. Furthermore, the graphical interface allows for easy browsing of files on Anunna. Detailed instructions can be found on the [https://wiki.filezilla-project.org/Using FileZilla Wiki].&lt;br /&gt;
&lt;br /&gt;
== Samba/CIFS based protocols ==&lt;br /&gt;
The Common Internet File System ([http://en.wikipedia.org/wiki/Cifs CIFS]) is commonly used in and between Windows systems for file sharing. It is only available to clients within WURnet.&lt;br /&gt;
&lt;br /&gt;
There are two mount points available:&lt;br /&gt;
# your home folder ( &#039;&#039;&#039;\\cifs.anunna.wur.nl\[username]&#039;&#039;&#039; ) &lt;br /&gt;
# the Lustre mount ( &#039;&#039;&#039;\\cifs.anunna.wur.nl\lustre&#039;&#039;&#039; )&lt;br /&gt;
&lt;br /&gt;
You can enter these in the location bar of File Explorer.&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
* [[Log_in_to_Anunna | Log in to Anunna]]&lt;br /&gt;
* [[ssh_without_password | ssh without password]]&lt;br /&gt;
&lt;br /&gt;
== External links ==&lt;br /&gt;
* [http://winscp.net/eng/index.php WinSCP homepage]&lt;br /&gt;
* [https://filezilla-project.org FileZilla homepage]&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Cifs The Common Internet File System (CIFS) on Wikipedia]&lt;/div&gt;</summary>
		<author><name>Haars001</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=File_transfer&amp;diff=2100</id>
		<title>File transfer</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=File_transfer&amp;diff=2100"/>
		<updated>2021-02-10T17:17:35Z</updated>

		<summary type="html">&lt;p&gt;Haars001: Created page with &amp;quot;== File transfer using ssh-based file transfer protocols == === Copying files to/from the cluster: scp ===  From any Posix-compliant system (Linux/MacOSX) terminal files and f...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== File transfer using ssh-based file transfer protocols ==&lt;br /&gt;
=== Copying files to/from the cluster: scp ===&lt;br /&gt;
&lt;br /&gt;
From any POSIX-compliant (Linux/MacOSX) terminal, files and folders can be transferred to and from the cluster using an ssh-based file copying protocol called scp ([http://en.wikipedia.org/wiki/Secure_copy secure copy]). For instance, copying a folder containing several files from scomp1090/lx6 can be achieved like this:&lt;br /&gt;
&lt;br /&gt;
The syntax of the scp command is in from-to order:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
scp &amp;lt;source&amp;gt; &amp;lt;destination&amp;gt;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
scp -pr /home/WUR/[username]/folder_to_transfer [username]@login.anunna.wur.nl:/lustre/scratch/WUR/ABGC/&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This example assumes a user that is part of the ABGC user group. See the [[Lustre_PFS_layout | Lustre Parallel File System layout]] page for further details. The -p flag will preserve the file metadata such as timestamps. The -r flag allows for recursive copying. Further options can be found in the [http://en.wikipedia.org/wiki/Man_page man pages].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
man scp&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== rsync ===&lt;br /&gt;
The [http://en.wikipedia.org/wiki/Rsync rsync protocol], like the scp protocol, allows CLI-based copying of files. The rsync protocol, however, will only transfer those files that have changed between the systems, i.e. it synchronises the files, hence the name. This makes rsync very well suited for regular backups and file syncs between file systems. Like the scp command, the syntax is in from-to order.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
rsync &amp;lt;source&amp;gt; &amp;lt;destination&amp;gt;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
e.g.:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
rsync -av /home/WUR/[username]/folder_to_transfer [username]@login.anunna.wur.nl:/lustre/scratch/WUR/ABGC/&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
The -a flag will preserve file metadata and allows for recursive copying, amongst others. The -v flag provides verbose output. Further options can be found in the [http://en.wikipedia.org/wiki/Man_page man pages].&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
man rsync&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== WinSCP ===&lt;br /&gt;
[http://en.wikipedia.org/wiki/WinSCP WinSCP] is a free and open source (S)FTP client for Microsoft Windows. By providing the hostname (login.anunna.wur.nl), your username, and password, using the SFTP protocol and port 22, you can log in. After logging in, files can be transferred between a local system (PC) and the cluster.&lt;br /&gt;
&lt;br /&gt;
=== FileZilla ===&lt;br /&gt;
[http://en.wikipedia.org/wiki/Filezilla FileZilla] is a free and open source graphical (S)FTP client. It is available for Linux, MacOSX, and Windows. By providing the address, username, password and server type (Unix, see Site Manager &amp;gt; Advanced), files can be transferred between a local system and the cluster. Furthermore, the graphical interface allows for easy browsing of files on Anunna. Detailed instructions can be found on the [https://wiki.filezilla-project.org/Using FileZilla Wiki].&lt;br /&gt;
&lt;br /&gt;
=== Samba/CIFS based protocols ===&lt;br /&gt;
The Common Internet File System ([http://en.wikipedia.org/wiki/Cifs CIFS]) is commonly used in and between Windows systems for file sharing. It is only available to clients within WURnet.&lt;br /&gt;
&lt;br /&gt;
There are two mount points available:&lt;br /&gt;
# your home folder ( &#039;&#039;&#039;\\cifs.anunna.wur.nl\[username]&#039;&#039;&#039; ) &lt;br /&gt;
# the Lustre mount ( &#039;&#039;&#039;\\cifs.anunna.wur.nl\lustre&#039;&#039;&#039; )&lt;br /&gt;
&lt;br /&gt;
You can enter these in the location bar of File Explorer.&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
* [[Log_in_to_Anunna | Log in to Anunna]]&lt;br /&gt;
* [[ssh_without_password | ssh without password]]&lt;br /&gt;
&lt;br /&gt;
== External links ==&lt;br /&gt;
* [http://winscp.net/eng/index.php WinSCP homepage]&lt;br /&gt;
* [https://filezilla-project.org FileZilla homepage]&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Cifs The Common Internet File System (CIFS) on Wikipedia]&lt;/div&gt;</summary>
		<author><name>Haars001</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=Main_Page&amp;diff=2099</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=Main_Page&amp;diff=2099"/>
		<updated>2021-02-10T17:15:24Z</updated>

		<summary type="html">&lt;p&gt;Haars001: /* Gaining access to Anunna */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Anunna is a [http://en.wikipedia.org/wiki/High-performance_computing High Performance Computer] (HPC) infrastructure hosted by [http://www.wageningenur.nl/nl/activiteit/Opening-High-Performance-Computing-cluster-HPC.htm Wageningen University &amp;amp; Research Centre]. It is open for use for all WUR research groups as well as other organizations, including companies, that have collaborative projects with WUR. &lt;br /&gt;
&lt;br /&gt;
= Using Anunna =&lt;br /&gt;
* [[Tariffs | Costs associated with resource usage]]&lt;br /&gt;
&lt;br /&gt;
== Gaining access to Anunna==&lt;br /&gt;
Access to the cluster and file transfer are traditionally done via [http://en.wikipedia.org/wiki/Secure_Shell SSH and SFTP].&lt;br /&gt;
* [[log_in_to_B4F_cluster | Logging into cluster using ssh]]&lt;br /&gt;
* [[file_transfer | File transfer options]]&lt;br /&gt;
* [[Services | Alternative access methods, and extra features and services on Anunna]]&lt;br /&gt;
* [[Filesystems | Accessible storage methods on Anunna]]&lt;br /&gt;
&lt;br /&gt;
== Access Policy ==&lt;br /&gt;
[[Access_Policy | Main Article: Access Policy]]&lt;br /&gt;
&lt;br /&gt;
Access needs to be granted actively (by creation of an account on the cluster by FB-IT). Use of resources is limited by the scheduler: priority to the system&#039;s resources is regulated by the queues (&#039;partitions&#039;) granted to a user. Note that the use of Anunna is not free of charge. The list price of CPU time and storage, and possible discounts on that list price for your organisation, can be obtained from Shared Research Facilities or FB-IT.&lt;br /&gt;
&lt;br /&gt;
= Events =&lt;br /&gt;
* [[Courses]] that have happened and are happening&lt;br /&gt;
* [[Downtime]] that will affect all users&lt;br /&gt;
* [[Meetings]] that may affect the policies of Anunna&lt;br /&gt;
&lt;br /&gt;
= Other Software =&lt;br /&gt;
&lt;br /&gt;
== Cluster Management Software and Scheduler ==&lt;br /&gt;
Anunna uses Bright Cluster Manager software for overall cluster management, and Slurm as job scheduler.&lt;br /&gt;
* [[BCM_on_B4F_cluster | Monitor cluster status with BCM]]&lt;br /&gt;
* [[Using_Slurm | Submit jobs with Slurm]]&lt;br /&gt;
* [[node_usage_graph | See how much load the cluster is under right now with &#039;node_usage_graph&#039;]]&lt;br /&gt;
* [[SLURM_Compare | Rosetta Stone of Workload Managers]]&lt;br /&gt;
&lt;br /&gt;
== Installation of software by users ==&lt;br /&gt;
&lt;br /&gt;
* [[Domain_specific_software_on_B4Fcluster_installation_by_users | Installing domain specific software: installation by users]]&lt;br /&gt;
* [[Setting local variables]]&lt;br /&gt;
* [[Installing_R_packages_locally | Installing R packages locally]]&lt;br /&gt;
* [[Setting_up_Python_virtualenv | Setting up and using a virtual environment for Python3 ]]&lt;br /&gt;
* [[Virtual_environment_Python_3.4_or_higher | Setting up and using a virtual environment for Python3.4 or higher ]]&lt;br /&gt;
* [[Installing WRF and WPS]]&lt;br /&gt;
* [[Running scripts on a fixed timeschedule (cron)]]&lt;br /&gt;
&lt;br /&gt;
== Installed software ==&lt;br /&gt;
&lt;br /&gt;
* [[Globally_installed_software | Globally installed software]]&lt;br /&gt;
* [[ABGC_modules | ABGC specific modules]]&lt;br /&gt;
&lt;br /&gt;
= Useful Notes = &lt;br /&gt;
&lt;br /&gt;
== Being in control of Environment parameters ==&lt;br /&gt;
&lt;br /&gt;
* [[Using_environment_modules | Using environment modules]]&lt;br /&gt;
* [[Setting local variables]]&lt;br /&gt;
* [[Setting_TMPDIR | Set a custom temporary directory location]]&lt;br /&gt;
* [[Installing_R_packages_locally | Installing R packages locally]]&lt;br /&gt;
* [[Setting_up_Python_virtualenv | Setting up and using a virtual environment for Python3 ]]&lt;br /&gt;
&lt;br /&gt;
== Controlling costs ==&lt;br /&gt;
&lt;br /&gt;
* [[SACCT | using SACCT to see your costs]]&lt;br /&gt;
* [[get_my_bill | using the &amp;quot;get_my_bill&amp;quot; script to estimate costs]]&lt;br /&gt;
&lt;br /&gt;
== Management ==&lt;br /&gt;
Project Leader of Anunna is Stephen Janssen (Wageningen UR,FB-IT, Service Management). [[User:dawes001 | Gwen Dawes (Wageningen UR, FB-IT, Infrastructure)]] and [[User:bexke002 | Stefan Bexkens (Wageningen UR,FB-IT, Infrastructure)]] are responsible for [[Maintenance_and_Management | Maintenance and Management]] of the cluster.&lt;br /&gt;
&lt;br /&gt;
* [[Roadmap | Ambitions regarding innovation, support and administration of Anunna ]]&lt;br /&gt;
&lt;br /&gt;
= Miscellaneous =&lt;br /&gt;
* [[Mailinglist | Electronic mail discussion lists]]&lt;br /&gt;
* [[History_of_the_Cluster | Historical information on the startup of Anunna]]&lt;br /&gt;
* [[Bioinformatics_tips_tricks_workflows | Bioinformatics tips, tricks, and workflows]]&lt;br /&gt;
* [[Parallel_R_code_on_SLURM | Running parallel R code on SLURM]]&lt;br /&gt;
* [[Convert_between_MediaWiki_and_other_formats | Convert between MediaWiki format and other formats]]&lt;br /&gt;
* [[Manual GitLab | GitLab: Create projects and add scripts]]&lt;br /&gt;
* [[Monitoring_executions | Monitoring job execution]]&lt;br /&gt;
* [[Shared_folders | Working with shared folders in the Lustre file system]]&lt;br /&gt;
&lt;br /&gt;
= See also =&lt;br /&gt;
* [[Maintenance_and_Management | Maintenance and Management]]&lt;br /&gt;
* [[BCData | BCData]]&lt;br /&gt;
* [[Mailinglist | Electronic mail discussion lists]]&lt;br /&gt;
* [[About_ABGC | About ABGC]]&lt;br /&gt;
* [[Computer_cluster | High Performance Computing @ABGC]]&lt;br /&gt;
* [[Lustre_PFS_layout | Lustre Parallel File System layout]]&lt;br /&gt;
&lt;br /&gt;
= External links =&lt;br /&gt;
{| width=&amp;quot;90%&amp;quot;&lt;br /&gt;
|- valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| width=&amp;quot;30%&amp;quot; |&lt;br /&gt;
* [https://www.wur.nl/en/Value-Creation-Cooperation/Facilities/Wageningen-Shared-Research-Facilities/Our-facilities/Show/High-Performance-Computing-Cluster-HPC-Anunna.htm SRF offers an HPC facility]&lt;br /&gt;
| width=&amp;quot;30%&amp;quot; |&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Scientific_Linux Scientific Linux]&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Help:Cheatsheet Help with editing Wiki pages]&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Haars001</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=Ssh_without_password&amp;diff=2098</id>
		<title>Ssh without password</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=Ssh_without_password&amp;diff=2098"/>
		<updated>2021-01-08T10:22:53Z</updated>

		<summary type="html">&lt;p&gt;Haars001: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Secure shell (ssh) can be configured to work without a password. This is particularly helpful for machines that are used often.&lt;br /&gt;
&lt;br /&gt;
== Configuring ssh without password from a POSIX-compliant terminal ==&lt;br /&gt;
&lt;br /&gt;
=== Step 1: create a public key and copy to remote computer ===&lt;br /&gt;
* Log into a local Linux or MacOSX computer&lt;br /&gt;
* Type the following to generate the ssh key:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
ssh-keygen -t ed25519 -a 200 -C $USER@$(hostname)&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
* Accept the default key location by pressing &amp;lt;code&amp;gt;Enter&amp;lt;/code&amp;gt;.&lt;br /&gt;
* Please use a different password/passphrase for your SSH key than your WUR password.&lt;br /&gt;
* Secure your authentication keys by restricting the permissions on your home directory, .ssh directory, and authentication files:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
chmod go-wx $HOME&lt;br /&gt;
chmod 700 $HOME/.ssh&lt;br /&gt;
chmod 600 $HOME/.ssh/*&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
* Type the following to copy the key to the remote server (this will prompt for a password).&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
ssh-copy-id remote_username@remote_host&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Configuring ssh without password for Anunna ==&lt;br /&gt;
&lt;br /&gt;
* Create a public key as in Step 1 of the previous section and copy it to Anunna. Note that a public/private key pair needs to be made only once per machine.&lt;br /&gt;
* Similar to step 2 of the previous section, add the public key to the &amp;lt;code&amp;gt;$HOME/.ssh/authorized_keys2&amp;lt;/code&amp;gt; file. There is already a &amp;lt;code&amp;gt;$HOME/.ssh/authorized_keys&amp;lt;/code&amp;gt; present. You may append the key to this file as an alternative, but take care not to remove content that is already there. The cluster is configured so that passwordless communication with all other nodes is the default.&lt;br /&gt;
&lt;br /&gt;
== Configuring ssh without password using PuTTY ==&lt;br /&gt;
Use Puttygen to generate local keys, and Pageant (http://the.earth.li/~sgtatham/putty/0.58/htmldoc/Chapter9.html) to hold them. You&#039;ll want to have a copy of the pubkey in plaintext available.&lt;br /&gt;
&lt;br /&gt;
Make sure to paste that plaintext string into ~/.ssh/authorized_keys as one single line. Chmod the file to 600 (so it shows -rw------- in ls -l) and the .ssh directory to 700 (drwx------).&lt;br /&gt;
&lt;br /&gt;
Now PuTTY will log in without a password whenever Pageant is running.&lt;br /&gt;
&lt;br /&gt;
Finally, get Pageant to load on startup: http://blog.shvetsov.com/2010/03/making-pageant-automatically-load-keys.html&lt;br /&gt;
&lt;br /&gt;
== Configuring ssh without password on a Mac ==&lt;br /&gt;
* Create a public key as in Step 1 of the first section and copy it to Anunna.&lt;br /&gt;
* Add the passphrase that you entered above to the keychain on your mac:&lt;br /&gt;
 ssh-add -K /path/to/private/key/file&lt;br /&gt;
(On recent versions of macOS, use &amp;lt;code&amp;gt;ssh-add --apple-use-keychain&amp;lt;/code&amp;gt; instead of &amp;lt;code&amp;gt;-K&amp;lt;/code&amp;gt;.)&lt;br /&gt;
&lt;br /&gt;
== Selecting which settings to use ==&lt;br /&gt;
&lt;br /&gt;
To have your SSH client use certain settings, you can use a config file at ~/.ssh/config.&lt;br /&gt;
&lt;br /&gt;
For example :&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Host *.wurnet.nl *.wur.nl &lt;br /&gt;
    User                    haars001&lt;br /&gt;
    Compression             no&lt;br /&gt;
    RequestTTY              force&lt;br /&gt;
&lt;br /&gt;
Host *&lt;br /&gt;
    Compression             yes&lt;br /&gt;
    Protocol                2&lt;br /&gt;
    ServerAliveInterval     120&lt;br /&gt;
    ServerAliveCountMax     50&lt;br /&gt;
    TCPKeepAlive            no&lt;br /&gt;
    ConnectTimeout          60&lt;br /&gt;
    IdentityFile ~/.ssh/id_ed25519&lt;br /&gt;
    AddKeysToAgent yes&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As the config file is read top to bottom and the first value found for an option wins, connections to wur(net).nl servers will use no compression, while connections to other servers will.&lt;br /&gt;
More options and settings can be found with `man ssh_config`.&lt;br /&gt;
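To verify which settings actually apply to a given host, recent OpenSSH clients (6.8 and later) can print the effective configuration without making a connection; the host name below is just an example:&lt;br /&gt;

```shell
# Print the effective client configuration for a host (no connection is
# made) and show the compression setting resolved from the config file.
ssh -G login.anunna.wur.nl | grep -i '^compression'
```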
&lt;br /&gt;
== See also ==&lt;br /&gt;
* [[log_in_to_Anunna | Logging into cluster using ssh and file transfer]]&lt;br /&gt;
&lt;br /&gt;
== External Links ==&lt;/div&gt;</summary>
		<author><name>Haars001</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=Ssh_without_password&amp;diff=2097</id>
		<title>Ssh without password</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=Ssh_without_password&amp;diff=2097"/>
		<updated>2021-01-08T10:06:17Z</updated>

		<summary type="html">&lt;p&gt;Haars001: Make ed25519 the default&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Secure shell (ssh) can be configured to work without a password. This is particularly helpful for machines that are used often.&lt;br /&gt;
&lt;br /&gt;
== Configuring ssh without password from a POSIX-compliant terminal ==&lt;br /&gt;
&lt;br /&gt;
=== Step 1: create a public key and copy to remote computer ===&lt;br /&gt;
* Log into a local Linux or MacOSX computer&lt;br /&gt;
* Type the following to generate the ssh key:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
ssh-keygen -t ed25519 -a 200 -C $USER@$(hostname)&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
* Accept the default key location by pressing &amp;lt;code&amp;gt;Enter&amp;lt;/code&amp;gt;.&lt;br /&gt;
* Restrict the permissions on your home directory, .ssh directory, and authentication files to protect your keys&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
chmod go-wx $HOME&lt;br /&gt;
chmod 700 $HOME/.ssh&lt;br /&gt;
chmod 600 $HOME/.ssh/*&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
* Type the following to copy the key to the remote server (this will prompt for a password).&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
ssh-copy-id remote_username@remote_host&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Configuring ssh without password for Anunna ==&lt;br /&gt;
&lt;br /&gt;
* Create a public key as in Step 1 of the previous section and copy it to Anunna. Note that a public/private key pair needs to be made only once per machine.&lt;br /&gt;
* Similar to step 2 of the previous section, add the public key to the &amp;lt;code&amp;gt;$HOME/.ssh/authorized_keys2&amp;lt;/code&amp;gt; file. There is already a &amp;lt;code&amp;gt;$HOME/.ssh/authorized_keys&amp;lt;/code&amp;gt; present. You may append the key to this file as an alternative, but take care not to remove content that is already there. The cluster is configured so that passwordless communication with all other nodes is the default.&lt;br /&gt;
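When appending to an existing authorized_keys file, use the shell's append operator rather than a plain redirect. A self-contained sketch, run here against a scratch directory with made-up key strings so your real ~/.ssh is untouched:

```shell
# Demonstrate a safe append: '>>' adds the new key to the file, while
# a single '>' would have destroyed the existing entries.
# All paths and key strings below are dummies for illustration.
demo=$(mktemp -d)
printf 'ssh-ed25519 AAAAexisting user@old\n' > "$demo/authorized_keys"
printf 'ssh-ed25519 AAAAnew user@new\n'      > "$demo/id_ed25519.pub"
cat "$demo/id_ed25519.pub" >> "$demo/authorized_keys"
chmod 600 "$demo/authorized_keys"
wc -l < "$demo/authorized_keys"    # both keys are now present
```

On the cluster the same pattern would be `cat new_key.pub >> $HOME/.ssh/authorized_keys`.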
&lt;br /&gt;
== Configuring ssh without password using PuTTY ==&lt;br /&gt;
Use Pageant (http://the.earth.li/~sgtatham/putty/0.58/htmldoc/Chapter9.html) to generate local keys. You&#039;ll want to have a copy of the pubkey in plaintext available.&lt;br /&gt;
&lt;br /&gt;
Make sure to paste that plaintext string into ~/.ssh/authorized_keys as one single line. Chmod the file to 600 (so it shows -rw------- in ls -l) and the directory .ssh to 700 (drwx------).&lt;br /&gt;
&lt;br /&gt;
Now PuTTY will log in passwordlessly whenever Pageant is running.&lt;br /&gt;
&lt;br /&gt;
Finally, get Pageant to load on startup: http://blog.shvetsov.com/2010/03/making-pageant-automatically-load-keys.html&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
* [[log_in_to_Anunna | Logging into cluster using ssh and file transfer]]&lt;br /&gt;
&lt;br /&gt;
== External Links ==&lt;/div&gt;</summary>
		<author><name>Haars001</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=Running_scripts_on_a_fixed_timeschedule_(cron)&amp;diff=2096</id>
		<title>Running scripts on a fixed timeschedule (cron)</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=Running_scripts_on_a_fixed_timeschedule_(cron)&amp;diff=2096"/>
		<updated>2020-10-20T13:24:09Z</updated>

		<summary type="html">&lt;p&gt;Haars001: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Using crontab ==&lt;br /&gt;
&lt;br /&gt;
With crontab you can run jobs on a fixed time schedule.&lt;br /&gt;
 &lt;br /&gt;
This means that you can e.g. download some data every day.&lt;br /&gt;
&lt;br /&gt;
To start editing, use &#039;&#039;&#039;crontab -e&#039;&#039;&#039; (and &#039;&#039;&#039;crontab -l&#039;&#039;&#039; to list the current entries); the info on how that file should look can be found by using &#039;&#039;&#039;man 5 crontab&#039;&#039;&#039;.&lt;br /&gt;
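For illustration, a crontab line has five time fields (minute, hour, day of month, month, day of week) followed by the command to run; the script and log paths below are hypothetical:

```
# m   h   dom mon dow  command
# Run a (made-up) download script every night at 03:15 and append
# its output to a log file.
15 3 * * * $HOME/scripts/fetch_data.sh >> $HOME/logs/fetch_data.log 2>&1
```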
&lt;br /&gt;
Be aware of the following :&lt;br /&gt;
&lt;br /&gt;
The scripts will run on the login node, so they should not use a lot of resources.&lt;br /&gt;
The crontab entry will be wiped upon reboot!&lt;/div&gt;</summary>
		<author><name>Haars001</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=Running_scripts_on_a_fixed_timeschedule_(cron)&amp;diff=2095</id>
		<title>Running scripts on a fixed timeschedule (cron)</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=Running_scripts_on_a_fixed_timeschedule_(cron)&amp;diff=2095"/>
		<updated>2020-10-20T13:21:49Z</updated>

		<summary type="html">&lt;p&gt;Haars001: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Using crontab ==&lt;br /&gt;
&lt;br /&gt;
With crontab you can run jobs on a fixed time schedule.&lt;br /&gt;
 &lt;br /&gt;
This means that you can e.g. download some data every day.&lt;br /&gt;
&lt;br /&gt;
To start editing, use &#039;&#039;&#039;crontab -e&#039;&#039;&#039;; the info on that file can be found by using &#039;&#039;&#039;man 5 crontab&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
Be aware of the following :&lt;br /&gt;
&lt;br /&gt;
The scripts will run on the login node, so do not use a lot of resources.&lt;br /&gt;
The crontab entry will be wiped upon reboot, so if you want to make sure that it is there, it is wise to set it up again after each reboot.&lt;/div&gt;</summary>
		<author><name>Haars001</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=Running_scripts_on_a_fixed_timeschedule_(cron)&amp;diff=2094</id>
		<title>Running scripts on a fixed timeschedule (cron)</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=Running_scripts_on_a_fixed_timeschedule_(cron)&amp;diff=2094"/>
		<updated>2020-10-20T13:21:20Z</updated>

		<summary type="html">&lt;p&gt;Haars001: Created page with &amp;quot; == Using crontab ==  With crontab you can run jobs on a fixed timeschedule.   This means that you can e.g. download some data every day.  To start an edit with &amp;#039;&amp;#039;&amp;#039;crontab -l&amp;#039;...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Using crontab ==&lt;br /&gt;
&lt;br /&gt;
With crontab you can run jobs on a fixed timeschedule.&lt;br /&gt;
 &lt;br /&gt;
This means that you can e.g. download some data every day.&lt;br /&gt;
&lt;br /&gt;
To start editing, use &#039;&#039;&#039;crontab -e&#039;&#039;&#039;; the info on that file can be found by using &#039;&#039;&#039;man 5 crontab&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
Be aware of the following :&lt;br /&gt;
&lt;br /&gt;
The scripts will run on the login node, so do not use a lot of resources.&lt;br /&gt;
The crontab entry will be wiped upon reboot, so if you want to make sure that it is there, it is wise to set it up again after each reboot.&lt;/div&gt;</summary>
		<author><name>Haars001</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=Main_Page&amp;diff=2093</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=Main_Page&amp;diff=2093"/>
		<updated>2020-10-20T12:50:35Z</updated>

		<summary type="html">&lt;p&gt;Haars001: /* Installation of software by users */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Anunna is a [http://en.wikipedia.org/wiki/High-performance_computing High Performance Computer] (HPC) infrastructure hosted by [http://www.wageningenur.nl/nl/activiteit/Opening-High-Performance-Computing-cluster-HPC.htm Wageningen University &amp;amp; Research Centre]. It is open for use for all WUR research groups as well as other organizations, including companies, that have collaborative projects with WUR. &lt;br /&gt;
&lt;br /&gt;
= Using Anunna =&lt;br /&gt;
* [[Tariffs | Costs associated with resource usage]]&lt;br /&gt;
&lt;br /&gt;
== Gaining access to Anunna==&lt;br /&gt;
Access to the cluster and file transfer are traditionally done via [http://en.wikipedia.org/wiki/Secure_Shell SSH and SFTP].&lt;br /&gt;
* [[log_in_to_B4F_cluster | Logging into cluster using ssh and file transfer]]&lt;br /&gt;
* [[Services | Alternative access methods, and extra features and services on Anunna]]&lt;br /&gt;
* [[Filesystems | Accessible storage methods on Anunna]]&lt;br /&gt;
&lt;br /&gt;
== Access Policy ==&lt;br /&gt;
[[Access_Policy | Main Article: Access Policy]]&lt;br /&gt;
&lt;br /&gt;
Access needs to be granted actively (by creation of an account on the cluster by FB-IT). Use of resources is limited by the scheduler. Depending on availability of queues (&#039;partitions&#039;) granted to a user, priority to the system&#039;s resources is regulated. Note that the use of Anunna is not free of charge. List price of CPU time and storage, and possible discounts on that list price for your organisation, can be retrieved from Shared Research Facilities or FB-IT.&lt;br /&gt;
&lt;br /&gt;
= Events =&lt;br /&gt;
* [[Courses]] that have happened and are happening&lt;br /&gt;
* [[Downtime]] that will affect all users&lt;br /&gt;
* [[Meetings]] that may affect the policies of Anunna&lt;br /&gt;
&lt;br /&gt;
= Other Software =&lt;br /&gt;
&lt;br /&gt;
== Cluster Management Software and Scheduler ==&lt;br /&gt;
Anunna uses Bright Cluster Manager software for overall cluster management, and Slurm as job scheduler.&lt;br /&gt;
* [[BCM_on_B4F_cluster | Monitor cluster status with BCM]]&lt;br /&gt;
* [[Using_Slurm | Submit jobs with Slurm]]&lt;br /&gt;
* [[node_usage_graph | Be aware of how much work the cluster is under right now with &#039;node_usage_graph&#039;]]&lt;br /&gt;
* [[SLURM_Compare | Rosetta Stone of Workload Managers]]&lt;br /&gt;
&lt;br /&gt;
== Installation of software by users ==&lt;br /&gt;
&lt;br /&gt;
* [[Domain_specific_software_on_B4Fcluster_installation_by_users | Installing domain specific software: installation by users]]&lt;br /&gt;
* [[Setting local variables]]&lt;br /&gt;
* [[Installing_R_packages_locally | Installing R packages locally]]&lt;br /&gt;
* [[Setting_up_Python_virtualenv | Setting up and using a virtual environment for Python3 ]]&lt;br /&gt;
* [[Virtual_environment_Python_3.4_or_higher | Setting up and using a virtual environment for Python3.4 or higher ]]&lt;br /&gt;
* [[Installing WRF and WPS]]&lt;br /&gt;
* [[Running scripts on a fixed timeschedule (cron)]]&lt;br /&gt;
&lt;br /&gt;
== Installed software ==&lt;br /&gt;
&lt;br /&gt;
* [[Globally_installed_software | Globally installed software]]&lt;br /&gt;
* [[ABGC_modules | ABGC specific modules]]&lt;br /&gt;
&lt;br /&gt;
= Useful Notes = &lt;br /&gt;
&lt;br /&gt;
== Being in control of Environment parameters ==&lt;br /&gt;
&lt;br /&gt;
* [[Using_environment_modules | Using environment modules]]&lt;br /&gt;
* [[Setting local variables]]&lt;br /&gt;
* [[Setting_TMPDIR | Set a custom temporary directory location]]&lt;br /&gt;
* [[Installing_R_packages_locally | Installing R packages locally]]&lt;br /&gt;
* [[Setting_up_Python_virtualenv | Setting up and using a virtual environment for Python3 ]]&lt;br /&gt;
&lt;br /&gt;
== Controlling costs ==&lt;br /&gt;
&lt;br /&gt;
* [[SACCT | using SACCT to see your costs]]&lt;br /&gt;
* [[get_my_bill | using the &amp;quot;get_my_bill&amp;quot; script to estimate costs]]&lt;br /&gt;
&lt;br /&gt;
== Management ==&lt;br /&gt;
Project Leader of Anunna is Stephen Janssen (Wageningen UR, FB-IT, Service Management). [[User:dawes001 | Gwen Dawes (Wageningen UR, FB-IT, Infrastructure)]] and [[User:bexke002 | Stefan Bexkens (Wageningen UR, FB-IT, Infrastructure)]] are responsible for [[Maintenance_and_Management | Maintenance and Management]] of the cluster.&lt;br /&gt;
&lt;br /&gt;
* [[Roadmap | Ambitions regarding innovation, support and administration of Anunna ]]&lt;br /&gt;
&lt;br /&gt;
= Miscellaneous =&lt;br /&gt;
* [[Mailinglist | Electronic mail discussion lists]]&lt;br /&gt;
* [[History_of_the_Cluster | Historical information on the startup of Anunna]]&lt;br /&gt;
* [[Bioinformatics_tips_tricks_workflows | Bioinformatics tips, tricks, and workflows]]&lt;br /&gt;
* [[Parallel_R_code_on_SLURM | Running parallel R code on SLURM]]&lt;br /&gt;
* [[Convert_between_MediaWiki_and_other_formats | Convert between MediaWiki format and other formats]]&lt;br /&gt;
* [[Manual GitLab | GitLab: Create projects and add scripts]]&lt;br /&gt;
* [[Monitoring_executions | Monitoring job execution]]&lt;br /&gt;
* [[Shared_folders | Working with shared folders in the Lustre file system]]&lt;br /&gt;
&lt;br /&gt;
= See also =&lt;br /&gt;
* [[Maintenance_and_Management | Maintenance and Management]]&lt;br /&gt;
* [[BCData | BCData]]&lt;br /&gt;
* [[Mailinglist | Electronic mail discussion lists]]&lt;br /&gt;
* [[About_ABGC | About ABGC]]&lt;br /&gt;
* [[Computer_cluster | High Performance Computing @ABGC]]&lt;br /&gt;
* [[Lustre_PFS_layout | Lustre Parallel File System layout]]&lt;br /&gt;
&lt;br /&gt;
= External links =&lt;br /&gt;
{| width=&amp;quot;90%&amp;quot;&lt;br /&gt;
|- valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| width=&amp;quot;30%&amp;quot; |&lt;br /&gt;
* [https://www.wur.nl/en/Value-Creation-Cooperation/Facilities/Wageningen-Shared-Research-Facilities/Our-facilities/Show/High-Performance-Computing-Cluster-HPC-Anunna.htm SRF offers an HPC facility]&lt;br /&gt;
| width=&amp;quot;30%&amp;quot; |&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Scientific_Linux Scientific Linux]&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Help:Cheatsheet Help with editing Wiki pages]&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Haars001</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=Main_Page&amp;diff=2092</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=Main_Page&amp;diff=2092"/>
		<updated>2020-10-20T08:51:04Z</updated>

		<summary type="html">&lt;p&gt;Haars001: /* External links */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Anunna is a [http://en.wikipedia.org/wiki/High-performance_computing High Performance Computer] (HPC) infrastructure hosted by [http://www.wageningenur.nl/nl/activiteit/Opening-High-Performance-Computing-cluster-HPC.htm Wageningen University &amp;amp; Research Centre]. It is open for use for all WUR research groups as well as other organizations, including companies, that have collaborative projects with WUR. &lt;br /&gt;
&lt;br /&gt;
= Using Anunna =&lt;br /&gt;
* [[Tariffs | Costs associated with resource usage]]&lt;br /&gt;
&lt;br /&gt;
== Gaining access to Anunna==&lt;br /&gt;
Access to the cluster and file transfer are traditionally done via [http://en.wikipedia.org/wiki/Secure_Shell SSH and SFTP].&lt;br /&gt;
* [[log_in_to_B4F_cluster | Logging into cluster using ssh and file transfer]]&lt;br /&gt;
* [[Services | Alternative access methods, and extra features and services on Anunna]]&lt;br /&gt;
* [[Filesystems | Accessible storage methods on Anunna]]&lt;br /&gt;
&lt;br /&gt;
== Access Policy ==&lt;br /&gt;
[[Access_Policy | Main Article: Access Policy]]&lt;br /&gt;
&lt;br /&gt;
Access needs to be granted actively (by creation of an account on the cluster by FB-IT). Use of resources is limited by the scheduler. Depending on availability of queues (&#039;partitions&#039;) granted to a user, priority to the system&#039;s resources is regulated. Note that the use of Anunna is not free of charge. List price of CPU time and storage, and possible discounts on that list price for your organisation, can be retrieved from Shared Research Facilities or FB-IT.&lt;br /&gt;
&lt;br /&gt;
= Events =&lt;br /&gt;
* [[Courses]] that have happened and are happening&lt;br /&gt;
* [[Downtime]] that will affect all users&lt;br /&gt;
* [[Meetings]] that may affect the policies of Anunna&lt;br /&gt;
&lt;br /&gt;
= Other Software =&lt;br /&gt;
&lt;br /&gt;
== Cluster Management Software and Scheduler ==&lt;br /&gt;
Anunna uses Bright Cluster Manager software for overall cluster management, and Slurm as job scheduler.&lt;br /&gt;
* [[BCM_on_B4F_cluster | Monitor cluster status with BCM]]&lt;br /&gt;
* [[Using_Slurm | Submit jobs with Slurm]]&lt;br /&gt;
* [[node_usage_graph | Be aware of how much work the cluster is under right now with &#039;node_usage_graph&#039;]]&lt;br /&gt;
* [[SLURM_Compare | Rosetta Stone of Workload Managers]]&lt;br /&gt;
&lt;br /&gt;
== Installation of software by users ==&lt;br /&gt;
&lt;br /&gt;
* [[Domain_specific_software_on_B4Fcluster_installation_by_users | Installing domain specific software: installation by users]]&lt;br /&gt;
* [[Setting local variables]]&lt;br /&gt;
* [[Installing_R_packages_locally | Installing R packages locally]]&lt;br /&gt;
* [[Setting_up_Python_virtualenv | Setting up and using a virtual environment for Python3 ]]&lt;br /&gt;
* [[Virtual_environment_Python_3.4_or_higher | Setting up and using a virtual environment for Python3.4 or higher ]]&lt;br /&gt;
* [[Installing WRF and WPS]]&lt;br /&gt;
&lt;br /&gt;
== Installed software ==&lt;br /&gt;
&lt;br /&gt;
* [[Globally_installed_software | Globally installed software]]&lt;br /&gt;
* [[ABGC_modules | ABGC specific modules]]&lt;br /&gt;
&lt;br /&gt;
= Useful Notes = &lt;br /&gt;
&lt;br /&gt;
== Being in control of Environment parameters ==&lt;br /&gt;
&lt;br /&gt;
* [[Using_environment_modules | Using environment modules]]&lt;br /&gt;
* [[Setting local variables]]&lt;br /&gt;
* [[Setting_TMPDIR | Set a custom temporary directory location]]&lt;br /&gt;
* [[Installing_R_packages_locally | Installing R packages locally]]&lt;br /&gt;
* [[Setting_up_Python_virtualenv | Setting up and using a virtual environment for Python3 ]]&lt;br /&gt;
&lt;br /&gt;
== Controlling costs ==&lt;br /&gt;
&lt;br /&gt;
* [[SACCT | using SACCT to see your costs]]&lt;br /&gt;
* [[get_my_bill | using the &amp;quot;get_my_bill&amp;quot; script to estimate costs]]&lt;br /&gt;
&lt;br /&gt;
== Management ==&lt;br /&gt;
Project Leader of Anunna is Stephen Janssen (Wageningen UR, FB-IT, Service Management). [[User:dawes001 | Gwen Dawes (Wageningen UR, FB-IT, Infrastructure)]] and [[User:bexke002 | Stefan Bexkens (Wageningen UR, FB-IT, Infrastructure)]] are responsible for [[Maintenance_and_Management | Maintenance and Management]] of the cluster.&lt;br /&gt;
&lt;br /&gt;
* [[Roadmap | Ambitions regarding innovation, support and administration of Anunna ]]&lt;br /&gt;
&lt;br /&gt;
= Miscellaneous =&lt;br /&gt;
* [[Mailinglist | Electronic mail discussion lists]]&lt;br /&gt;
* [[History_of_the_Cluster | Historical information on the startup of Anunna]]&lt;br /&gt;
* [[Bioinformatics_tips_tricks_workflows | Bioinformatics tips, tricks, and workflows]]&lt;br /&gt;
* [[Parallel_R_code_on_SLURM | Running parallel R code on SLURM]]&lt;br /&gt;
* [[Convert_between_MediaWiki_and_other_formats | Convert between MediaWiki format and other formats]]&lt;br /&gt;
* [[Manual GitLab | GitLab: Create projects and add scripts]]&lt;br /&gt;
* [[Monitoring_executions | Monitoring job execution]]&lt;br /&gt;
* [[Shared_folders | Working with shared folders in the Lustre file system]]&lt;br /&gt;
&lt;br /&gt;
= See also =&lt;br /&gt;
* [[Maintenance_and_Management | Maintenance and Management]]&lt;br /&gt;
* [[BCData | BCData]]&lt;br /&gt;
* [[Mailinglist | Electronic mail discussion lists]]&lt;br /&gt;
* [[About_ABGC | About ABGC]]&lt;br /&gt;
* [[Computer_cluster | High Performance Computing @ABGC]]&lt;br /&gt;
* [[Lustre_PFS_layout | Lustre Parallel File System layout]]&lt;br /&gt;
&lt;br /&gt;
= External links =&lt;br /&gt;
{| width=&amp;quot;90%&amp;quot;&lt;br /&gt;
|- valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| width=&amp;quot;30%&amp;quot; |&lt;br /&gt;
* [https://www.wur.nl/en/Value-Creation-Cooperation/Facilities/Wageningen-Shared-Research-Facilities/Our-facilities/Show/High-Performance-Computing-Cluster-HPC-Anunna.htm SRF offers an HPC facility]&lt;br /&gt;
| width=&amp;quot;30%&amp;quot; |&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Scientific_Linux Scientific Linux]&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Help:Cheatsheet Help with editing Wiki pages]&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Haars001</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=Main_Page&amp;diff=2091</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=Main_Page&amp;diff=2091"/>
		<updated>2020-10-20T08:48:40Z</updated>

		<summary type="html">&lt;p&gt;Haars001: /* Access Policy */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Anunna is a [http://en.wikipedia.org/wiki/High-performance_computing High Performance Computer] (HPC) infrastructure hosted by [http://www.wageningenur.nl/nl/activiteit/Opening-High-Performance-Computing-cluster-HPC.htm Wageningen University &amp;amp; Research Centre]. It is open for use for all WUR research groups as well as other organizations, including companies, that have collaborative projects with WUR. &lt;br /&gt;
&lt;br /&gt;
= Using Anunna =&lt;br /&gt;
* [[Tariffs | Costs associated with resource usage]]&lt;br /&gt;
&lt;br /&gt;
== Gaining access to Anunna==&lt;br /&gt;
Access to the cluster and file transfer are traditionally done via [http://en.wikipedia.org/wiki/Secure_Shell SSH and SFTP].&lt;br /&gt;
* [[log_in_to_B4F_cluster | Logging into cluster using ssh and file transfer]]&lt;br /&gt;
* [[Services | Alternative access methods, and extra features and services on Anunna]]&lt;br /&gt;
* [[Filesystems | Accessible storage methods on Anunna]]&lt;br /&gt;
&lt;br /&gt;
== Access Policy ==&lt;br /&gt;
[[Access_Policy | Main Article: Access Policy]]&lt;br /&gt;
&lt;br /&gt;
Access needs to be granted actively (by creation of an account on the cluster by FB-IT). Use of resources is limited by the scheduler. Depending on availability of queues (&#039;partitions&#039;) granted to a user, priority to the system&#039;s resources is regulated. Note that the use of Anunna is not free of charge. List price of CPU time and storage, and possible discounts on that list price for your organisation, can be retrieved from Shared Research Facilities or FB-IT.&lt;br /&gt;
&lt;br /&gt;
= Events =&lt;br /&gt;
* [[Courses]] that have happened and are happening&lt;br /&gt;
* [[Downtime]] that will affect all users&lt;br /&gt;
* [[Meetings]] that may affect the policies of Anunna&lt;br /&gt;
&lt;br /&gt;
= Other Software =&lt;br /&gt;
&lt;br /&gt;
== Cluster Management Software and Scheduler ==&lt;br /&gt;
Anunna uses Bright Cluster Manager software for overall cluster management, and Slurm as job scheduler.&lt;br /&gt;
* [[BCM_on_B4F_cluster | Monitor cluster status with BCM]]&lt;br /&gt;
* [[Using_Slurm | Submit jobs with Slurm]]&lt;br /&gt;
* [[node_usage_graph | Be aware of how much work the cluster is under right now with &#039;node_usage_graph&#039;]]&lt;br /&gt;
* [[SLURM_Compare | Rosetta Stone of Workload Managers]]&lt;br /&gt;
&lt;br /&gt;
== Installation of software by users ==&lt;br /&gt;
&lt;br /&gt;
* [[Domain_specific_software_on_B4Fcluster_installation_by_users | Installing domain specific software: installation by users]]&lt;br /&gt;
* [[Setting local variables]]&lt;br /&gt;
* [[Installing_R_packages_locally | Installing R packages locally]]&lt;br /&gt;
* [[Setting_up_Python_virtualenv | Setting up and using a virtual environment for Python3 ]]&lt;br /&gt;
* [[Virtual_environment_Python_3.4_or_higher | Setting up and using a virtual environment for Python3.4 or higher ]]&lt;br /&gt;
* [[Installing WRF and WPS]]&lt;br /&gt;
&lt;br /&gt;
== Installed software ==&lt;br /&gt;
&lt;br /&gt;
* [[Globally_installed_software | Globally installed software]]&lt;br /&gt;
* [[ABGC_modules | ABGC specific modules]]&lt;br /&gt;
&lt;br /&gt;
= Useful Notes = &lt;br /&gt;
&lt;br /&gt;
== Being in control of Environment parameters ==&lt;br /&gt;
&lt;br /&gt;
* [[Using_environment_modules | Using environment modules]]&lt;br /&gt;
* [[Setting local variables]]&lt;br /&gt;
* [[Setting_TMPDIR | Set a custom temporary directory location]]&lt;br /&gt;
* [[Installing_R_packages_locally | Installing R packages locally]]&lt;br /&gt;
* [[Setting_up_Python_virtualenv | Setting up and using a virtual environment for Python3 ]]&lt;br /&gt;
&lt;br /&gt;
== Controlling costs ==&lt;br /&gt;
&lt;br /&gt;
* [[SACCT | using SACCT to see your costs]]&lt;br /&gt;
* [[get_my_bill | using the &amp;quot;get_my_bill&amp;quot; script to estimate costs]]&lt;br /&gt;
&lt;br /&gt;
== Management ==&lt;br /&gt;
Project Leader of Anunna is Stephen Janssen (Wageningen UR, FB-IT, Service Management). [[User:dawes001 | Gwen Dawes (Wageningen UR, FB-IT, Infrastructure)]] and [[User:bexke002 | Stefan Bexkens (Wageningen UR, FB-IT, Infrastructure)]] are responsible for [[Maintenance_and_Management | Maintenance and Management]] of the cluster.&lt;br /&gt;
&lt;br /&gt;
* [[Roadmap | Ambitions regarding innovation, support and administration of Anunna ]]&lt;br /&gt;
&lt;br /&gt;
= Miscellaneous =&lt;br /&gt;
* [[Mailinglist | Electronic mail discussion lists]]&lt;br /&gt;
* [[History_of_the_Cluster | Historical information on the startup of Anunna]]&lt;br /&gt;
* [[Bioinformatics_tips_tricks_workflows | Bioinformatics tips, tricks, and workflows]]&lt;br /&gt;
* [[Parallel_R_code_on_SLURM | Running parallel R code on SLURM]]&lt;br /&gt;
* [[Convert_between_MediaWiki_and_other_formats | Convert between MediaWiki format and other formats]]&lt;br /&gt;
* [[Manual GitLab | GitLab: Create projects and add scripts]]&lt;br /&gt;
* [[Monitoring_executions | Monitoring job execution]]&lt;br /&gt;
* [[Shared_folders | Working with shared folders in the Lustre file system]]&lt;br /&gt;
&lt;br /&gt;
= See also =&lt;br /&gt;
* [[Maintenance_and_Management | Maintenance and Management]]&lt;br /&gt;
* [[BCData | BCData]]&lt;br /&gt;
* [[Mailinglist | Electronic mail discussion lists]]&lt;br /&gt;
* [[About_ABGC | About ABGC]]&lt;br /&gt;
* [[Computer_cluster | High Performance Computing @ABGC]]&lt;br /&gt;
* [[Lustre_PFS_layout | Lustre Parallel File System layout]]&lt;br /&gt;
&lt;br /&gt;
= External links =&lt;br /&gt;
{| width=&amp;quot;90%&amp;quot;&lt;br /&gt;
|- valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| width=&amp;quot;30%&amp;quot; |&lt;br /&gt;
* [http://www.breed4food.com/en/show/Breed4Food-initiative-reinforces-the-Netherlands-position-as-an-innovative-country-in-animal-breeding-and-genomics.htm Breed4Food programme]&lt;br /&gt;
* [http://www.wageningenur.nl/en/Expertise-Services/Facilities/CATAgroFood-3/CATAgroFood-3/Our-facilities/Show/High-Performance-Computing-Cluster-HPC.htm CATAgroFood offers an HPC facility]&lt;br /&gt;
* [http://www.cobb-vantress.com Cobb-Vantress homepage]&lt;br /&gt;
&lt;br /&gt;
| width=&amp;quot;30%&amp;quot; |&lt;br /&gt;
* [https://www.crv4all.nl CRV homepage]&lt;br /&gt;
* [http://www.hendrix-genetics.com Hendrix Genetics homepage]&lt;br /&gt;
* [http://www.topigs.com TOPIGS homepage]&lt;br /&gt;
| width=&amp;quot;30%&amp;quot; |&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Scientific_Linux Scientific Linux]&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Help:Cheatsheet Help with editing Wiki pages]&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Haars001</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=Tariffs&amp;diff=2090</id>
		<title>Tariffs</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=Tariffs&amp;diff=2090"/>
		<updated>2020-10-20T08:41:54Z</updated>

		<summary type="html">&lt;p&gt;Haars001: /* Notes */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Computing: Calculations (cores)==&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Queue&lt;br /&gt;
!CPU core hour&lt;br /&gt;
!GB memory hour&lt;br /&gt;
|-&lt;br /&gt;
|Standard queue&lt;br /&gt;
|€ 0.0150&lt;br /&gt;
|€ 0.0015&lt;br /&gt;
|-&lt;br /&gt;
|High priority queue&lt;br /&gt;
|€ 0.0200&lt;br /&gt;
|€ 0.0020&lt;br /&gt;
|-&lt;br /&gt;
|Low priority queue&lt;br /&gt;
|€ 0.0100&lt;br /&gt;
|€ 0.0010&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Computing: GPU Use==&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Tariff per device per hour (gpu/hour)&lt;br /&gt;
|-&lt;br /&gt;
|€ 0.3000&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Storage ==&lt;br /&gt;
Tariffs per year per TB&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Lustre Backup&lt;br /&gt;
!Lustre Nobackup&lt;br /&gt;
!Lustre Scratch&lt;br /&gt;
!Home-dir&lt;br /&gt;
!Archive&lt;br /&gt;
|-&lt;br /&gt;
|€ 175&lt;br /&gt;
|€ 125&lt;br /&gt;
|€ 125&lt;br /&gt;
|€ 175&lt;br /&gt;
|€ 125&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Reservations ==&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Tariff per node per day (node/day)&lt;br /&gt;
|-&lt;br /&gt;
|€ 30&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Notes==&lt;br /&gt;
&lt;br /&gt;
If you are a member of a group with a commitment, then these costs get deducted from that commitment. &lt;br /&gt;
Once you get to around 125% of your commitment we will take action to fix things.&lt;br /&gt;
&lt;br /&gt;
== Example ==&lt;br /&gt;
&lt;br /&gt;
You are running a job that needs 4 cores, 32G of RAM, and runs for 90 minutes in the standard queue. To run this, you over-request resources slightly and submit a job that requests 4 CPUs, 40G of RAM, and a time limit of 3 hours. Your job terminates early, so you are billed for the requested resources over the actual 1.5 hours of runtime:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
4 * 0.015 * 1.5 = 0.09 EUR for the CPU&lt;br /&gt;
&lt;br /&gt;
40 * 0.0015 * 1.5 = 0.09 EUR for the memory&lt;br /&gt;
&lt;br /&gt;
Total: 0.18 EUR&lt;/div&gt;</summary>
		<author><name>Haars001</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=Shared_folders&amp;diff=2087</id>
		<title>Shared folders</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=Shared_folders&amp;diff=2087"/>
		<updated>2020-09-04T07:38:29Z</updated>

		<summary type="html">&lt;p&gt;Haars001: /* ACL shared directories */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Working with shared folders in the Lustre file system ==&lt;br /&gt;
&lt;br /&gt;
If you work in a group or team and use large volumes of data, it is useful to work within a shared space. Users can thus share inputs to their models and also make their outputs easily available. This article explains how to do so within the Lustre file system that currently underpins Anunna.&lt;br /&gt;
&lt;br /&gt;
There are two main methods available to you: Access Control List (ACL) access (that you can administer yourself) or group access (that is centrally administered).&lt;br /&gt;
&lt;br /&gt;
== ACL shared directories ==&lt;br /&gt;
You may create a folder that can be accessed by yourself and someone else in the following manner:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
cd /lustre/shared&lt;br /&gt;
mkdir shared_folder&lt;br /&gt;
chmod 700 shared_folder&lt;br /&gt;
setfacl -R -m u:my_id:rwx shared_folder&lt;br /&gt;
setfacl -R -d -m u:my_id:rwx shared_folder&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then, for each person who you want to have access to this:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
setfacl -R -m u:my_friend:rwx shared_folder&lt;br /&gt;
setfacl -R -d -m u:my_friend:rwx shared_folder&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Adding users later can be done using the same method, but it can be problematic.&lt;br /&gt;
You may have trouble updating ACLs on files that aren&#039;t yours, and you cannot change ownership of files to yourself.&lt;br /&gt;
Each user with files in the folder will need to update their own ACLs appropriately, or you can contact your sysadmins to assist.&lt;br /&gt;
&lt;br /&gt;
== Group shared directories ==&lt;br /&gt;
&lt;br /&gt;
Users access the Anunna cluster with their WUR-wide account. This means that all the membership information is also available on Anunna. To check which groups your user is a member of, use the following command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;groups &amp;lt;username&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This can result in a rather long list, reflecting permissions in the overall WUR systems. Within these groups you must then identify the one that most closely matches the team or group with which you wish to collaborate.&lt;br /&gt;
&lt;br /&gt;
For instance, if I wish to work together with colleagues at ISRIC, I can search within my groups for an appropriate match:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;groups duque004 | grep isric&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In my case the group des-isric-users looked appropriate. The next step is to confirm that the other users in my team are also members of this group.&lt;br /&gt;
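Group membership can be checked for any colleague as well; a quick sketch (the group and user names are the examples from this article):&lt;br /&gt;

```shell
# List all members of the candidate group
getent group des-isric-users

# Or list one specific colleague's groups, one per line
groups duque004 | tr ' ' '\n'
```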
&lt;br /&gt;
=== Creating a shared folder with correct permissions ===&lt;br /&gt;
&lt;br /&gt;
The Lustre file system is accessible in the &amp;lt;code&amp;gt;/lustre&amp;lt;/code&amp;gt; folder and then divided into the &amp;lt;code&amp;gt;/backup&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;/nobackup&amp;lt;/code&amp;gt; sections (corresponding to the different usage plans). Inside each of these folders there is a sub-folder named &amp;lt;code&amp;gt;SHARED&amp;lt;/code&amp;gt; in which users are to create their own assets.&lt;br /&gt;
&lt;br /&gt;
You start by creating a folder in this space; it is probably better if it matches the name of your group or team, e.g.:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;mkdir /lustre/nobackup/SHARED/myTeamWorkspace&amp;lt;/code&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Or alternatively:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;cd /lustre/nobackup/SHARED&lt;br /&gt;
&lt;br /&gt;
mkdir myTeamWorkspace&amp;lt;/code&amp;gt; &lt;br /&gt;
&lt;br /&gt;
=== Setting permissions ===&lt;br /&gt;
&lt;br /&gt;
Three basic steps are involved in setting permissions correctly:&lt;br /&gt;
&lt;br /&gt;
1. Transfer group ownership of the folder to the team group. In the example below it is applied recursively to all sub-folders and files that may already exist:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;chgrp -R my-team-group myTeamWorkspace&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
2. Grant read/write permissions to the group. This allows other members of the group to read and write in the shared folder. If you wish other team members to only read from the folder, then remove the &amp;lt;code&amp;gt;w&amp;lt;/code&amp;gt; character from the &amp;lt;code&amp;gt;+rw&amp;lt;/code&amp;gt; bit:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;chmod -R g+rw myTeamWorkspace&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
3. Set the setgid bit on the folder. This guarantees that any new files or folders created within the shared folder are by default owned by your team group:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;chmod -R g+s myTeamWorkspace&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In case the contents of the shared folder are sensitive or private, and should only be accessed by your team, you can block access for all other users with the following command (note the &amp;lt;code&amp;gt;x&amp;lt;/code&amp;gt; bit is removed as well, so others cannot even enter the folder):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;chmod -R o-rwx myTeamWorkspace&amp;lt;/code&amp;gt;&lt;br /&gt;
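The end result can be verified with &amp;lt;code&amp;gt;ls -ld&amp;lt;/code&amp;gt;; this is only a check, no permissions are changed:&lt;br /&gt;

```shell
# In the mode string, an 's' in the group triplet confirms the setgid
# bit is set, and the group column should show your team group
ls -ld myTeamWorkspace
```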
&lt;br /&gt;
== Further reading ==&lt;br /&gt;
&lt;br /&gt;
[https://www.digitalocean.com/community/tutorials/an-introduction-to-linux-permissions An Introduction to Linux Permissions]&lt;br /&gt;
&lt;br /&gt;
[https://www.linode.com/docs/tools-reference/linux-users-and-groups/ Linux Users and Groups]&lt;br /&gt;
&lt;br /&gt;
[https://www.digitalocean.com/community/tutorials/linux-permissions-basics-and-how-to-use-umask-on-a-vps#types-of-permissions Linux Permissions Basics and How to Use Umask on a VPS]&lt;br /&gt;
&lt;br /&gt;
[http://www.yolinux.com/TUTORIALS/LinuxTutorialManagingGroups.html Linux Tutorial - Managing Group Access on Linux and UNIX]&lt;/div&gt;</summary>
		<author><name>Haars001</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=Using_conda_to_install_a_new_kernel_into_your_notebook&amp;diff=2086</id>
		<title>Using conda to install a new kernel into your notebook</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=Using_conda_to_install_a_new_kernel_into_your_notebook&amp;diff=2086"/>
		<updated>2020-07-10T14:42:51Z</updated>

		<summary type="html">&lt;p&gt;Haars001: Created page with &amp;quot;As you can read here, you can add a new kernel to your Jupyter manually.  If you use conda, the following steps...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;As you can read [[Setting_up_Python_virtualenv#Virtualenv_kernels_in_Jupyter|here]], you can add a new kernel to your Jupyter manually.&lt;br /&gt;
&lt;br /&gt;
If you use conda, the following steps can be used to create the necessary files and share your environment:&lt;br /&gt;
&lt;br /&gt;
Install:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&lt;br /&gt;
conda create -y -n kernel_test python=3 ipykernel &amp;amp;&amp;amp; conda activate kernel_test&lt;br /&gt;
python -m ipykernel install --user --name kernel_test&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Cleanup (make sure you have activated the right environment):&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&lt;br /&gt;
 jupyter kernelspec uninstall kernel_test&lt;br /&gt;
 conda deactivate &amp;amp;&amp;amp; conda remove -y -n kernel_test --all&amp;lt;/nowiki&amp;gt;&lt;/div&gt;</summary>
		<author><name>Haars001</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=Setting_up_Python_virtualenv&amp;diff=2085</id>
		<title>Setting up Python virtualenv</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=Setting_up_Python_virtualenv&amp;diff=2085"/>
		<updated>2020-07-10T14:36:54Z</updated>

		<summary type="html">&lt;p&gt;Haars001: Add link to page describing conda alternative&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;With many Python packages available, often conflicting or requiring different versions depending on the application, installing and controlling packages and versions is not always easy. Moreover, many packages are used only occasionally, so it is questionable whether a system administrator of a centralized server system or a High Performance Compute (HPC) infrastructure can be expected to resolve all issues posed by its users. Even on a local system with full administrative rights, managing versions, dependencies, and package collisions is often very difficult. The solution is to use a virtual environment, in which a specific set of packages can be installed. As many different virtual environments as necessary can be created and used side by side. &lt;br /&gt;
&lt;br /&gt;
NOTE: as of Python 3.3 virtual environment support is built-in. See this page for an [[virtual_environment_Python_3.4_or_higher | alternative set-up of your virtual environment if using Python 3.4 or higher]].&lt;br /&gt;
&lt;br /&gt;
== Creating a new virtual environment ==&lt;br /&gt;
It is assumed that the appropriate &amp;lt;code&amp;gt;virtualenv&amp;lt;/code&amp;gt; executable for the Python version of choice is installed. A new virtual environment, in this case called &amp;lt;code&amp;gt;newenv&amp;lt;/code&amp;gt; is created like so:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
module load python/my-favourite-version  # e.g. python/2.7.12&lt;br /&gt;
virtualenv newenv&lt;br /&gt;
# or, for Python versions &amp;gt;= 3.4:&lt;br /&gt;
pyvenv newenv&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When the new environment is created, one will see a message similar to this:&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;  New python executable in newenv/bin/python3&lt;br /&gt;
  Also creating executable in newenv/bin/python&lt;br /&gt;
  Installing Setuptools.........................................................................done.&lt;br /&gt;
  Installing Pip................................................................................done.&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Activating a virtual environment ==&lt;br /&gt;
Once the environment is created, each time the environment needs to be activated, the following command needs to be issued:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
source newenv/bin/activate&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
This assumes that the folder that contains the virtual environment files (in this case called &amp;lt;code&amp;gt;newenv&amp;lt;/code&amp;gt;) is in the present working directory.&lt;br /&gt;
When working in the virtual environment, its name will appear between brackets in front of the &amp;lt;code&amp;gt;user-host-prompt&amp;lt;/code&amp;gt; string.&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;  (newenv)user@host:~$&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Installing modules on the virtual environment ==&lt;br /&gt;
Installing modules works the same as usual. The difference is that modules end up in &amp;lt;code&amp;gt;/path/to/virtenv/lib&amp;lt;/code&amp;gt;, which may live somewhere in your home directory. When working from the virtual environment, the default &amp;lt;code&amp;gt;pip&amp;lt;/code&amp;gt; belongs to the Python version that is currently active. This is because the executables in &amp;lt;code&amp;gt;/path/to/virtenv/bin&amp;lt;/code&amp;gt; are in fact first in the &amp;lt;code&amp;gt;$PATH&amp;lt;/code&amp;gt;.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
pip install numpy&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
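You can confirm that &amp;lt;code&amp;gt;pip&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;python&amp;lt;/code&amp;gt; really resolve to the virtual environment; a small check (the printed paths will differ per system):&lt;br /&gt;

```shell
# Both should point into the newenv/bin directory
which python pip

# The interpreter's prefix should be the virtualenv folder itself
python -c 'import sys; print(sys.prefix)'
```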
Similarly, installing packages from source works exactly the same as usual.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
python setup.py install&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Deactivating a virtual environment ==&lt;br /&gt;
Quitting a virtual environment is done with the command &amp;lt;code&amp;gt;deactivate&amp;lt;/code&amp;gt;, which was defined when the environment was activated with the &amp;lt;code&amp;gt;source&amp;lt;/code&amp;gt; command.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
deactivate&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Virtualenv kernels in Jupyter ==&lt;br /&gt;
Want your own virtualenv kernel in a notebook? This can be done by making your own kernel specifications:&lt;br /&gt;
&lt;br /&gt;
(an alternative way to the manual way (using conda) is described [[Using conda to install a new kernel into your notebook|here ]])&lt;br /&gt;
&lt;br /&gt;
* Make sure you have the ipykernel module in your venv. Activate it and pip install it:&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;source ~/path/to/my/virtualenv/bin/activate &amp;amp;&amp;amp; pip install ipykernel&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
* Create the following directory path in your homedir if it doesn&#039;t already exist:&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;mkdir -p ~/.local/share/jupyter/kernels/&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
* Think of a nice descriptive name that doesn&#039;t clash with one of the already present kernels. I&#039;ll use &#039;testing&#039;. Create this folder:&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;mkdir ~/.local/share/jupyter/kernels/testing/&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
* Add this file to this folder:&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;vi ~/.local/share/jupyter/kernels/testing/kernel.json &lt;br /&gt;
{&lt;br /&gt;
 &amp;quot;language&amp;quot;: &amp;quot;python&amp;quot;,&lt;br /&gt;
 &amp;quot;argv&amp;quot;: [&lt;br /&gt;
  &amp;quot;/home/myhome/path/to/my/virtualenv/bin/python&amp;quot;,&lt;br /&gt;
  &amp;quot;-m&amp;quot;,&lt;br /&gt;
  &amp;quot;ipykernel&amp;quot;,&lt;br /&gt;
  &amp;quot;-f&amp;quot;,&lt;br /&gt;
  &amp;quot;{connection_file}&amp;quot;&lt;br /&gt;
 ],&lt;br /&gt;
 &amp;quot;display_name&amp;quot;: &amp;quot;testing&amp;quot;&lt;br /&gt;
}&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
* Reload the JupyterHub page. The kernel &#039;testing&#039; should now appear in your kernels list.&lt;br /&gt;
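If the kernel does not show up, a common cause is a typo in the JSON file. It can be sanity-checked like this (same path as in the example above):&lt;br /&gt;

```shell
# Prints the parsed file on success, or a parse error with the line number
python -m json.tool ~/.local/share/jupyter/kernels/testing/kernel.json
```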
&lt;br /&gt;
You can do more complex things with this, such as construct your own Spark environment. This relies on having the module findspark installed:&lt;br /&gt;
 &amp;lt;nowiki&amp;gt; vi ~/.local/share/jupyter/kernels/mysparkkernel/kernel.json &lt;br /&gt;
{&lt;br /&gt;
 &amp;quot;language&amp;quot;: &amp;quot;python&amp;quot;,&lt;br /&gt;
 &amp;quot;env&amp;quot;: {&lt;br /&gt;
   &amp;quot;SPARK_HOME&amp;quot;:&lt;br /&gt;
     &amp;quot;/cm/shared/apps/spark/my-spark-version&amp;quot;&lt;br /&gt;
 },&lt;br /&gt;
 &amp;quot;argv&amp;quot;: [&lt;br /&gt;
  &amp;quot;/home/myhome/my/spark/venv/bin/python&amp;quot;,&lt;br /&gt;
  &amp;quot;-m&amp;quot;,&lt;br /&gt;
  &amp;quot;ipykernel&amp;quot;,&lt;br /&gt;
  &amp;quot;-c&amp;quot;, &amp;quot;import findspark; findspark.init()&amp;quot;,&lt;br /&gt;
  &amp;quot;-f&amp;quot;,&lt;br /&gt;
  &amp;quot;{connection_file}&amp;quot;&lt;br /&gt;
 ],&lt;br /&gt;
 &amp;quot;display_name&amp;quot;: &amp;quot;My Spark kernel&amp;quot;&lt;br /&gt;
}&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
(You&#039;ll want to make sure your spark cluster has the same environment - start it after activating this venv inside your sbatch script)&lt;br /&gt;
&lt;br /&gt;
== Make IPython work under virtualenv ==&lt;br /&gt;
IPython may not work initially under a virtual environment. It may produce an error message like the one below:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;    File &amp;quot;/usr/bin/ipython&amp;quot;, line 11&lt;br /&gt;
    print &amp;quot;Could not start qtconsole. Please install ipython-qtconsole&amp;quot;&lt;br /&gt;
                                                                      ^&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This can be resolved by adding a soft link with the name &amp;lt;code&amp;gt;ipython&amp;lt;/code&amp;gt; to the &amp;lt;code&amp;gt;bin&amp;lt;/code&amp;gt; directory in the virtual environment folder.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
ln -s /path/to/virtenv/bin/ipython3 /path/to/virtenv/bin/ipython&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== External links ==&lt;br /&gt;
* [https://pypi.python.org/pypi/virtualenv Python3 documentation for virtualenv]&lt;br /&gt;
* [http://cemcfarland.wordpress.com/2013/03/09/getting-ipython3-working-inside-your-virtualenv/ Solving the IPython hickup under virtual environment]&lt;/div&gt;</summary>
		<author><name>Haars001</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=Using_environment_modules&amp;diff=2084</id>
		<title>Using environment modules</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=Using_environment_modules&amp;diff=2084"/>
		<updated>2020-07-10T14:18:09Z</updated>

		<summary type="html">&lt;p&gt;Haars001: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== Environment Modules ===&lt;br /&gt;
[http://modules.sourceforge.net/ Environment modules] are a simple way to allow multiple, potentially clashing programs to coexist on a large shared machine such as an HPC cluster. They let a user specify exactly which programs, and even which version of each program, are loaded, while allowing the administrator to automatically configure the appropriate environment variables for the system itself.&lt;br /&gt;
&lt;br /&gt;
== Viewing Modules ==&lt;br /&gt;
Upon logging in to Anunna, you should find that when you do:&lt;br /&gt;
  module list&lt;br /&gt;
&lt;br /&gt;
You will see something like this:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
-bash-4.1$ module list&lt;br /&gt;
Currently Loaded Modulefiles:&lt;br /&gt;
  1) shared        2) slurm/2.5.7&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This is a list of all modules loaded in your shell session. To get a list of all available modules, simply run&lt;br /&gt;
   module avail&lt;br /&gt;
&lt;br /&gt;
And this will show you the (very exhaustive) list of modules on Anunna:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source  lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
-bash-4.1$ module avail&lt;br /&gt;
&lt;br /&gt;
---------------------------- /cm/shared/modulefiles ----------------------------&lt;br /&gt;
acml/gcc/64/5.3.1                     netcdf/gcc/64/4.1.3&lt;br /&gt;
acml/gcc/fma4/5.3.1                   netcdf/gcc/64/4.3.0&lt;br /&gt;
acml/gcc/mp/64/5.3.1                  netcdf/gcc/64/4.3.2&lt;br /&gt;
acml/gcc/mp/fma4/5.3.1                netcdf/gcc/64/4.3.3&lt;br /&gt;
acml/gcc-int64/64/5.3.1               netcdf/gcc/64/4.3.3.1&lt;br /&gt;
acml/gcc-int64/fma4/5.3.1             netcdf/intel/64/4.1.3&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Let&#039;s look at each of these module names. Each module is named after the application it provides, plus sub-paths for the compiler it was built with (if compiled), the number of address bits or build options (if compiled), and the version.&lt;br /&gt;
&lt;br /&gt;
If you want to see the available versions of a specific module, you can run&lt;br /&gt;
  module avail netcdf&lt;br /&gt;
&lt;br /&gt;
And the complete list of versions will be shown.&lt;br /&gt;
&lt;br /&gt;
== Loading Modules ==&lt;br /&gt;
To load a module, simply&lt;br /&gt;
  module load foo&lt;br /&gt;
&lt;br /&gt;
And the most recent version of module foo will be loaded automatically. If foo is compiled, the gcc build will be selected by default. If you want to specify a certain version, then&lt;br /&gt;
  module load foo/gcc/64/1.0.0&lt;br /&gt;
&lt;br /&gt;
will load foo version 1.0.0, compiled with gcc. Be advised that this may not always work, as some modules are not compatible with each other; a message will be shown if this is the case. Additionally, some modules automatically load the other modules they need in order to operate.&lt;br /&gt;
&lt;br /&gt;
== Unloading Modules ==&lt;br /&gt;
If you want to remove a module that you&#039;ve loaded, then&lt;br /&gt;
  module unload foo&lt;br /&gt;
&lt;br /&gt;
This will unload all loaded versions of module foo.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Example ==&lt;br /&gt;
Consider this simple python3 script that should calculate Pi to 1 million digits:&lt;br /&gt;
&amp;lt;source lang=&#039;python&#039;&amp;gt;&lt;br /&gt;
from decimal import *&lt;br /&gt;
D=Decimal&lt;br /&gt;
getcontext().prec=1000010&lt;br /&gt;
p=sum(D(1)/16**k*(D(4)/(8*k+1)-D(2)/(8*k+4)-D(1)/(8*k+5)-D(1)/(8*k+6))for k in range(831000))&lt;br /&gt;
print(str(p)[:1000002])&lt;br /&gt;
&amp;lt;/source&amp;gt; &lt;br /&gt;
&lt;br /&gt;
This script will not run at all under the default Python 2.4 on the cluster; it requires Python 3. To switch, first list all available versions of Python:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
-bash-4.1$ module avail python&lt;br /&gt;
&lt;br /&gt;
---------------------------- /cm/shared/modulefiles ----------------------------&lt;br /&gt;
python/2.7.6 python/3.3.3 python/3.4.2&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then you can load the specific version you need:&lt;br /&gt;
  module load python/3.3.3&lt;br /&gt;
&lt;br /&gt;
Now you have access to the &amp;lt;code&amp;gt;python3&amp;lt;/code&amp;gt; executable.&lt;br /&gt;
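To confirm that the module actually changed your environment, check where the executable now resolves from:&lt;br /&gt;

```shell
# The module system prepends its bin directory to $PATH
command -v python3
python3 --version
```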
&lt;br /&gt;
== See also ==&lt;br /&gt;
* [[Environment_Modules | Environment Modules]]&lt;br /&gt;
* [[Control_R_environment_using_modules | Control R environment using modules]]&lt;br /&gt;
* [[Create_shortcut_log-in_command | Create a shortcut for the ssh log-in command]]&lt;br /&gt;
* [[Installing_R_packages_locally | Installing R packages locally]]&lt;br /&gt;
&lt;br /&gt;
== External links ==&lt;br /&gt;
* http://modules.sourceforge.net &lt;br /&gt;
* https://modules.readthedocs.io/en/latest/ (documentation)&lt;br /&gt;
* http://www.admin-magazine.com/HPC/Articles/Environment-Modules&lt;/div&gt;</summary>
		<author><name>Haars001</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=Access_Policy&amp;diff=2083</id>
		<title>Access Policy</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=Access_Policy&amp;diff=2083"/>
		<updated>2020-06-30T12:46:55Z</updated>

		<summary type="html">&lt;p&gt;Haars001: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Access policy is still a work in progress. In principle, all staff and students of the five main partners will have access to Anunna. Access needs to be granted actively (by creation of an account on the cluster; for non-WUR accounts this is done by FB-IT). Use of resources is limited by the scheduler: depending on the queues (&#039;partitions&#039;) granted to a user, priority to the system&#039;s resources is regulated. &lt;br /&gt;
&lt;br /&gt;
== Contact Persons ==&lt;br /&gt;
A request to access the cluster needs to be directed to one of the following persons (please refer to appropriate partner):&lt;br /&gt;
&lt;br /&gt;
=== WUR ===&lt;br /&gt;
==== ESG ====&lt;br /&gt;
* Ronald Hutjes&lt;br /&gt;
* Reinder Ronda&lt;br /&gt;
&lt;br /&gt;
==== Bioinformatics ====&lt;br /&gt;
* Dick de Ridder&lt;br /&gt;
&lt;br /&gt;
==== PRI ====&lt;br /&gt;
* Sara Diaz Trivino&lt;br /&gt;
&lt;br /&gt;
==== ABGC ====&lt;br /&gt;
===== Animal Breeding and Genetics =====&lt;br /&gt;
* [[User:Hulze001 |Alex Hulzebosch]]&lt;br /&gt;
* [[User:Megen002 | Hendrik-Jan Megens]]&lt;br /&gt;
===== Wageningen Livestock Research =====&lt;br /&gt;
* Mario Calus&lt;br /&gt;
* Ina Hulsegge&lt;br /&gt;
&lt;br /&gt;
=== Cobb-Vantress ===&lt;br /&gt;
* Wes Barris&lt;br /&gt;
* Jun Chen&lt;br /&gt;
=== CRV ===&lt;br /&gt;
* Frido Hamoen&lt;br /&gt;
* Chris Schrooten&lt;br /&gt;
=== Hendrix Genetics === &lt;br /&gt;
* Ton Dings&lt;br /&gt;
* Abe Huisman&lt;br /&gt;
* Addie Vereijken&lt;br /&gt;
=== Topigs ===&lt;br /&gt;
* [[User:dongen01 | Henk van Dongen]]&lt;br /&gt;
* Egiel Hanenbarg&lt;br /&gt;
* Naomi Duijvensteijn&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
[[Main_Page | Main page]]&lt;/div&gt;</summary>
		<author><name>Haars001</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=Log_in_to_Anunna&amp;diff=2082</id>
		<title>Log in to Anunna</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=Log_in_to_Anunna&amp;diff=2082"/>
		<updated>2020-06-08T09:21:37Z</updated>

		<summary type="html">&lt;p&gt;Haars001: /* Samba/CIFS based protocols */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Log on using ssh ==&lt;br /&gt;
One can log in to [[Anunna | Anunna]] using ssh (default port tcp 22). The address of the login server is:&lt;br /&gt;
  login.anunna.wur.nl&lt;br /&gt;
&lt;br /&gt;
You will be automatically redirected to the currently valid login server. To log on one has to use an ssh ([http://en.wikipedia.org/wiki/Secure_Shell secure shell]) client. Such client systems are always available from Linux or MacOS systems. For Windows an ssh-client may need to be installed. The most popular ssh-client for Windows is [http://www.chiark.greenend.org.uk/~sgtatham/putty/ PuTTY].&lt;br /&gt;
&lt;br /&gt;
Note that access may currently be restricted to certain IP ranges. Furthermore, ssh may be unusable on systems where port 22 is blocked by a firewall.&lt;br /&gt;
&lt;br /&gt;
The ssh-connection can also be configured to work [[ssh_without_password | without password]], which means that no password needs to be provided at each log-in or secure copy attempt.&lt;br /&gt;
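Passwordless log-in relies on an ssh key pair. On Linux/MacOSX a pair can be generated and installed like this (a sketch; the file name is an example, and you will be asked for a passphrase):&lt;br /&gt;

```shell
# Generate an ed25519 key pair on your local machine
ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519_anunna

# Append the public key to ~/.ssh/authorized_keys on the cluster
ssh-copy-id -i ~/.ssh/id_ed25519_anunna.pub [user name]@login.anunna.wur.nl
```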
&lt;br /&gt;
  &#039;&#039;&#039;IMPORTANT: the Login server can only act as access point and is not to be used for any serious CPU or RAM intensive work.&#039;&#039;&#039; &lt;br /&gt;
  &#039;&#039;&#039;Anything requiring even moderate resources should be [[Using_Slurm  |scheduled using SLURM!]]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== CLI from a Linux/MacOSX terminal ===&lt;br /&gt;
A Command Line Interface ([http://en.wikipedia.org/wiki/Command-line_interface CLI]) ssh client is available from any Linux or MacOSX terminal. Secure shell (ssh) protocols require port 22 to be open. Should a connection be refused, the firewall settings of the system should be checked. Alternatively, local ICT regulations may prohibit the use of port 22. Wageningen UR FB-ICT for instance does not allow traffic through port 22 over WiFi to certain systems.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
ssh [user name]@login.anunna.wur.nl&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== PuTTY on Windows ===&lt;br /&gt;
Putty is a free, powerful, and widely used SSH client that runs on Windows.&lt;br /&gt;
It is extremely useful for those people who have a computer running Windows&lt;br /&gt;
on their desk but must remotely connect to a computer running UNIX/Linux.&lt;br /&gt;
Putty is one of a set of utilities that all work together to provide&lt;br /&gt;
convenient connectivity between Windows and UNIX/Linux environments.&lt;br /&gt;
Some of these utilities include:&lt;br /&gt;
&lt;br /&gt;
* Putty -- the SSH client&lt;br /&gt;
* Pageant -- the authentication agent used with Putty&lt;br /&gt;
* Puttygen -- the RSA key generation utility&lt;br /&gt;
* Pscp -- the SCP secure file copy utility&lt;br /&gt;
&lt;br /&gt;
Depending on your tasks, the above utilities are probably your minimum&lt;br /&gt;
set of tools to make convenient connections and file transfers between a&lt;br /&gt;
computer running Windows and a computer running UNIX/Linux.&lt;br /&gt;
&lt;br /&gt;
==== Putty Configuration ====&lt;br /&gt;
&lt;br /&gt;
Putty is able to store the configuration or connection profiles for a&lt;br /&gt;
number of remote UNIX/Linux clients.  Each profile can be created&lt;br /&gt;
and later edited by right-clicking on a Putty window header and choosing&lt;br /&gt;
&amp;quot;New Session...&amp;quot;.  The minimum set of items that need to be configured for&lt;br /&gt;
a given connection are:&lt;br /&gt;
&lt;br /&gt;
* Session&lt;br /&gt;
** Host Name [login.anunna.wur.nl]&lt;br /&gt;
** Saved Session name [your name for this connection]&lt;br /&gt;
* Terminal&lt;br /&gt;
** Keyboard&lt;br /&gt;
*** Backspace key -&amp;gt; Control-H&lt;br /&gt;
* Connection&lt;br /&gt;
** Data&lt;br /&gt;
*** Auto-login username [your remote username]&lt;br /&gt;
** SSH&lt;br /&gt;
*** Auth&lt;br /&gt;
**** Private key file for authentication [pathname to your .ppk file]&lt;br /&gt;
&lt;br /&gt;
Obviously, there are many other useful things that can be configured and&lt;br /&gt;
customized in Putty but the above list should be considered a minimum.&lt;br /&gt;
Please note that after making any change to a putty session you must&lt;br /&gt;
explicitly save your changes.&lt;br /&gt;
&lt;br /&gt;
==== Creating an SSH Key Pair ====&lt;br /&gt;
&lt;br /&gt;
Puttygen is the utility used for creating both a .ppk file (private&lt;br /&gt;
key) and the public authorized key information.  Briefly, here are&lt;br /&gt;
the steps needed to create a key pair:&lt;br /&gt;
&lt;br /&gt;
* Run (double-click) the Puttygen application&lt;br /&gt;
* Click on &amp;quot;Generate&amp;quot;&lt;br /&gt;
* Replace the comment with something meaningful -- maybe your name&lt;br /&gt;
* Type in your passphrase (password) twice&lt;br /&gt;
* Save the .ppk file in a secure location on your Windows computer&lt;br /&gt;
* Use your mouse to copy the public key string then paste it into the ~/.ssh/authorized_keys file on the remote computer&lt;br /&gt;
&lt;br /&gt;
Note: The full pathname of this .ppk file is used in the last step of Putty&lt;br /&gt;
configuration as described above.&lt;br /&gt;
&lt;br /&gt;
==== Using Pageant as an Interface for Putty ====&lt;br /&gt;
&lt;br /&gt;
Pageant is a Putty helper program that is used for two main purposes:&lt;br /&gt;
&lt;br /&gt;
* Pageant is used to hold the passphrase to your key pair&lt;br /&gt;
* Pageant is used as a convenience application to run a Putty session from any of your saved profiles&lt;br /&gt;
&lt;br /&gt;
There is no configuration needed in Pageant.  You simply need to&lt;br /&gt;
run this program at login.  An easy way to do this is to create a&lt;br /&gt;
shortcut in your startup folder that points to the Pageant executable.&lt;br /&gt;
Once this has been done, every time you log in you will see a little&lt;br /&gt;
icon of a computer with a hat in your taskbar.  The first step in using&lt;br /&gt;
this is to right-click on it and select &amp;quot;Add Key&amp;quot;.  Navigate to your&lt;br /&gt;
.ppk file and select &amp;quot;Open&amp;quot;.  It will prompt you for your passphrase.&lt;br /&gt;
At this point your passphrase has been conveniently stored for you so&lt;br /&gt;
that when you use Putty to connect to your various remote computers,&lt;br /&gt;
you won&#039;t have to type in your passphrase over and over again.&lt;br /&gt;
The next step is to right-click on the Pageant icon again and select&lt;br /&gt;
one of your saved sessions.  If you have done everything correctly&lt;br /&gt;
you will be logged right in so that you no longer have to type your&lt;br /&gt;
passphrase.&lt;br /&gt;
&lt;br /&gt;
== Log on to worker nodes ==&lt;br /&gt;
&lt;br /&gt;
In a complete emergency it is possible to log on to any of the worker nodes via the login node. Logging on to the worker nodes does not require password authentication, so you should not be prompted for a password. Be aware that running tasks outside of SLURM is prohibited; so far there has not been any serious abuse of this. This access is provided only to give you a little more insight into what your job is doing.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
ssh [user name]@[node name]&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For instance:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
ssh dummy001@node049&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Again, it is not permitted to run jobs outside the scheduling software (slurm). So logging on to a worker node is for analyses of running jobs only.&lt;br /&gt;
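Once on a node, limit yourself to inspection, for example with standard Linux process tools:&lt;br /&gt;

```shell
# One-shot overview of your own processes and their CPU/memory use
top -b -n 1 -u "$USER" | head -20

# Or list your processes with elapsed and accumulated CPU time
ps -u "$USER" -o pid,etime,time,%cpu,%mem,cmd
```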
&lt;br /&gt;
== File transfer using ssh-based file transfer protocols ==&lt;br /&gt;
=== Copying files to/from the cluster: scp ===&lt;br /&gt;
&lt;br /&gt;
From any Posix-compliant system (Linux/MacOSX) terminal, files and folders can be transferred to and from the cluster using an ssh-based file copying protocol called scp ([http://en.wikipedia.org/wiki/Secure_copy secure copy]). For instance, copying a folder containing several files from scomp1090/lx6 can be achieved like this:&lt;br /&gt;
&lt;br /&gt;
Syntax of the scp command requires from-to order:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
scp &amp;lt;source&amp;gt; &amp;lt;destination&amp;gt;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
scp -pr /home/WUR/[username]/folder_to_transfer [username]@login.anunna.wur.nl:/lustre/scratch/WUR/ABGC/&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This example assumes a user that is part of the ABGC user group. See the [[Lustre_PFS_layout | Lustre Parallel File System layout]] page for further details. The -p flag will preserve the file metadata such as timestamps. The -r flag allows for recursive copying. Further options can be found in the [http://en.wikipedia.org/wiki/Man_page man pages].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
man scp&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== rsync ===&lt;br /&gt;
The [http://en.wikipedia.org/wiki/Rsync rsync protocol], like the scp protocol, allows CLI-based copying of files. The rsync protocol, however, will only transfer those files that have changed between the systems, i.e. it synchronises the files, hence the name. This makes rsync very well suited for regular backups and file syncs between file systems. Like the scp command, the syntax is in from-to order.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
rsync &amp;lt;source&amp;gt; &amp;lt;destination&amp;gt;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
e.g.:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
rsync -av /home/WUR/[username]/folder_to_transfer [username]@login.anunna.wur.nl:/lustre/scratch/WUR/ABGC/&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
The -a flag preserves file metadata and enables recursive copying, amongst other things. The -v flag provides verbose output. Further options can be found in the [http://en.wikipedia.org/wiki/Man_page man pages].&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
man rsync&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== WinSCP ===&lt;br /&gt;
[http://en.wikipedia.org/wiki/WinSCP WinSCP] is a free and open source (S)FTP client for Microsoft Windows. By providing the hostname (login.anunna.wur.nl), your username, and your password, using the SFTP protocol on port 22, you can log in. After logging in, files can be transferred between a local system (PC) and the cluster.&lt;br /&gt;
&lt;br /&gt;
=== FileZilla ===&lt;br /&gt;
[http://en.wikipedia.org/wiki/Filezilla FileZilla] is a free and open source graphical (S)FTP client, available for Linux, MacOSX, and Windows. By providing the address, username, password, and server type (Unix; see Site Manager -&amp;gt; Advanced), files can be transferred between a local system and the cluster. Furthermore, the graphical interface allows for easy browsing of files on Anunna. Detailed instructions can be found on the [https://wiki.filezilla-project.org/Using FileZilla Wiki].&lt;br /&gt;
&lt;br /&gt;
=== Samba/CIFS based protocols ===&lt;br /&gt;
The Common Internet File System ([http://en.wikipedia.org/wiki/Cifs CIFS]) is commonly used in and between Windows systems for file sharing. It is only available to clients within WURnet.&lt;br /&gt;
&lt;br /&gt;
There are two available mount points:&lt;br /&gt;
# your home folder ( &#039;&#039;&#039;\\cifs.anunna.wur.nl\[username]&#039;&#039;&#039; ) &lt;br /&gt;
# the Lustre mount ( &#039;&#039;&#039;\\cifs.anunna.wur.nl\lustre&#039;&#039;&#039; )&lt;br /&gt;
&lt;br /&gt;
You can enter these in the location bar of File Explorer.&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
* [[Using_Slurm | Submit jobs with Slurm]]&lt;br /&gt;
* [[ssh_without_password | ssh without password]]&lt;br /&gt;
&lt;br /&gt;
== External links ==&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Secure_Shell secure shell on Wikipedia]&lt;br /&gt;
* [http://www.chiark.greenend.org.uk/~sgtatham/putty/ PuTTY homepage]&lt;br /&gt;
* [http://winscp.net/eng/index.php WinSCP homepage]&lt;br /&gt;
* [https://filezilla-project.org FileZilla homepage]&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Cifs The Common Internet File System (CIFS) on Wikipedia]&lt;/div&gt;</summary>
		<author><name>Haars001</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=Log_in_to_Anunna&amp;diff=2081</id>
		<title>Log in to Anunna</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=Log_in_to_Anunna&amp;diff=2081"/>
		<updated>2020-06-08T09:19:08Z</updated>

		<summary type="html">&lt;p&gt;Haars001: /* Samba/CIFS based protocols */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Log on using ssh ==&lt;br /&gt;
One can log in to [[Anunna | Anunna]] using ssh (default port tcp 22). The address of the login server is:&lt;br /&gt;
  login.anunna.wur.nl&lt;br /&gt;
&lt;br /&gt;
You will be automatically redirected to the currently valid login server. To log on, one has to use an ssh ([http://en.wikipedia.org/wiki/Secure_Shell secure shell]) client. Such a client is available by default on Linux and MacOS systems. For Windows, an ssh client may need to be installed; the most popular ssh client for Windows is [http://www.chiark.greenend.org.uk/~sgtatham/putty/ PuTTY].&lt;br /&gt;
&lt;br /&gt;
Note that access may be restricted to certain IP ranges. Furthermore, ssh connections may be impossible from systems where port 22 is blocked by a firewall.&lt;br /&gt;
&lt;br /&gt;
The ssh-connection can also be configured to work [[ssh_without_password | without password]], which means that no password needs to be provided at each log-in or secure copy attempt.&lt;br /&gt;
&lt;br /&gt;
  &#039;&#039;&#039;IMPORTANT: the Login server can only act as access point and is not to be used for any serious CPU or RAM intensive work.&#039;&#039;&#039; &lt;br /&gt;
  &#039;&#039;&#039;Anything requiring even moderate resources should be [[Using_Slurm  |scheduled using SLURM!]]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== CLI from a Linux/MacOSX terminal ===&lt;br /&gt;
A Command Line Interface ([http://en.wikipedia.org/wiki/Command-line_interface CLI]) ssh client is available from any Linux or MacOSX terminal. Secure shell (ssh) protocols require port 22 to be open. Should a connection be refused, the firewall settings of the system should be checked. Alternatively, local ICT regulations may prohibit the use of port 22. Wageningen UR FB-ICT for instance does not allow traffic through port 22 over WiFi to certain systems.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
ssh [user name]@login.anunna.wur.nl&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== PuTTY on Windows ===&lt;br /&gt;
Putty is a free, powerful, and widely used SSH client that runs on Windows.&lt;br /&gt;
It is extremely useful for those people who have a computer running Windows&lt;br /&gt;
on their desk but must remotely connect to a computer running UNIX/Linux.&lt;br /&gt;
Putty is one of a set of utilities that all work together to provide&lt;br /&gt;
convenient connectivity between Windows and UNIX/Linux environments.&lt;br /&gt;
Some of these utilities include:&lt;br /&gt;
&lt;br /&gt;
* Putty -- the SSH client&lt;br /&gt;
* Pageant -- the authentication agent used with Putty&lt;br /&gt;
* Puttygen -- the RSA key generation utility&lt;br /&gt;
* Pscp -- the SCP secure file copy utility&lt;br /&gt;
&lt;br /&gt;
Depending on your tasks, the above utilities are probably your minimum&lt;br /&gt;
set of tools to make convenient connections and file transfers between a&lt;br /&gt;
computer running Windows and a computer running UNIX/Linux.&lt;br /&gt;
&lt;br /&gt;
==== Putty Configuration ====&lt;br /&gt;
&lt;br /&gt;
Putty is able to store the configuration or connection profiles for a&lt;br /&gt;
number of remote UNIX/Linux hosts.  Each profile can be created&lt;br /&gt;
and later edited by right-clicking on a Putty window header and choosing&lt;br /&gt;
&amp;quot;New Session...&amp;quot;.  The minimum set of items that need to be configured for&lt;br /&gt;
a given connection is:&lt;br /&gt;
&lt;br /&gt;
* Session&lt;br /&gt;
** Host Name [login.anunna.wur.nl]&lt;br /&gt;
** Saved Session name [your name for this connection]&lt;br /&gt;
* Terminal&lt;br /&gt;
** Keyboard&lt;br /&gt;
*** Backspace key -&amp;gt; Control-H&lt;br /&gt;
* Connection&lt;br /&gt;
** Data&lt;br /&gt;
*** Auto-login username [your remote username]&lt;br /&gt;
** SSH&lt;br /&gt;
*** Auth&lt;br /&gt;
**** Private key file for authentication [pathname to your .ppk file]&lt;br /&gt;
&lt;br /&gt;
Obviously, there are many other useful things that can be configured and&lt;br /&gt;
customized in Putty but the above list should be considered a minimum.&lt;br /&gt;
Please note that after making any change to a putty session you must&lt;br /&gt;
explicitly save your changes.&lt;br /&gt;
&lt;br /&gt;
==== Creating an SSH Key Pair ====&lt;br /&gt;
&lt;br /&gt;
Puttygen is the utility used for creating both a .ppk file (private&lt;br /&gt;
key) and the public authorized key information.  Briefly, here are&lt;br /&gt;
the steps needed to create a key pair:&lt;br /&gt;
&lt;br /&gt;
* Run (double-click) the Puttygen application&lt;br /&gt;
* Click on &amp;quot;Generate&amp;quot;&lt;br /&gt;
* Replace the comment with something meaningful -- maybe your name&lt;br /&gt;
* Type in your passphrase (password) twice&lt;br /&gt;
* Save the .ppk file in a secure location on your Windows computer&lt;br /&gt;
* Use your mouse to copy the public key string then paste it into the ~/.ssh/authorized_keys file on the remote computer&lt;br /&gt;
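&lt;br /&gt;
The same result can also be achieved entirely from a terminal, using OpenSSH tooling instead of Puttygen. A minimal sketch, assuming ssh-keygen is available and using a hypothetical key name (demo_key):&lt;br /&gt;

```shell
# Sketch: create ~/.ssh with safe permissions, generate a key pair
# (demo_key is a hypothetical name), and register the public half.
rm -f "$HOME/.ssh/demo_key" "$HOME/.ssh/demo_key.pub"
mkdir -p "$HOME/.ssh"
chmod 700 "$HOME/.ssh"
# -N '' means an empty passphrase; use a real passphrase in practice
ssh-keygen -q -t rsa -b 2048 -N '' -f "$HOME/.ssh/demo_key"
# append the public half to authorized_keys and restrict its permissions
cat "$HOME/.ssh/demo_key.pub" >> "$HOME/.ssh/authorized_keys"
chmod 600 "$HOME/.ssh/authorized_keys"
```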
&lt;br /&gt;
Note: The full pathname of this .ppk file is used in the last step of Putty&lt;br /&gt;
configuration as described above.&lt;br /&gt;
&lt;br /&gt;
==== Using Pageant as an Interface for Putty ====&lt;br /&gt;
&lt;br /&gt;
Pageant is a Putty helper program that is used for two main purposes:&lt;br /&gt;
&lt;br /&gt;
* Pageant is used to hold the passphrase to your key pair&lt;br /&gt;
* Pageant is used as a convenience application to run a Putty session from any of your saved profiles&lt;br /&gt;
&lt;br /&gt;
There is no configuration needed in Pageant.  You simply need to&lt;br /&gt;
run this program at login.  An easy way to do this is to create a&lt;br /&gt;
shortcut in your startup folder that points to the Pageant executable.&lt;br /&gt;
Once this has been done, every time you log in you will see a little&lt;br /&gt;
icon of a computer with a hat in your taskbar.  The first step in using&lt;br /&gt;
this is to right-click on it and select &amp;quot;Add Key&amp;quot;.  Navigate to your&lt;br /&gt;
.ppk file and select &amp;quot;Open&amp;quot;.  It will prompt you for your passphrase.&lt;br /&gt;
At this point your passphrase has been conveniently stored for you so&lt;br /&gt;
that when you use Putty to connect to your various remote computers,&lt;br /&gt;
you won&#039;t have to type in your passphrase over and over again.&lt;br /&gt;
The next step is to right-click on the Pageant icon again and select&lt;br /&gt;
one of your saved sessions.  If you have done everything correctly&lt;br /&gt;
you will be logged right in so that you no longer have to type your&lt;br /&gt;
passphrase.&lt;br /&gt;
&lt;br /&gt;
== Log on to worker nodes ==&lt;br /&gt;
&lt;br /&gt;
In an emergency it is possible to log on to any of the worker nodes via the login node. Logging on to the worker nodes does not require password authentication, so you should not be prompted for a password. This is not normally allowed: running tasks outside of Slurm is prohibited, although so far there has not been any serious abuse. This access is provided only to give you a little more insight into what your job is doing.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
ssh [user name]@[node name]&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For instance:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
ssh dummy001@node049&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Again, it is not permitted to run jobs outside the scheduling software (Slurm). Logging on to a worker node is for analysis of running jobs only.&lt;br /&gt;
&lt;br /&gt;
== File transfer using ssh-based file transfer protocols ==&lt;br /&gt;
=== Copying files to/from the cluster: scp ===&lt;br /&gt;
&lt;br /&gt;
From the terminal of any POSIX-compliant system (Linux/MacOSX), files and folders can be transferred to and from the cluster using an ssh-based file copying protocol called scp ([http://en.wikipedia.org/wiki/Secure_copy secure copy]). For instance, copying a folder containing several files from scomp1090/lx6 can be achieved like this:&lt;br /&gt;
&lt;br /&gt;
The syntax of the scp command is in from-to order:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
scp &amp;lt;source&amp;gt; &amp;lt;destination&amp;gt;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
scp -pr /home/WUR/[username]/folder_to_transfer [username]@login.anunna.wur.nl:/lustre/scratch/WUR/ABGC/&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This example assumes a user who is part of the ABGC user group; see the [[Lustre_PFS_layout | Lustre Parallel File System layout]] page for further details. The -p flag preserves file metadata such as timestamps. The -r flag enables recursive copying. Further options can be found in the [http://en.wikipedia.org/wiki/Man_page man pages].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
man scp&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
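The same from-to syntax works in the other direction to fetch results from the cluster. Note that scp also accepts two local paths, which is a convenient way to try out the flags without a remote host; a sketch using hypothetical /tmp paths:&lt;br /&gt;

```shell
# Hypothetical local paths; with a [username]@host: prefix on either side,
# the same command copies to or from the cluster.
rm -rf /tmp/scp_demo
mkdir -p /tmp/scp_demo/folder_to_transfer
echo "results" > /tmp/scp_demo/folder_to_transfer/results.txt
# -p preserves timestamps, -r copies recursively
scp -pr /tmp/scp_demo/folder_to_transfer /tmp/scp_demo/copy
```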
&lt;br /&gt;
=== rsync ===&lt;br /&gt;
The [http://en.wikipedia.org/wiki/Rsync rsync protocol], like the scp protocol, allows CLI-based copying of files. Rsync, however, only transfers the files that have changed between the two systems, i.e. it synchronises them, hence the name. This makes rsync very well suited for regular backups and file syncs between file systems. Like the scp command, the syntax is in from-to order.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
rsync &amp;lt;source&amp;gt; &amp;lt;destination&amp;gt;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
e.g.:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
rsync -av /home/WUR/[username]/folder_to_transfer [username]@login.anunna.wur.nl:/lustre/scratch/WUR/ABGC/&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
The -a flag preserves file metadata and enables recursive copying, amongst other things. The -v flag provides verbose output. Further options can be found in the [http://en.wikipedia.org/wiki/Man_page man pages].&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
man rsync&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== WinSCP ===&lt;br /&gt;
[http://en.wikipedia.org/wiki/WinSCP WinSCP] is a free and open source (S)FTP client for Microsoft Windows. By providing the hostname (login.anunna.wur.nl), your username, and your password, using the SFTP protocol on port 22, you can log in. After logging in, files can be transferred between a local system (PC) and the cluster.&lt;br /&gt;
&lt;br /&gt;
=== FileZilla ===&lt;br /&gt;
[http://en.wikipedia.org/wiki/Filezilla FileZilla] is a free and open source graphical (S)FTP client, available for Linux, MacOSX, and Windows. By providing the address, username, password, and server type (Unix; see Site Manager -&amp;gt; Advanced), files can be transferred between a local system and the cluster. Furthermore, the graphical interface allows for easy browsing of files on Anunna. Detailed instructions can be found on the [https://wiki.filezilla-project.org/Using FileZilla Wiki].&lt;br /&gt;
&lt;br /&gt;
=== Samba/CIFS based protocols ===&lt;br /&gt;
The Common Internet File System ([http://en.wikipedia.org/wiki/Cifs CIFS]) is commonly used in and between Windows systems for file sharing. It is only available to clients within WURnet.&lt;br /&gt;
&lt;br /&gt;
There are two available mount points:&lt;br /&gt;
# your home folder ( &#039;&#039;&#039;\\cifs.anunna.wur.nl\[username]&#039;&#039;&#039; ) &lt;br /&gt;
# the Lustre mount ( &#039;&#039;&#039;\\cifs.anunna.wur.nl\lustre&#039;&#039;&#039; )&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
* [[Using_Slurm | Submit jobs with Slurm]]&lt;br /&gt;
* [[ssh_without_password | ssh without password]]&lt;br /&gt;
&lt;br /&gt;
== External links ==&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Secure_Shell secure shell on Wikipedia]&lt;br /&gt;
* [http://www.chiark.greenend.org.uk/~sgtatham/putty/ PuTTY homepage]&lt;br /&gt;
* [http://winscp.net/eng/index.php WinSCP homepage]&lt;br /&gt;
* [https://filezilla-project.org FileZilla homepage]&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Cifs The Common Internet File System (CIFS) on Wikipedia]&lt;/div&gt;</summary>
		<author><name>Haars001</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=Node_usage_graph&amp;diff=1754</id>
		<title>Node usage graph</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=Node_usage_graph&amp;diff=1754"/>
		<updated>2017-02-09T13:37:04Z</updated>

		<summary type="html">&lt;p&gt;Haars001: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;There is a graphing tool, node_usage_graph (located at /cm/shared/apps/accounting/node_usage_graph), that uses data directly from sacct to display information about the current cluster usage.&lt;br /&gt;
&lt;br /&gt;
Example:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;text&amp;quot;&amp;gt;&lt;br /&gt;
[user@nfs01 ~]# /cm/shared/apps/accounting/node_usage_graph &lt;br /&gt;
node:   |0%                                                                                                     100%|&lt;br /&gt;
fat001: ############################################                                                                 &lt;br /&gt;
fat002: ######################################################                                                       &lt;br /&gt;
node001:#############################################################################################################&lt;br /&gt;
node002:#############################################################################################################&lt;br /&gt;
node003:########################################################################################                     &lt;br /&gt;
node004:#############################################################################################################&lt;br /&gt;
node005:#############################################################################################################&lt;br /&gt;
node006:#############################################################                                                &lt;br /&gt;
node007:######################################################################################################       &lt;br /&gt;
node008:###########################                                                                                  &lt;br /&gt;
node009:###########################                                                                                  &lt;br /&gt;
node010:########################################                                                                     &lt;br /&gt;
node011:##########################################################################                                   &lt;br /&gt;
node012:######################################################################################################       &lt;br /&gt;
node013:########################################################################################                     &lt;br /&gt;
node014:######                                                                                                       &lt;br /&gt;
node015:#############################################################################################################&lt;br /&gt;
node016:###############################################################################################              &lt;br /&gt;
node017:########################################                                                                     &lt;br /&gt;
node018:#############################################################################################################&lt;br /&gt;
node019:###########################                                                                                  &lt;br /&gt;
node020:###############################################################################################              &lt;br /&gt;
node021:###########################                                                                                  &lt;br /&gt;
node022:###############################################                                                              &lt;br /&gt;
node023:########################################################################################                     &lt;br /&gt;
node024:########################################                                                                     &lt;br /&gt;
node025:###############################################                                                              &lt;br /&gt;
node026:####################################################################                                         &lt;br /&gt;
node027:######################################################################################################       &lt;br /&gt;
node028:########################################################################################                     &lt;br /&gt;
node029:#############################################################################################################&lt;br /&gt;
node030:###########################                                                                                  &lt;br /&gt;
node031:######################################################                                                       &lt;br /&gt;
node032:#############                                                                                                &lt;br /&gt;
node033:#############################################################################################################&lt;br /&gt;
node034:#############                                                                                                &lt;br /&gt;
node035:###########################                                                                                  &lt;br /&gt;
node036:                                                                                                             &lt;br /&gt;
node037:                                                                                                             &lt;br /&gt;
node038:###########################                                                                                  &lt;br /&gt;
node039:###########################                                                                                  &lt;br /&gt;
node040:######################################################################################################       &lt;br /&gt;
node041:###########################                                                                                  &lt;br /&gt;
node042:###########################                                                                                  &lt;br /&gt;
node049:#############################################################                                                &lt;br /&gt;
node050:######################################################                                                       &lt;br /&gt;
node051:######################################################################################################       &lt;br /&gt;
node052:#############################################################################################################&lt;br /&gt;
node053:###############################################                                                              &lt;br /&gt;
node054:#############################################################################################################&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This gives an overview of the current per-node resource usage. It cannot, however, indicate how long the queue for any given node currently is.&lt;/div&gt;</summary>
		<author><name>Haars001</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=User:Haars001&amp;diff=893</id>
		<title>User:Haars001</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=User:Haars001&amp;diff=893"/>
		<updated>2013-12-19T19:49:33Z</updated>

		<summary type="html">&lt;p&gt;Haars001: /* Assistant Research at Wageningen UR, Plant Research International, BU Bioscience, Applied Bioinformatics */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Jan van Haarst ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:Haarst,_Jan_van_-_S_-_Portrait_-_1120x1654px_-_GA--20131010-ND7_2171.jpg|200px|right]]&lt;br /&gt;
=== Assistant Research at Wageningen UR, Plant Research International, BU Bioscience, Applied Bioinformatics ===&lt;br /&gt;
* Profile on [https://www.vcard.wur.nl/Views/Profile/View.aspx?id=3014 We@WUR]&lt;br /&gt;
* Profile on [https://www.linkedin.com/in/jvhaarst LinkedIn]&lt;br /&gt;
* Profile on [http://scholar.google.com/citations?user=eoB6EPcAAAAJ Google Scholar]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
My main line of work revolves around de novo assembly projects. Mainly plants, but every now and then something else comes along.&lt;br /&gt;
Besides that, I need to tinker on our own infrastructure, and help colleagues from our BU Bioscience  with their bioinformatics analyses.&lt;br /&gt;
These range from degradome analysis up to BiSulfite Sequencing analysis, and all those short things that pop up every week.&lt;/div&gt;</summary>
		<author><name>Haars001</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=File:Haarst,_Jan_van_-_S_-_Portrait_-_1120x1654px_-_GA--20131010-ND7_2171.jpg&amp;diff=892</id>
		<title>File:Haarst, Jan van - S - Portrait - 1120x1654px - GA--20131010-ND7 2171.jpg</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=File:Haarst,_Jan_van_-_S_-_Portrait_-_1120x1654px_-_GA--20131010-ND7_2171.jpg&amp;diff=892"/>
		<updated>2013-12-19T19:45:53Z</updated>

		<summary type="html">&lt;p&gt;Haars001: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Haars001</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=User:Haars001&amp;diff=891</id>
		<title>User:Haars001</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=User:Haars001&amp;diff=891"/>
		<updated>2013-12-19T19:45:01Z</updated>

		<summary type="html">&lt;p&gt;Haars001: /* Jan van Haarst */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Jan van Haarst ==&lt;br /&gt;
&lt;br /&gt;
=== Assistant Research at Wageningen UR, Plant Research International, BU Bioscience, Applied Bioinformatics ===&lt;br /&gt;
&lt;br /&gt;
* Profile on [https://www.vcard.wur.nl/Views/Profile/View.aspx?id=3014 We@WUR]&lt;br /&gt;
* Profile on [https://www.linkedin.com/in/jvhaarst LinkedIn]&lt;br /&gt;
* Profile on [http://scholar.google.com/citations?user=eoB6EPcAAAAJ Google Scholar]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
My main line of work revolves around de novo assembly projects. Mainly plants, but every now and then something else comes along.&lt;br /&gt;
Besides that, I need to tinker on our own infrastructure, and help colleagues from our BU Bioscience  with their bioinformatics analyses.&lt;br /&gt;
These range from degradome analysis up to BiSulfite Sequencing analysis, and all those short things that pop up every week.&lt;/div&gt;</summary>
		<author><name>Haars001</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=User:Haars001&amp;diff=890</id>
		<title>User:Haars001</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=User:Haars001&amp;diff=890"/>
		<updated>2013-12-19T19:44:16Z</updated>

		<summary type="html">&lt;p&gt;Haars001: Created page with &amp;quot;== Jan van Haarst ==  * Profile on [https://www.vcard.wur.nl/Views/Profile/View.aspx?id=3014 We@WUR] * Profile on [https://www.linkedin.com/in/jvhaarst LinkedIn] * Profile on ...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Jan van Haarst ==&lt;br /&gt;
&lt;br /&gt;
* Profile on [https://www.vcard.wur.nl/Views/Profile/View.aspx?id=3014 We@WUR]&lt;br /&gt;
* Profile on [https://www.linkedin.com/in/jvhaarst LinkedIn]&lt;br /&gt;
* Profile on [http://scholar.google.com/citations?user=eoB6EPcAAAAJ Google Scholar]&lt;br /&gt;
&lt;br /&gt;
Senior Researcher and lecturer at Wageningen University, Animal Breeding and Genomics Centre. &lt;br /&gt;
Assistant Research at Wageningen UR, Plant Research International, BU Bioscience, Applied Bioinformatics&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
My main line of work revolves around de novo assembly projects. Mainly plants, but every now and then something else comes along.&lt;br /&gt;
Besides that, I need to tinker on our own infrastructure, and help colleagues from our BU Bioscience  with their bioinformatics analyses.&lt;br /&gt;
These range from degradome analysis up to BiSulfite Sequencing analysis, and all those short things that pop up every week.&lt;/div&gt;</summary>
		<author><name>Haars001</name></author>
	</entry>
</feed>