<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://wiki.anunna.wur.nl/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Hulze001</id>
	<title>HPCwiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://wiki.anunna.wur.nl/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Hulze001"/>
	<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php/Special:Contributions/Hulze001"/>
	<updated>2026-04-18T04:35:40Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.43.1</generator>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=ABGC_modules&amp;diff=1364</id>
		<title>ABGC modules</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=ABGC_modules&amp;diff=1364"/>
		<updated>2015-01-23T15:20:12Z</updated>

		<summary type="html">&lt;p&gt;Hulze001: /* Modules available for ABGC */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;All ABGC modules can be found in:&lt;br /&gt;
&lt;br /&gt;
  /cm/shared/apps/WUR/ABGC/modulefiles&lt;br /&gt;
&lt;br /&gt;
== Modules available for ABGC ==&lt;br /&gt;
&lt;br /&gt;
* [[asreml_3.0 | asreml/3.0fl-64]]&lt;br /&gt;
* [[asreml_4.0 | asreml/4.0kr]]&lt;br /&gt;
* [[asreml_4.1 | asreml/4.1.0]]&lt;br /&gt;
* [[Perl5.10_WUR_module | Perl/5.10.1_wur]]&lt;br /&gt;
* [[R3.0.2_WUR_module | R/3.0.2_wur]]&lt;br /&gt;
&lt;br /&gt;
== Adding a custom module directory to your environment ==&lt;br /&gt;
To allow the &amp;lt;code&amp;gt;module&amp;lt;/code&amp;gt; program to find the custom module directory, the location of that directory has to be added to the &amp;lt;code&amp;gt;MODULEPATH&amp;lt;/code&amp;gt; variable.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
export MODULEPATH=$MODULEPATH:/cm/shared/apps/WUR/ABGC/modulefiles&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
This can be made permanent by adding this line to the &amp;lt;code&amp;gt;.bash_profile&amp;lt;/code&amp;gt; file in the root of your home folder. To apply the modified &amp;lt;code&amp;gt;MODULEPATH&amp;lt;/code&amp;gt; variable in an already open terminal, source &amp;lt;code&amp;gt;.bash_profile&amp;lt;/code&amp;gt; again:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
source .bash_profile&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
This is only necessary for terminals that are already open. The next time you log in, &amp;lt;code&amp;gt;.bash_profile&amp;lt;/code&amp;gt; will be loaded automatically.&lt;br /&gt;
&lt;br /&gt;
You can check whether the modules are found:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
module avail&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
This should give output that includes something similar to this:&lt;br /&gt;
&lt;br /&gt;
  ----------------------------------- /cm/shared/apps/WUR/ABGC/modulefiles -----------------------------------&lt;br /&gt;
  bwa/0.5.9  bwa/0.7.5a&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
* [[Main_Page#Using_the_B4F_Cluster | Using the B4F Cluster]]&lt;br /&gt;
* [[Globally_installed_software | Globally installed software]]&lt;br /&gt;
* [[Domain_specific_software_on_B4Fcluster_installation_by_users | Installing domain specific software: installation by users]]&lt;br /&gt;
* [[Setting local variables]]&lt;br /&gt;
* [[Installing_R_packages_locally | Installing R packages locally]]&lt;br /&gt;
* [[Setting_up_Python_virtualenv | Setting up and using a virtual environment for Python3 ]]&lt;br /&gt;
&lt;br /&gt;
== External links ==&lt;/div&gt;</summary>
		<author><name>Hulze001</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=Using_Slurm&amp;diff=1352</id>
		<title>Using Slurm</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=Using_Slurm&amp;diff=1352"/>
		<updated>2014-09-19T09:37:27Z</updated>

		<summary type="html">&lt;p&gt;Hulze001: /* Queues */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The resource allocation / scheduling software on the B4F Cluster is [http://en.wikipedia.org/wiki/Simple_Linux_Utility_for_Resource_Management SLURM]: &#039;&#039;&#039;S&#039;&#039;&#039;imple &#039;&#039;&#039;L&#039;&#039;&#039;inux &#039;&#039;&#039;U&#039;&#039;&#039;tility for &#039;&#039;&#039;R&#039;&#039;&#039;esource &#039;&#039;&#039;M&#039;&#039;&#039;anagement.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Queues and defaults ==&lt;br /&gt;
&lt;br /&gt;
=== Queues ===&lt;br /&gt;
Every organization has three queues (called partitions in Slurm): a high, a standard and a low priority queue.&amp;lt;br&amp;gt;&lt;br /&gt;
The High queue gives jobs the highest priority (20), followed by the standard queue (10) and the Low queue (0).&amp;lt;br&amp;gt;&lt;br /&gt;
Jobs in the Low queue will be resubmitted if a job with higher priority needs cluster resources that are currently occupied by Low queue jobs.&lt;br /&gt;
To find out which queues your account has been authorized for, type sinfo:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
PARTITION       AVAIL  TIMELIMIT  NODES  STATE NODELIST&lt;br /&gt;
ABGC_High      up   infinite     12  down* node[043-048,055-060]&lt;br /&gt;
ABGC_High      up   infinite      6    mix fat[001-002],node[002-005]&lt;br /&gt;
ABGC_High      up   infinite     44   idle node[001,006-042,049-054]&lt;br /&gt;
ABGC_Std       up   infinite     12  down* node[043-048,055-060]&lt;br /&gt;
ABGC_Std       up   infinite      6    mix fat[001-002],node[002-005]&lt;br /&gt;
ABGC_Std       up   infinite     44   idle node[001,006-042,049-054]&lt;br /&gt;
ABGC_Low       up   infinite     12  down* node[043-048,055-060]&lt;br /&gt;
ABGC_Low       up   infinite      6    mix fat[001-002],node[002-005]&lt;br /&gt;
ABGC_Low       up   infinite     44   idle node[001,006-042,049-054]&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Defaults ===&lt;br /&gt;
There is no default queue, so you need to specify which queue to use when submitting a job.&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;The default run time for a job is 1 hour!&#039;&#039;&#039; &amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Default memory limit is 100MB per node!&#039;&#039;&#039;&lt;br /&gt;
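Both defaults can be raised per job with &amp;lt;code&amp;gt;#SBATCH&amp;lt;/code&amp;gt; directives; a minimal sketch (the values are examples, not recommendations):&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
# request 2 hours instead of the default 1 hour&lt;br /&gt;
#SBATCH --time=02:00:00&lt;br /&gt;
# request 4096 MB per node instead of the default 100 MB&lt;br /&gt;
#SBATCH --mem=4096&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;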
&lt;br /&gt;
== Submitting jobs: sbatch ==&lt;br /&gt;
&lt;br /&gt;
=== Example ===&lt;br /&gt;
Consider this simple Python 3 script that calculates Pi to 1 million digits:&lt;br /&gt;
&amp;lt;source lang=&#039;python&#039;&amp;gt;&lt;br /&gt;
# Bailey-Borwein-Plouffe series; each term contributes about 1.2 decimal digits&lt;br /&gt;
from decimal import Decimal, getcontext&lt;br /&gt;
D = Decimal&lt;br /&gt;
getcontext().prec = 1000010&lt;br /&gt;
p = sum(D(1)/16**k*(D(4)/(8*k+1)-D(2)/(8*k+4)-D(1)/(8*k+5)-D(1)/(8*k+6)) for k in range(831000))&lt;br /&gt;
print(str(p)[:1000002])  # prints the leading 3. plus one million digits&lt;br /&gt;
&amp;lt;/source&amp;gt; &lt;br /&gt;
&lt;br /&gt;
=== Loading modules ===&lt;br /&gt;
In order for this script to run, Python 3, which is not the default Python version on the cluster, first needs to be loaded into your environment. The availability of (different versions of) software can be checked with the following command:&lt;br /&gt;
  module avail&lt;br /&gt;
&lt;br /&gt;
The list should show that python3 is indeed available; it can then be loaded with the following command:&lt;br /&gt;
  module load python/3.3.3&lt;br /&gt;
&lt;br /&gt;
=== Batch script ===&lt;br /&gt;
The following shell/slurm script can then be used to schedule the job using the sbatch command:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --account=773320000&lt;br /&gt;
#SBATCH --time=1200&lt;br /&gt;
#SBATCH --mem=2048&lt;br /&gt;
#SBATCH --ntasks=1&lt;br /&gt;
#SBATCH --output=output_%j.txt&lt;br /&gt;
#SBATCH --error=error_output_%j.txt&lt;br /&gt;
#SBATCH --job-name=calc_pi.py&lt;br /&gt;
#SBATCH --partition=ABGC_Std&lt;br /&gt;
#SBATCH --mail-type=ALL&lt;br /&gt;
#SBATCH --mail-user=email@org.nl&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
time python3 calc_pi.py&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Explanation of used SBATCH parameters:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --account=773320000&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Charge the resources used by this job to the specified account. The account is an arbitrary string, and may be changed after job submission using the scontrol command. For WUR users a project number or KTP number is advisable.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --time=1200&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Set a limit on the total run time of the job allocation. A time limit of zero requests that no time limit be imposed. Acceptable time formats include &amp;quot;minutes&amp;quot;, &amp;quot;minutes:seconds&amp;quot;, &amp;quot;hours:minutes:seconds&amp;quot;, &amp;quot;days-hours&amp;quot;, &amp;quot;days-hours:minutes&amp;quot; and &amp;quot;days-hours:minutes:seconds&amp;quot;. So in this example the job will run for a maximum of 1200 minutes (20 hours).&lt;br /&gt;
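For illustration, the same limit as in this example can be written in several of the accepted formats:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
# all three directives request the same 20-hour (1200-minute) limit&lt;br /&gt;
#SBATCH --time=1200&lt;br /&gt;
#SBATCH --time=20:00:00&lt;br /&gt;
#SBATCH --time=0-20&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;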
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --mem=2048&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
SLURM imposes a memory limit on each job. By default, it is deliberately small: 100 MB per node. If your job uses more than that, you&#039;ll get an error stating that your job &amp;quot;Exceeded job memory limit&amp;quot;. To set a larger limit, add to your job submission:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --mem=X&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where X is the maximum amount of memory your job will use per node, in MB. The larger your working data set, the larger this needs to be, but the smaller the number the easier it is for the scheduler to find a place to run your job. To determine an appropriate value, start relatively large (job slots on average have about 4000 MB per core, but that’s much larger than needed for most jobs) and then use sacct to look at how much your job is actually using or used:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
$ sacct -o MaxRSS -j JOBID&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
where JOBID is the one you&#039;re interested in. The number is in KB, so divide by 1024 to get a rough idea of what to use with --mem (set it to something a little larger than that, since you&#039;re defining a hard upper limit). If your job completed long in the past you may have to tell sacct to look further back in time by adding a start time with -S YYYY-MM-DD. Note that for parallel jobs spanning multiple nodes, this is the maximum memory used on any one node; if you&#039;re not setting an even distribution of tasks per node (e.g. with --ntasks-per-node), the same job could have very different values when run at different times.&lt;br /&gt;
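The KB-to-MB conversion is plain integer arithmetic; a small sketch in the shell (the MaxRSS value is made up for illustration):&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
# suppose sacct reported a MaxRSS of 3276800 (KB)&lt;br /&gt;
maxrss_kb=3276800&lt;br /&gt;
# divide by 1024 to get MB; set --mem somewhat above this value&lt;br /&gt;
echo $((maxrss_kb / 1024))   # prints 3200&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;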
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --ntasks=1&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
sbatch does not launch tasks; it requests an allocation of resources and submits a batch script. This option advises the SLURM controller that the job steps run within the allocation will launch a maximum of this number of tasks, and provides sufficient resources accordingly. The default is one task per node, but note that the --cpus-per-task option will change this default.&lt;br /&gt;
&lt;br /&gt;
When requesting multiple tasks, you may or may not want the job to be spread over multiple nodes. You can specify the minimum (and optionally the maximum) number of nodes using the &amp;lt;code&amp;gt;-N&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;--nodes&amp;lt;/code&amp;gt; flag. If you provide a single number, it is used as both the minimum and the maximum. For instance:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
This should force your job to be scheduled to a single node.&lt;br /&gt;
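Combining the two options, a sketch that keeps eight tasks together on one node (the numbers are examples):&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
# run 8 tasks, all placed on a single node&lt;br /&gt;
#SBATCH --ntasks=8&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;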
&lt;br /&gt;
Because the cluster has a hybrid configuration, i.e. normal and fat nodes, it may be prudent to schedule your job specifically for one or the other node type, depending for instance on memory requirements. This can be done by using the &amp;lt;code&amp;gt;-C&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;--constraint&amp;lt;/code&amp;gt; flag.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --constraint=normalmem&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
The example above will result in jobs being scheduled to the regular compute nodes. By using &amp;lt;code&amp;gt;largemem&amp;lt;/code&amp;gt; as the constraint, the job will specifically be scheduled to one of the fat nodes.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --output=output_%j.txt&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Instruct SLURM to connect the batch script&#039;s standard output directly to the file name specified in the &amp;quot;filename pattern&amp;quot;. By default both standard output and standard error are directed to a file of the name &amp;quot;slurm-%j.out&amp;quot;, where the &amp;quot;%j&amp;quot; is replaced with the job allocation number. See the --input option for filename specification options.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --error=error_output_%j.txt&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Instruct SLURM to connect the batch script&#039;s standard error directly to the file name specified in the &amp;quot;filename pattern&amp;quot;. By default both standard output and standard error are directed to a file of the name &amp;quot;slurm-%j.out&amp;quot;, where the &amp;quot;%j&amp;quot; is replaced with the job allocation number. See the --input option for filename specification options.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --job-name=calc_pi.py&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Specify a name for the job allocation. The specified name will appear along with the job id number when querying running jobs on the system. The default is the name of the batch script, or just &amp;quot;sbatch&amp;quot; if the script is read on sbatch&#039;s standard input.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --partition=ABGC_Std&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Request a specific partition for the resource allocation. It is preferred to use your organization&#039;s partition.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --mail-type=ALL&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Notify user by email when certain event types occur. Valid type values are BEGIN, END, FAIL, REQUEUE, and ALL (any state change). The user to be notified is indicated with --mail-user.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
#SBATCH --mail-user=email@org.nl&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Email address to use.&lt;br /&gt;
&lt;br /&gt;
=== Submitting ===&lt;br /&gt;
The script, assuming it was named &#039;run_calc_pi.sh&#039;, can then be submitted using the following command:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
sbatch run_calc_pi.sh&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Submitting multiple jobs ===&lt;br /&gt;
Assuming there are 10 job scripts, named runscript_1.sh through runscript_10.sh, all of them can be submitted using the following line of shell code:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;for i in $(seq 1 10); do echo $i; sbatch runscript_$i.sh; done&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
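If you also want to record the job ids (for instance to cancel or monitor them later), &amp;lt;code&amp;gt;sbatch --parsable&amp;lt;/code&amp;gt; prints just the id; a sketch:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
for i in $(seq 1 10); do&lt;br /&gt;
  jobid=$(sbatch --parsable runscript_$i.sh)&lt;br /&gt;
  echo runscript_$i.sh submitted as job $jobid&lt;br /&gt;
done&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;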
&lt;br /&gt;
=== Interactive X11/GUI jobs ===&lt;br /&gt;
Slurm will forward your X11 credentials to the first (or even all) node for a job with the (undocumented) --x11 option.&lt;br /&gt;
For example, an interactive session for 1 hour with HPL using eight cores can be started with:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;module load hpl/2.1&lt;br /&gt;
srun --ntasks=1 --cpus-per-task=8 --time=1:00:00 --pty --x11=first xhpl&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
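Without X11 forwarding, a plain interactive shell on a compute node can be requested in a similar way (partition name and time are examples):&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
srun --partition=ABGC_Std --ntasks=1 --time=1:00:00 --pty bash&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;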
&lt;br /&gt;
== Monitoring submitted jobs ==&lt;br /&gt;
Once a job is submitted, the status can be monitored using the &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; command. The &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; command has a number of parameters for monitoring specific properties of the jobs such as time limit.&lt;br /&gt;
&lt;br /&gt;
=== Generic monitoring of all running jobs ===&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
  squeue&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You should then get a list of the jobs running on the cluster at that time; for the example submitted with the &#039;sbatch&#039; command above, it may look like this:&lt;br /&gt;
    JOBID PARTITION     NAME     USER  ST       TIME  NODES NODELIST(REASON)&lt;br /&gt;
   3396      ABGC BOV-WUR- megen002   R      27:26      1 node004&lt;br /&gt;
   3397      ABGC BOV-WUR- megen002   R      27:26      1 node005&lt;br /&gt;
   3398      ABGC BOV-WUR- megen002   R      27:26      1 node006&lt;br /&gt;
   3399      ABGC BOV-WUR- megen002   R      27:26      1 node007&lt;br /&gt;
   3400      ABGC BOV-WUR- megen002   R      27:26      1 node008&lt;br /&gt;
   3401      ABGC BOV-WUR- megen002   R      27:26      1 node009&lt;br /&gt;
   3385  research BOV-WUR- megen002   R      44:38      1 node049&lt;br /&gt;
   3386  research BOV-WUR- megen002   R      44:38      1 node050&lt;br /&gt;
   3387  research BOV-WUR- megen002   R      44:38      1 node051&lt;br /&gt;
   3388  research BOV-WUR- megen002   R      44:38      1 node052&lt;br /&gt;
   3389  research BOV-WUR- megen002   R      44:38      1 node053&lt;br /&gt;
   3390  research BOV-WUR- megen002   R      44:38      1 node054&lt;br /&gt;
   3391  research BOV-WUR- megen002   R      44:38      3 node[049-051]&lt;br /&gt;
   3392  research BOV-WUR- megen002   R      44:38      3 node[052-054]&lt;br /&gt;
   3393  research BOV-WUR- megen002   R      44:38      1 node001&lt;br /&gt;
   3394  research BOV-WUR- megen002   R      44:38      1 node002&lt;br /&gt;
   3395  research BOV-WUR- megen002   R      44:38      1 node003&lt;br /&gt;
&lt;br /&gt;
=== Monitoring time limit set for a specific job ===&lt;br /&gt;
The default time limit is one hour, so estimated run times need to be specified when submitting jobs. The time limit set for a specific job can be inspected using the &amp;lt;code&amp;gt;squeue&amp;lt;/code&amp;gt; command.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
squeue -l -j 3532&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Information similar to the following should appear:&lt;br /&gt;
  Fri Nov 29 15:41:00 2013&lt;br /&gt;
   JOBID PARTITION     NAME     USER    STATE       TIME TIMELIMIT  NODES NODELIST(REASON)&lt;br /&gt;
   3532      ABGC BOV-WUR- megen002  RUNNING    2:47:03 3-08:00:00      1 node054&lt;br /&gt;
&lt;br /&gt;
=== Query a specific active job: scontrol ===&lt;br /&gt;
Show all the details of a currently active (i.e. not yet completed) job.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
nfs01 ~]$ scontrol show jobid 4241&lt;br /&gt;
JobId=4241 Name=WB20F06&lt;br /&gt;
   UserId=megen002(16795409) GroupId=domain users(16777729)&lt;br /&gt;
   Priority=1 Account=(null) QOS=normal&lt;br /&gt;
   JobState=RUNNING Reason=None Dependency=(null)&lt;br /&gt;
   Requeue=1 Restarts=0 BatchFlag=1 ExitCode=0:0&lt;br /&gt;
   RunTime=02:55:25 TimeLimit=3-08:00:00 TimeMin=N/A&lt;br /&gt;
   SubmitTime=2013-12-09T13:37:29 EligibleTime=2013-12-09T13:37:29&lt;br /&gt;
   StartTime=2013-12-09T13:37:29 EndTime=2013-12-12T21:37:29&lt;br /&gt;
   PreemptTime=None SuspendTime=None SecsPreSuspend=0&lt;br /&gt;
   Partition=research AllocNode:Sid=nfs01:21799&lt;br /&gt;
   ReqNodeList=(null) ExcNodeList=(null)&lt;br /&gt;
   NodeList=node023&lt;br /&gt;
   BatchHost=node023&lt;br /&gt;
   NumNodes=1 NumCPUs=4 CPUs/Task=1 ReqS:C:T=*:*:*&lt;br /&gt;
   MinCPUsNode=1 MinMemoryNode=0 MinTmpDiskNode=0&lt;br /&gt;
   Features=(null) Gres=(null) Reservation=(null)&lt;br /&gt;
   Shared=OK Contiguous=0 Licenses=(null) Network=(null)&lt;br /&gt;
   Command=/lustre/scratch/WUR/ABGC/...&lt;br /&gt;
   WorkDir=/lustre/scratch/WUR/ABGC/...&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Check on a pending job ===&lt;br /&gt;
A submitted job can end up in a pending state when there are not enough resources available for it.&lt;br /&gt;
In this example I submit a job, check its status and, after finding out it is &#039;&#039;&#039;pending&#039;&#039;&#039;, check when it will probably start.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
[@nfs01 jobs]$ sbatch hpl_student.job&lt;br /&gt;
 Submitted batch job 740338&lt;br /&gt;
&lt;br /&gt;
[@nfs01 jobs]$ squeue -l -j 740338&lt;br /&gt;
 Fri Feb 21 15:32:31 2014&lt;br /&gt;
  JOBID PARTITION     NAME     USER    STATE       TIME TIMELIMIT  NODES NODELIST(REASON)&lt;br /&gt;
 740338 ABGC_Stud HPLstude bohme999  PENDING       0:00 1-00:00:00      1 (ReqNodeNotAvail)&lt;br /&gt;
&lt;br /&gt;
[@nfs01 jobs]$ squeue --start -j 740338&lt;br /&gt;
  JOBID PARTITION     NAME     USER  ST           START_TIME  NODES NODELIST(REASON)&lt;br /&gt;
 740338 ABGC_Stud HPLstude bohme999  PD  2014-02-22T15:31:48      1 (ReqNodeNotAvail)&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
So it seems this job will probably start the next day, but that is no guarantee that it will.&lt;br /&gt;
&lt;br /&gt;
== Removing jobs from the queue: scancel ==&lt;br /&gt;
If for some reason you want to remove a job that is either waiting in the queue or already running, you can do so using the &#039;scancel&#039; command. The &#039;scancel&#039; command takes the job id as a parameter. For the example above, this would be done as follows:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
scancel 3401&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
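&amp;lt;code&amp;gt;scancel&amp;lt;/code&amp;gt; can also select jobs by owner or state instead of by job id; for example:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
# cancel all jobs of the current user&lt;br /&gt;
scancel --user=$USER&lt;br /&gt;
# cancel only the jobs that are still pending&lt;br /&gt;
scancel --user=$USER --state=PENDING&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;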
&lt;br /&gt;
== Allocating resources interactively: salloc ==&lt;br /&gt;
&amp;lt; text here&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Get overview of past and current jobs: sacct ==&lt;br /&gt;
To do some accounting on past and present jobs, and to see whether they ran to completion, you can do:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
sacct&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
This should provide information similar to the following:&lt;br /&gt;
&lt;br /&gt;
         JobID    JobName  Partition    Account  AllocCPUS      State ExitCode &lt;br /&gt;
  ------------ ---------- ---------- ---------- ---------- ---------- -------- &lt;br /&gt;
  3385         BOV-WUR-58   research                    12  COMPLETED      0:0 &lt;br /&gt;
  3385.batch        batch                                1  COMPLETED      0:0 &lt;br /&gt;
  3386         BOV-WUR-59   research                    12 CANCELLED+      0:0 &lt;br /&gt;
  3386.batch        batch                                1  CANCELLED     0:15 &lt;br /&gt;
  3528         BOV-WUR-59       ABGC                    16    RUNNING      0:0 &lt;br /&gt;
  3529         BOV-WUR-60       ABGC                    16    RUNNING      0:0&lt;br /&gt;
&lt;br /&gt;
Or in more detail for a specific job:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
sacct --format=jobid,jobname,account,partition,ntasks,alloccpus,elapsed,state,exitcode -j 4220&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
This should provide information about job id 4220:&lt;br /&gt;
&lt;br /&gt;
       JobID    JobName    Account  Partition   NTasks  AllocCPUS    Elapsed      State ExitCode &lt;br /&gt;
  ------------ ---------- ---------- ---------- -------- ---------- ---------- ---------- -------- &lt;br /&gt;
  4220         PreProces+              research                   3   00:30:52  COMPLETED      0:0 &lt;br /&gt;
  4220.batch        batch                              1          1   00:30:52  COMPLETED      0:0&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Job Status Codes&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Typically your job will be either in the RUNNING (R) or PENDING (PD) state. However, here is a breakdown of all the states that your job could be in.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!Code!!State!!Description&lt;br /&gt;
|-&lt;br /&gt;
|CA	||CANCELLED||	Job was explicitly cancelled by the user or system administrator. The job may or may not have been initiated.&lt;br /&gt;
|-&lt;br /&gt;
|CD||	COMPLETED||	Job has terminated all processes on all nodes.&lt;br /&gt;
|-&lt;br /&gt;
|CF||	CONFIGURING||	Job has been allocated resources, but is waiting for them to become ready for use (e.g. booting).&lt;br /&gt;
|-&lt;br /&gt;
|CG||	COMPLETING||	Job is in the process of completing. Some processes on some nodes may still be active.&lt;br /&gt;
|-&lt;br /&gt;
|F||	FAILED||	Job terminated with non-zero exit code or other failure condition.&lt;br /&gt;
|-&lt;br /&gt;
|NF||	NODE_FAIL||	Job terminated due to failure of one or more allocated nodes.&lt;br /&gt;
|-&lt;br /&gt;
|PD||	PENDING||	Job is awaiting resource allocation.&lt;br /&gt;
|-&lt;br /&gt;
|R||	RUNNING||	Job currently has an allocation.&lt;br /&gt;
|-&lt;br /&gt;
|S||	SUSPENDED||	Job has an allocation, but execution has been suspended.&lt;br /&gt;
|-&lt;br /&gt;
|TO||	TIMEOUT||	Job terminated upon reaching its time limit.&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Running MPI jobs on B4F cluster ==&lt;br /&gt;
&lt;br /&gt;
[[MPI_on_B4F_cluster | Main article: MPI on B4F Cluster]]&lt;br /&gt;
&amp;lt; text here &amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Understanding which resources are available to you: sinfo ==&lt;br /&gt;
By using the &#039;sinfo&#039; command you can retrieve information on which &#039;Partitions&#039; are available to you. A &#039;Partition&#039; in SLURM is similar to a &#039;queue&#039; when submitting with the Sun Grid Engine (&#039;qsub&#039;). The different Partitions grant different levels of resource allocation. Not all defined Partitions will be available to any given person: Master students will only have the &#039;student&#039; Partition available, while researchers at the ABGC will have the &#039;student&#039;, &#039;research&#039;, and &#039;ABGC&#039; Partitions available. The higher the level of resource allocation, though, the higher the cost per compute-hour. The default Partition is the &#039;student&#039; partition. A full list of Partitions can be found on the Bright Cluster Manager webpage.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
sinfo&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
  PARTITION AVAIL  TIMELIMIT  NODES  STATE NODELIST&lt;br /&gt;
  student*     up   infinite     12  down* node[043-048,055-060]&lt;br /&gt;
  student*     up   infinite     50   idle fat[001-002],node[001-042,049-054]&lt;br /&gt;
  research     up   infinite     12  down* node[043-048,055-060]&lt;br /&gt;
  research     up   infinite     50   idle fat[001-002],node[001-042,049-054]&lt;br /&gt;
  ABGC         up   infinite     12  down* node[043-048,055-060]&lt;br /&gt;
  ABGC         up   infinite     50   idle fat[001-002],node[001-042,049-054]&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
* [[B4F_cluster | B4F Cluster]]&lt;br /&gt;
* [[BCM_on_B4F_cluster | BCM on B4F cluster]]&lt;br /&gt;
* [[SLURM_Compare | SLURM compared to other common schedulers]]&lt;br /&gt;
* [[Setting_up_Python_virtualenv | Setting up and using a virtual environment for Python3 ]]&lt;br /&gt;
&lt;br /&gt;
== External links ==&lt;br /&gt;
* [http://slurm.schedmd.com Slurm official documentation]&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Simple_Linux_Utility_for_Resource_Management Slurm on Wikipedia]&lt;br /&gt;
* [http://www.youtube.com/watch?v=axWffyrk3aY Slurm Tutorial on Youtube]&lt;/div&gt;</summary>
		<author><name>Hulze001</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=Log_in_to_Anunna&amp;diff=1351</id>
		<title>Log in to Anunna</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=Log_in_to_Anunna&amp;diff=1351"/>
		<updated>2014-09-19T09:14:38Z</updated>

		<summary type="html">&lt;p&gt;Hulze001: /* winSCP */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Log on using ssh ==&lt;br /&gt;
One can log into the [[B4F_cluster | B4F Cluster]] (more specifically the nfs server) using ssh (default port tcp 22). The address of the nfs server is:&lt;br /&gt;
  nfs01.hpcagrogenomics.wur.nl&lt;br /&gt;
&lt;br /&gt;
To log on one has to use an ssh ([http://en.wikipedia.org/wiki/Secure_Shell secure shell]) client. Such clients are always available on Linux or MacOS systems. For Windows an ssh-client may need to be installed. The most popular ssh-client for Windows is [http://www.chiark.greenend.org.uk/~sgtatham/putty/ PuTTY].&lt;br /&gt;
&lt;br /&gt;
Note that access may currently be restricted to certain IP-ranges. Furthermore, ssh-connections may be impossible on systems where port 22 is blocked by a firewall.&lt;br /&gt;
&lt;br /&gt;
The ssh-connection can also be configured to work [[ssh_without_password | without password]], which means that no password needs to be provided at each log-in or secure copy attempt.&lt;br /&gt;
&lt;br /&gt;
  &#039;&#039;&#039;IMPORTANT: the NFS server can only act as access point and is not to be used for any serious CPU or RAM intensive work.&#039;&#039;&#039; &lt;br /&gt;
  &#039;&#039;&#039;Anything requiring even moderate resources should be [[SLURM_on_B4F_cluster  |scheduled using SLURM!]]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== CLI from a Linux/MacOSX terminal ===&lt;br /&gt;
A Command Line Interface ([http://en.wikipedia.org/wiki/Command-line_interface CLI]) ssh client is available from any Linux or MacOSX terminal. Secure shell (ssh) protocols require port 22 to be open. Should a connection be refused, the firewall settings of the system should be checked. Alternatively, local ICT regulations may prohibit the use of port 22. Wageningen UR FB-ICT for instance does not allow traffic through port 22 over WiFi to certain systems.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
ssh [user name]@nfs01.hpcagrogenomics.wur.nl&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
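Typing the full address every time can be avoided with an entry in &amp;lt;code&amp;gt;~/.ssh/config&amp;lt;/code&amp;gt; (the alias and user name are placeholders):&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
# ~/.ssh/config&lt;br /&gt;
Host b4f&lt;br /&gt;
    HostName nfs01.hpcagrogenomics.wur.nl&lt;br /&gt;
    User [user name]&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
After which &amp;lt;code&amp;gt;ssh b4f&amp;lt;/code&amp;gt; suffices.&lt;br /&gt;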
&lt;br /&gt;
=== PuTTY on Windows ===&lt;br /&gt;
Putty is a free, powerful, and widely used SSH client that runs on Windows.&lt;br /&gt;
It is extremely useful for those people who have a computer running Windows&lt;br /&gt;
on their desk but must remotely connect to a computer running UNIX/Linux.&lt;br /&gt;
Putty is one of a set of utilities that all work together to provide&lt;br /&gt;
convenient connectivity between Windows and UNIX/Linux environments.&lt;br /&gt;
Some of these utilities include:&lt;br /&gt;
&lt;br /&gt;
* Putty -- the SSH client&lt;br /&gt;
* Pageant -- the authentication agent used with Putty&lt;br /&gt;
* Puttygen -- the RSA key generation utility&lt;br /&gt;
* Pscp -- the SCP secure file copy utility&lt;br /&gt;
&lt;br /&gt;
Depending on your tasks, the above utilities are probably your minimum&lt;br /&gt;
set of tools to make convenient connections and file transfers between a&lt;br /&gt;
computer running Windows and a computer running UNIX/Linux.&lt;br /&gt;
&lt;br /&gt;
==== Putty Configuration ====&lt;br /&gt;
&lt;br /&gt;
Putty is able to store the configuration or connection profiles for a&lt;br /&gt;
number of remote UNIX/Linux clients.  Each profile can be created&lt;br /&gt;
and later edited by Right-clicking on a putty window header and choosing&lt;br /&gt;
&amp;quot;New Session...&amp;quot;.  The minimum set of items that need to be configured for&lt;br /&gt;
a given connection are:&lt;br /&gt;
&lt;br /&gt;
* Session&lt;br /&gt;
** Host Name [nfs01.hpcagrogenomics.wur.nl]&lt;br /&gt;
** Saved Session name [your name for this connection]&lt;br /&gt;
* Terminal&lt;br /&gt;
** Keyboard&lt;br /&gt;
*** Backspace key -&amp;gt; Control-H&lt;br /&gt;
* Connection&lt;br /&gt;
** Data&lt;br /&gt;
*** Auto-login username [your remote username]&lt;br /&gt;
** SSH&lt;br /&gt;
*** Auth&lt;br /&gt;
**** Private key file for authentication [pathname to your .ppk file]&lt;br /&gt;
&lt;br /&gt;
Obviously, there are many other useful things that can be configured and&lt;br /&gt;
customized in Putty but the above list should be considered a minimum.&lt;br /&gt;
Please note that after making any change to a putty session you must&lt;br /&gt;
explicitly save your changes.&lt;br /&gt;
&lt;br /&gt;
==== Creating an SSH Key Pair ====&lt;br /&gt;
&lt;br /&gt;
Puttygen is the utility used for creating both a .ppk file (private&lt;br /&gt;
key) and the public authorized key information.  Briefly, here are&lt;br /&gt;
the steps needed to create a key pair:&lt;br /&gt;
&lt;br /&gt;
* Run (double-click) the Puttygen application&lt;br /&gt;
* Click on &amp;quot;Generate&amp;quot;&lt;br /&gt;
* Replace the comment with something meaningful -- maybe your name&lt;br /&gt;
* Type in your passphrase (password) twice&lt;br /&gt;
* Save the .ppk file in a secure location on your Windows computer&lt;br /&gt;
* Use your mouse to copy the public key string then paste it into the ~/.ssh/authorized_keys file on the remote computer&lt;br /&gt;
&lt;br /&gt;
Note: The full pathname of this .ppk file is used in the last step of Putty&lt;br /&gt;
configuration as described above.&lt;br /&gt;
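&lt;br /&gt;
The paste step on the remote computer can also be done from the shell; a minimal sketch (the key text is pasted at the cat prompt and ended with Ctrl-D):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
mkdir -p ~/.ssh&lt;br /&gt;
cat &amp;gt;&amp;gt; ~/.ssh/authorized_keys&lt;br /&gt;
chmod 700 ~/.ssh&lt;br /&gt;
chmod 600 ~/.ssh/authorized_keys&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;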
&lt;br /&gt;
==== Using Pageant as an Interface for Putty ====&lt;br /&gt;
&lt;br /&gt;
Pageant is a Putty helper program that is used for two main purposes:&lt;br /&gt;
&lt;br /&gt;
* Pageant is used to hold the passphrase to your key pair&lt;br /&gt;
* Pageant is used as a convenience application to run a Putty session from any of your saved profiles&lt;br /&gt;
&lt;br /&gt;
There is no configuration needed in Pageant.  You simply need to&lt;br /&gt;
run this program at login.  An easy way to do this is to create a&lt;br /&gt;
shortcut in your startup folder that points to the Pageant executable.&lt;br /&gt;
Once this has been done, every time you log in you will see a little&lt;br /&gt;
icon of a computer with a hat in your taskbar.  The first step in using&lt;br /&gt;
this is to right-click on it and select &amp;quot;Add Key&amp;quot;.  Navigate to your&lt;br /&gt;
.ppk file and select &amp;quot;Open&amp;quot;.  It will prompt you for your passphrase.&lt;br /&gt;
At this point your passphrase has been conveniently stored for you so&lt;br /&gt;
that when you use Putty to connect to your various remote computers,&lt;br /&gt;
you won&#039;t have to type in your passphrase over and over again.&lt;br /&gt;
The next step is to right-click on the Pageant icon again and select&lt;br /&gt;
one of your saved sessions.  If you have done everything correctly&lt;br /&gt;
you will be logged right in so that you no longer have to type your&lt;br /&gt;
passphrase.&lt;br /&gt;
&lt;br /&gt;
== Log on to worker nodes ==&lt;br /&gt;
&lt;br /&gt;
Once logged into the nfs server, it is possible to log on to any of the worker nodes. Logging on to the worker nodes does not require password authentication; you should therefore not be prompted for a password. Before logging on to a node, check whether that node is busy. The status of the nodes can be ascertained through the [[ BCM_on_B4F_cluster|BCM Portal]].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
ssh [user name]@[node name]&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For instance:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
ssh dummy001@node049&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
It is not permitted to run jobs outside the scheduling software (Slurm), so logging on to a worker node is for inspecting running jobs only.&lt;br /&gt;
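&lt;br /&gt;
Since the scheduler is Slurm, node and job status can also be checked from the command line on the nfs server (assuming the standard Slurm client tools are on your path):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
sinfo                  # overview of partitions and node states&lt;br /&gt;
squeue -u [user name]  # your running jobs and the nodes they occupy&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;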
&lt;br /&gt;
== File transfer using ssh-based file transfer protocols ==&lt;br /&gt;
=== Copying files to/from the cluster: scp ===&lt;br /&gt;
&lt;br /&gt;
From any POSIX-compliant (Linux/MacOSX) terminal, files and folders can be transferred to and from the cluster using an ssh-based file copying protocol called scp ([http://en.wikipedia.org/wiki/Secure_copy secure copy]). For instance, copying a folder containing several files from scomp1090/lx6 can be achieved like this:&lt;br /&gt;
&lt;br /&gt;
The syntax of the scp command requires from-to order:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
scp &amp;lt;source&amp;gt; &amp;lt;destination&amp;gt;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
scp -pr /home/WUR/[username]/folder_to_transfer [username]@nfs01.hpcagrogenomics.wur.nl:/lustre/scratch/WUR/ABGC/&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This example assumes a user that is part of the ABGC user group. See the [[Lustre_PFS_layout | Lustre Parallel File System layout]] page for further details. The -p flag preserves file metadata such as timestamps. The -r flag enables recursive copying. Further options can be found in the [http://en.wikipedia.org/wiki/Man_page man pages].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
man scp&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
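&lt;br /&gt;
Copying in the opposite direction works the same way; for instance, to fetch a folder from the cluster back to a local directory (paths follow the example above):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
scp -pr [username]@nfs01.hpcagrogenomics.wur.nl:/lustre/scratch/WUR/ABGC/folder_to_transfer /home/WUR/[username]/&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;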
&lt;br /&gt;
=== rsync ===&lt;br /&gt;
The [http://en.wikipedia.org/wiki/Rsync rsync protocol], like the scp protocol, allows CLI-based copying of files. The rsync protocol, however, only transfers those files that have changed between the systems, i.e. it synchronises the files, hence the name. The rsync protocol is very well suited for making regular backups and file syncs between file systems. Like the scp command, the syntax is in from-to order.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
rsync &amp;lt;source&amp;gt; &amp;lt;destination&amp;gt;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
e.g.:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
rsync -av /home/WUR/[username]/folder_to_transfer [username]@nfs01.hpcagrogenomics.wur.nl:/lustre/scratch/WUR/ABGC/&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
The -a (archive) flag preserves file metadata and, amongst other things, enables recursive copying. The -v flag provides verbose output. Further options can be found in the [http://en.wikipedia.org/wiki/Man_page man pages].&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
man rsync&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
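&lt;br /&gt;
Because rsync overwrites changed files at the destination, it can be useful to preview a transfer first; adding the -n (dry-run) flag lists what would be transferred without copying anything:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
rsync -avn /home/WUR/[username]/folder_to_transfer [username]@nfs01.hpcagrogenomics.wur.nl:/lustre/scratch/WUR/ABGC/&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;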
&lt;br /&gt;
=== WinSCP ===&lt;br /&gt;
[http://en.wikipedia.org/wiki/WinSCP WinSCP] is a free and open source (S)FTP client for Microsoft Windows. By providing the hostname (nfs01.hpcagrogenomics.wur.nl), your username, and your password, using the SFTP protocol on port 22, you can log in. After login, files can be transferred between a local system (PC) and the cluster.&lt;br /&gt;
&lt;br /&gt;
=== FileZilla ===&lt;br /&gt;
[http://en.wikipedia.org/wiki/Filezilla FileZilla] is a free and open source graphical (S)FTP client. It is available for Linux, MacOSX, and Windows. By providing the address, username, and password, files can be transferred between a local system and the cluster. Furthermore, the graphical interface allows for easy browsing of files on the cluster. Detailed instructions can be found on the [https://wiki.filezilla-project.org/Using FileZilla Wiki].&lt;br /&gt;
&lt;br /&gt;
=== Samba/CIFS based protocols ===&lt;br /&gt;
The Common Internet File System ([http://en.wikipedia.org/wiki/Cifs CIFS]) is commonly used in and between Windows systems for file sharing. It is only available to clients within WURnet. If you enter the UNC path \\nfs01.hpcagrogenomics.wur.nl\[username] in your Windows client, it will list the available (authenticated) shares (your home directory).&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
* [[B4F_cluster | B4F Cluster]]&lt;br /&gt;
* [[BCM_on_B4F_cluster | Monitor cluster status with BCM]]&lt;br /&gt;
* [[SLURM_on_B4F_cluster | Submit jobs with Slurm]]&lt;br /&gt;
* [[ssh_without_password | ssh without password]]&lt;br /&gt;
&lt;br /&gt;
== External links ==&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Secure_Shell secure shell on Wikipedia]&lt;br /&gt;
* [http://www.chiark.greenend.org.uk/~sgtatham/putty/ PuTTY homepage]&lt;br /&gt;
* [http://winscp.net/eng/index.php WinSCP homepage]&lt;br /&gt;
* [https://filezilla-project.org FileZilla homepage]&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Cifs The Common Internet File System (CIFS) on Wikipedia]&lt;/div&gt;</summary>
		<author><name>Hulze001</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=Log_in_to_Anunna&amp;diff=1350</id>
		<title>Log in to Anunna</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=Log_in_to_Anunna&amp;diff=1350"/>
		<updated>2014-09-19T09:01:13Z</updated>

		<summary type="html">&lt;p&gt;Hulze001: /* Samba/CIFS based protocols */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Log on using ssh ==&lt;br /&gt;
One can log into the [[B4F_cluster | B4F Cluster]] (more specifically the nfs server) using ssh (default port tcp 22). The address of the nfs server is:&lt;br /&gt;
  nfs01.hpcagrogenomics.wur.nl&lt;br /&gt;
&lt;br /&gt;
To log on one has to use an ssh ([http://en.wikipedia.org/wiki/Secure_Shell secure shell]) client. Such clients are always available on Linux or MacOSX systems. For Windows an ssh client may need to be installed. The most popular ssh client for Windows is [http://www.chiark.greenend.org.uk/~sgtatham/putty/ PuTTY].&lt;br /&gt;
&lt;br /&gt;
Note that access may currently be restricted to certain IP ranges. Furthermore, ssh connections may be blocked on systems where port 22 is unavailable due to firewall restrictions.&lt;br /&gt;
&lt;br /&gt;
The ssh-connection can also be configured to work [[ssh_without_password | without password]], which means that no password needs to be provided at each log-in or secure copy attempt.&lt;br /&gt;
&lt;br /&gt;
  &#039;&#039;&#039;IMPORTANT: the NFS server can only act as access point and is not to be used for any serious CPU or RAM intensive work.&#039;&#039;&#039; &lt;br /&gt;
  &#039;&#039;&#039;Anything requiring even moderate resources should be [[SLURM_on_B4F_cluster  |scheduled using SLURM!]]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== CLI from a Linux/MacOSX terminal ===&lt;br /&gt;
A Command Line Interface ([http://en.wikipedia.org/wiki/Command-line_interface CLI]) ssh client is available from any Linux or MacOSX terminal. Secure shell (ssh) protocols require port 22 to be open. Should a connection be refused, the firewall settings of the system should be checked. Alternatively, local ICT regulations may prohibit the use of port 22. Wageningen UR FB-ICT for instance does not allow traffic through port 22 over WiFi to certain systems.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
ssh [user name]@nfs01.hpcagrogenomics.wur.nl&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
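&lt;br /&gt;
If the connection is refused, a quick way to test whether port 22 is reachable at all (assuming the common netcat utility is installed locally) is:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
nc -zv nfs01.hpcagrogenomics.wur.nl 22&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;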
&lt;br /&gt;
=== PuTTY on Windows ===&lt;br /&gt;
Putty is a free, powerful, and widely used SSH client that runs on Windows.&lt;br /&gt;
It is extremely useful for those people who have a computer running Windows&lt;br /&gt;
on their desk but must remotely connect to a computer running UNIX/Linux.&lt;br /&gt;
Putty is one of a set of utilities that all work together to provide&lt;br /&gt;
convenient connectivity between Windows and UNIX/Linux environments.&lt;br /&gt;
Some of these utilities include:&lt;br /&gt;
&lt;br /&gt;
* Putty -- the SSH client&lt;br /&gt;
* Pageant -- the authentication agent used with Putty&lt;br /&gt;
* Puttygen -- the RSA key generation utility&lt;br /&gt;
* Pscp -- the SCP secure file copy utility&lt;br /&gt;
&lt;br /&gt;
Depending on your tasks, the above utilities are probably your minimum&lt;br /&gt;
set of tools to make convenient connections and file transfers between a&lt;br /&gt;
computer running Windows and a computer running UNIX/Linux.&lt;br /&gt;
&lt;br /&gt;
==== Putty Configuration ====&lt;br /&gt;
&lt;br /&gt;
Putty is able to store the configuration or connection profiles for a&lt;br /&gt;
number of remote UNIX/Linux clients.  Each profile can be created&lt;br /&gt;
and later edited by right-clicking on a Putty window header and choosing&lt;br /&gt;
&amp;quot;New Session...&amp;quot;.  The minimum set of items that need to be configured for&lt;br /&gt;
a given connection are:&lt;br /&gt;
&lt;br /&gt;
* Session&lt;br /&gt;
** Host Name [nfs01.hpcagrogenomics.wur.nl]&lt;br /&gt;
** Saved Session name [your name for this connection]&lt;br /&gt;
* Terminal&lt;br /&gt;
** Keyboard&lt;br /&gt;
*** Backspace key -&amp;gt; Control-H&lt;br /&gt;
* Connection&lt;br /&gt;
** Data&lt;br /&gt;
*** Auto-login username [your remote username]&lt;br /&gt;
** SSH&lt;br /&gt;
*** Auth&lt;br /&gt;
**** Private key file for authentication [pathname to your .ppk file]&lt;br /&gt;
&lt;br /&gt;
Obviously, there are many other useful things that can be configured and&lt;br /&gt;
customized in Putty but the above list should be considered a minimum.&lt;br /&gt;
Please note that after making any change to a putty session you must&lt;br /&gt;
explicitly save your changes.&lt;br /&gt;
&lt;br /&gt;
==== Creating an SSH Key Pair ====&lt;br /&gt;
&lt;br /&gt;
Puttygen is the utility used for creating both a .ppk file (private&lt;br /&gt;
key) and the public authorized key information.  Briefly, here are&lt;br /&gt;
the steps needed to create a key pair:&lt;br /&gt;
&lt;br /&gt;
* Run (double-click) the Puttygen application&lt;br /&gt;
* Click on &amp;quot;Generate&amp;quot;&lt;br /&gt;
* Replace the comment with something meaningful -- maybe your name&lt;br /&gt;
* Type in your passphrase (password) twice&lt;br /&gt;
* Save the .ppk file in a secure location on your Windows computer&lt;br /&gt;
* Use your mouse to copy the public key string then paste it into the ~/.ssh/authorized_keys file on the remote computer&lt;br /&gt;
&lt;br /&gt;
Note: The full pathname of this .ppk file is used in the last step of Putty&lt;br /&gt;
configuration as described above.&lt;br /&gt;
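&lt;br /&gt;
The paste step on the remote computer can also be done from the shell; a minimal sketch (the key text is pasted at the cat prompt and ended with Ctrl-D):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
mkdir -p ~/.ssh&lt;br /&gt;
cat &amp;gt;&amp;gt; ~/.ssh/authorized_keys&lt;br /&gt;
chmod 700 ~/.ssh&lt;br /&gt;
chmod 600 ~/.ssh/authorized_keys&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;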
&lt;br /&gt;
==== Using Pageant as an Interface for Putty ====&lt;br /&gt;
&lt;br /&gt;
Pageant is a Putty helper program that is used for two main purposes:&lt;br /&gt;
&lt;br /&gt;
* Pageant is used to hold the passphrase to your key pair&lt;br /&gt;
* Pageant is used as a convenience application to run a Putty session from any of your saved profiles&lt;br /&gt;
&lt;br /&gt;
There is no configuration needed in Pageant.  You simply need to&lt;br /&gt;
run this program at login.  An easy way to do this is to create a&lt;br /&gt;
shortcut in your startup folder that points to the Pageant executable.&lt;br /&gt;
Once this has been done, every time you log in you will see a little&lt;br /&gt;
icon of a computer with a hat in your taskbar.  The first step in using&lt;br /&gt;
this is to right-click on it and select &amp;quot;Add Key&amp;quot;.  Navigate to your&lt;br /&gt;
.ppk file and select &amp;quot;Open&amp;quot;.  It will prompt you for your passphrase.&lt;br /&gt;
At this point your passphrase has been conveniently stored for you so&lt;br /&gt;
that when you use Putty to connect to your various remote computers,&lt;br /&gt;
you won&#039;t have to type in your passphrase over and over again.&lt;br /&gt;
The next step is to right-click on the Pageant icon again and select&lt;br /&gt;
one of your saved sessions.  If you have done everything correctly&lt;br /&gt;
you will be logged right in so that you no longer have to type your&lt;br /&gt;
passphrase.&lt;br /&gt;
&lt;br /&gt;
== Log on to worker nodes ==&lt;br /&gt;
&lt;br /&gt;
Once logged into the nfs server, it is possible to log on to any of the worker nodes. Logging on to the worker nodes does not require password authentication; you should therefore not be prompted for a password. Before logging on to a node, check whether that node is busy. The status of the nodes can be ascertained through the [[ BCM_on_B4F_cluster|BCM Portal]].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
ssh [user name]@[node name]&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For instance:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
ssh dummy001@node049&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
It is not permitted to run jobs outside the scheduling software (Slurm), so logging on to a worker node is for inspecting running jobs only.&lt;br /&gt;
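&lt;br /&gt;
Since the scheduler is Slurm, node and job status can also be checked from the command line on the nfs server (assuming the standard Slurm client tools are on your path):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
sinfo                  # overview of partitions and node states&lt;br /&gt;
squeue -u [user name]  # your running jobs and the nodes they occupy&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;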
&lt;br /&gt;
== File transfer using ssh-based file transfer protocols ==&lt;br /&gt;
=== Copying files to/from the cluster: scp ===&lt;br /&gt;
&lt;br /&gt;
From any POSIX-compliant (Linux/MacOSX) terminal, files and folders can be transferred to and from the cluster using an ssh-based file copying protocol called scp ([http://en.wikipedia.org/wiki/Secure_copy secure copy]). For instance, copying a folder containing several files from scomp1090/lx6 can be achieved like this:&lt;br /&gt;
&lt;br /&gt;
The syntax of the scp command requires from-to order:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
scp &amp;lt;source&amp;gt; &amp;lt;destination&amp;gt;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
scp -pr /home/WUR/[username]/folder_to_transfer [username]@nfs01.hpcagrogenomics.wur.nl:/lustre/scratch/WUR/ABGC/&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This example assumes a user that is part of the ABGC user group. See the [[Lustre_PFS_layout | Lustre Parallel File System layout]] page for further details. The -p flag preserves file metadata such as timestamps. The -r flag enables recursive copying. Further options can be found in the [http://en.wikipedia.org/wiki/Man_page man pages].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
man scp&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
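&lt;br /&gt;
Copying in the opposite direction works the same way; for instance, to fetch a folder from the cluster back to a local directory (paths follow the example above):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
scp -pr [username]@nfs01.hpcagrogenomics.wur.nl:/lustre/scratch/WUR/ABGC/folder_to_transfer /home/WUR/[username]/&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;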
&lt;br /&gt;
=== rsync ===&lt;br /&gt;
The [http://en.wikipedia.org/wiki/Rsync rsync protocol], like the scp protocol, allows CLI-based copying of files. The rsync protocol, however, only transfers those files that have changed between the systems, i.e. it synchronises the files, hence the name. The rsync protocol is very well suited for making regular backups and file syncs between file systems. Like the scp command, the syntax is in from-to order.&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
rsync &amp;lt;source&amp;gt; &amp;lt;destination&amp;gt;&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
e.g.:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
rsync -av /home/WUR/[username]/folder_to_transfer [username]@nfs01.hpcagrogenomics.wur.nl:/lustre/scratch/WUR/ABGC/&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
The -a (archive) flag preserves file metadata and, amongst other things, enables recursive copying. The -v flag provides verbose output. Further options can be found in the [http://en.wikipedia.org/wiki/Man_page man pages].&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
man rsync&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
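&lt;br /&gt;
Because rsync overwrites changed files at the destination, it can be useful to preview a transfer first; adding the -n (dry-run) flag lists what would be transferred without copying anything:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
rsync -avn /home/WUR/[username]/folder_to_transfer [username]@nfs01.hpcagrogenomics.wur.nl:/lustre/scratch/WUR/ABGC/&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;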
&lt;br /&gt;
=== WinSCP ===&lt;br /&gt;
[http://en.wikipedia.org/wiki/WinSCP WinSCP] is a free and open source (S)FTP client for Microsoft Windows. By providing the hostname (nfs01.hpcagrogenomics.wur.nl), your username, and your password, using the SFTP protocol on port 22, you can log in. After login, files can be transferred between a local system (PC) and the cluster.&lt;br /&gt;
&lt;br /&gt;
=== FileZilla ===&lt;br /&gt;
[http://en.wikipedia.org/wiki/Filezilla FileZilla] is a free and open source graphical (S)FTP client. It is available for Linux, MacOSX, and Windows. By providing the address, username, and password, files can be transferred between a local system and the cluster. Furthermore, the graphical interface allows for easy browsing of files on the cluster. Detailed instructions can be found on the [https://wiki.filezilla-project.org/Using FileZilla Wiki].&lt;br /&gt;
&lt;br /&gt;
=== Samba/CIFS based protocols ===&lt;br /&gt;
The Common Internet File System ([http://en.wikipedia.org/wiki/Cifs CIFS]) is commonly used in and between Windows systems for file sharing. It is only available to clients within WURnet. If you enter the UNC path \\nfs01.hpcagrogenomics.wur.nl\[username] in your Windows client, it will list the available (authenticated) shares (your home directory).&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
* [[B4F_cluster | B4F Cluster]]&lt;br /&gt;
* [[BCM_on_B4F_cluster | Monitor cluster status with BCM]]&lt;br /&gt;
* [[SLURM_on_B4F_cluster | Submit jobs with Slurm]]&lt;br /&gt;
* [[ssh_without_password | ssh without password]]&lt;br /&gt;
&lt;br /&gt;
== External links ==&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Secure_Shell secure shell on Wikipedia]&lt;br /&gt;
* [http://www.chiark.greenend.org.uk/~sgtatham/putty/ PuTTY homepage]&lt;br /&gt;
* [http://winscp.net/eng/index.php WinSCP homepage]&lt;br /&gt;
* [https://filezilla-project.org FileZilla homepage]&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Cifs The Common Internet File System (CIFS) on Wikipedia]&lt;/div&gt;</summary>
		<author><name>Hulze001</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=Globally_installed_software&amp;diff=1349</id>
		<title>Globally installed software</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=Globally_installed_software&amp;diff=1349"/>
		<updated>2014-09-03T08:54:48Z</updated>

		<summary type="html">&lt;p&gt;Hulze001: /* Adding a custom module directory to your environment */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Available as modules ==&lt;br /&gt;
* gcc/4.8.1&lt;br /&gt;
* python/2.7.6&lt;br /&gt;
* python/3.3.3&lt;br /&gt;
* R/3.0.2&lt;br /&gt;
&lt;br /&gt;
== Globally installed on all nodes ==&lt;br /&gt;
&lt;br /&gt;
* Perl5.10&lt;br /&gt;
* pigz&lt;br /&gt;
* Python2.6&lt;br /&gt;
* BioPerl v1.61&lt;br /&gt;
* [http://samtools.sourceforge.net/tabix.shtml bgzip]&lt;br /&gt;
* [http://samtools.sourceforge.net/tabix.shtml tabix v0.2.5]&lt;br /&gt;
&lt;br /&gt;
== Available as global SHARED modules ==&lt;br /&gt;
Software can be deposited in:&lt;br /&gt;
  /cm/shared/apps/SHARED/&lt;br /&gt;
&lt;br /&gt;
Modules can be found in:&lt;br /&gt;
  /cm/shared/modulefiles/SHARED/&lt;br /&gt;
&lt;br /&gt;
{| width=&amp;quot;90%&amp;quot;&lt;br /&gt;
|- valign=&amp;quot;top&amp;quot;&lt;br /&gt;
| width=&amp;quot;30%&amp;quot; |&lt;br /&gt;
* [[allpathslg_48961 | ALLPATHS-LG/48961]]&lt;br /&gt;
* [[augustus_2.7 | augustus/2.7]]&lt;br /&gt;
* [[bedtools2.18 | bedtools/2.18.0]]&lt;br /&gt;
* [[BLAST | BLAST+/2.2.28]]   &lt;br /&gt;
* [[blat_v35 |blat/v35]]&lt;br /&gt;
* [[bowtie2_v2.2.1 | bowtie/2-2.2.1]]&lt;br /&gt;
* [[bowtie1_v1.0.0 | bowtie/1-1.0.0]]&lt;br /&gt;
* [[bwa_5.9 | bwa/0.5.9]]   &lt;br /&gt;
* [[bwa_7.5 | bwa/0.7.5a]]&lt;br /&gt;
* [[cegma_2.4 | cegma/2.4]]   &lt;br /&gt;
* [[Cufflinks | cufflinks/2.1.1]]&lt;br /&gt;
&lt;br /&gt;
| width=&amp;quot;30%&amp;quot; |&lt;br /&gt;
* [[exonerate_2.2.0 | exonerate/2.2.0-x86_64]] &lt;br /&gt;
* [[geneid_1.4.4 | geneid/1.4.4]]&lt;br /&gt;
* [[genewise_2.2.3 | genewise/2.2.3-rc7]]     &lt;br /&gt;
* [[gmap_2014-01-21 | gmap/2014-01-21]]&lt;br /&gt;
* [[hmmer_3.1 | hmmer/3.1b1]]&lt;br /&gt;
* [[jellyfish_2.1.1 | jellyfish/2.1.1]]&lt;br /&gt;
* [[MAFFT_7.130 | MAFFT/7.130]]&lt;br /&gt;
* [[maker_2.2.8 | maker/2.28]]&lt;br /&gt;
* [[Muscle_3.8.31 | muscle/3.8.31]]     &lt;br /&gt;
* [[Plink_1.07 | Plink/1.07]]&lt;br /&gt;
* [[Provean_1.1.3 | provean/1.1.3]]  &lt;br /&gt;
&lt;br /&gt;
| width=&amp;quot;30%&amp;quot; |&lt;br /&gt;
* [[RepeatMasker_4.0.3 | RepeatMasker/4.0.3]]&lt;br /&gt;
* [[RepeatModeler_1.0.7 | RepeatModeler/1.0.7]]&lt;br /&gt;
* [[RAxML8.0.0 | RAxML/8.0.0]]&lt;br /&gt;
* [[samtools v0.1.12a | samtools/0.1.12a]]&lt;br /&gt;
* [[samtools v0.1.19 | samtools/0.1.19]]&lt;br /&gt;
* [[snap | snap/2013-11-29]]&lt;br /&gt;
* [[soapdenovo2_r240 | SOAPdenovo2/r240]]&lt;br /&gt;
* [[sra_toolkit_2.3.4 | sra-toolkit/2.3.4]]&lt;br /&gt;
* [[TopHat_2.0.11 | tophat/2.0.11]]&lt;br /&gt;
* [[Trinity_r20131110 | Trinity/r20131110]]&lt;br /&gt;
* [[wgs_assembler_8.1 | wgs-assembler/8.1]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Adding a custom module directory to your environment ==&lt;br /&gt;
To allow the &amp;lt;code&amp;gt;module&amp;lt;/code&amp;gt; program to find the custom module directory, the location of that directory has to be added to the &amp;lt;code&amp;gt;MODULEPATH&amp;lt;/code&amp;gt; variable. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
export MODULEPATH=$MODULEPATH:/cm/shared/apps/WUR/ABGC/modulefiles&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
This can be made permanent by adding this line to the &amp;lt;code&amp;gt;.bash_profile&amp;lt;/code&amp;gt; file in the root of your home folder. To then pick up the modified &amp;lt;code&amp;gt;MODULEPATH&amp;lt;/code&amp;gt; variable you have to source &amp;lt;code&amp;gt;.bash_profile&amp;lt;/code&amp;gt; again:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
source .bash_profile&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
This needs to be done only in terminals that are already open. The next time you log in, &amp;lt;code&amp;gt;.bash_profile&amp;lt;/code&amp;gt; will be loaded automatically.&lt;br /&gt;
&lt;br /&gt;
You can check whether the modules are found:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
module avail&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
This should give output that includes something similar to this:&lt;br /&gt;
&lt;br /&gt;
  ---------------------------------------- /cm/shared/modulefiles/ -----------------------------------------&lt;br /&gt;
  ALLPATHS-LG/48961      bwa/0.7.5a             jellyfish/2.1.1        RepeatMasker/4.0.3&lt;br /&gt;
  augustus/2.7           cegma/2.4              MAFFT/7.130            RepeatModeler/1.0.7&lt;br /&gt;
  bedtools/2.18.0        cufflinks/2.1.1        maker/2.28             samtools/0.1.12a&lt;br /&gt;
  BLAST+/2.2.28          exonerate/2.2.0-x86_64 muscle/3.8.31          samtools/0.1.19&lt;br /&gt;
  blat/v35               geneid/1.4.4           plink/1.07             snap/2013-11-29&lt;br /&gt;
  bowtie/2-2.2.1         genewise/2.2.3-rc7     provean/1.1.3          SOAPdenovo2/r240&lt;br /&gt;
  bwa/0.5.9              hmmer/3.1b1            RAxML/8.0.0            tophat/2.0.11&lt;br /&gt;
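&lt;br /&gt;
Once a module shows up in the listing, it can be loaded into, and removed from, the current session; bwa/0.7.5a is used as an example here:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
module load bwa/0.7.5a&lt;br /&gt;
module list               # verify the module is loaded&lt;br /&gt;
module unload bwa/0.7.5a&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;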
&lt;br /&gt;
== See also ==&lt;br /&gt;
* [[Main_Page#Using_the_B4F_Cluster | Using the B4F Cluster]]&lt;br /&gt;
* [[Domain_specific_software_on_B4Fcluster_installation_by_users | Installing domain specific software: installation by users]]&lt;br /&gt;
* [[Setting local variables]]&lt;br /&gt;
* [[Installing_R_packages_locally | Installing R packages locally]]&lt;br /&gt;
* [[Setting_up_Python_virtualenv | Setting up and using a virtual environment for Python3 ]]&lt;br /&gt;
* [[ABGC_modules | modules specific for ABGC ]]&lt;br /&gt;
&lt;br /&gt;
== External links ==&lt;/div&gt;</summary>
		<author><name>Hulze001</name></author>
	</entry>
	<entry>
		<id>https://wiki.anunna.wur.nl/index.php?title=ABGC_modules&amp;diff=1346</id>
		<title>ABGC modules</title>
		<link rel="alternate" type="text/html" href="https://wiki.anunna.wur.nl/index.php?title=ABGC_modules&amp;diff=1346"/>
		<updated>2014-07-11T14:03:34Z</updated>

		<summary type="html">&lt;p&gt;Hulze001: /* Modules available for ABGC */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;All ABGC modules can be found in:&lt;br /&gt;
&lt;br /&gt;
  /cm/shared/apps/WUR/ABGC/modulefiles&lt;br /&gt;
&lt;br /&gt;
== Modules available for ABGC ==&lt;br /&gt;
&lt;br /&gt;
* [[asreml_3.0 | asreml/3.0fl-64]]&lt;br /&gt;
* [[asreml_4.0 | asreml/4.0kr]]&lt;br /&gt;
* [[Perl5.10_WUR_module | Perl/5.10.1_wur]]&lt;br /&gt;
* [[R3.0.2_WUR_module | R/3.0.2_wur]]&lt;br /&gt;
&lt;br /&gt;
== Adding a custom module directory to your environment ==&lt;br /&gt;
To allow the &amp;lt;code&amp;gt;module&amp;lt;/code&amp;gt; program to find the custom module directory, the location of that directory has to be added to the &amp;lt;code&amp;gt;MODULEPATH&amp;lt;/code&amp;gt; variable. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
export MODULEPATH=$MODULEPATH:/cm/shared/apps/WUR/ABGC/modulefiles&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
This can be made permanent by adding this line to the &amp;lt;code&amp;gt;.bash_profile&amp;lt;/code&amp;gt; file in the root of your home folder. To then pick up the modified &amp;lt;code&amp;gt;MODULEPATH&amp;lt;/code&amp;gt; variable you have to source &amp;lt;code&amp;gt;.bash_profile&amp;lt;/code&amp;gt; again:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
source .bash_profile&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
This needs to be done only in terminals that are already open. The next time you log in, &amp;lt;code&amp;gt;.bash_profile&amp;lt;/code&amp;gt; will be loaded automatically.&lt;br /&gt;
&lt;br /&gt;
You can check whether the modules are found:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
module avail&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
This should give output that includes something similar to this:&lt;br /&gt;
&lt;br /&gt;
  ----------------------------------- /cm/shared/apps/WUR/ABGC/modulefiles -----------------------------------&lt;br /&gt;
  bwa/0.5.9  bwa/0.7.5a&lt;br /&gt;
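&lt;br /&gt;
Once a module shows up in the listing, it can be loaded into, and removed from, the current session; bwa/0.7.5a is used as an example here:&lt;br /&gt;
&amp;lt;source lang=&#039;bash&#039;&amp;gt;&lt;br /&gt;
module load bwa/0.7.5a&lt;br /&gt;
module list               # verify the module is loaded&lt;br /&gt;
module unload bwa/0.7.5a&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;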
&lt;br /&gt;
== See also ==&lt;br /&gt;
* [[Main_Page#Using_the_B4F_Cluster | Using the B4F Cluster]]&lt;br /&gt;
* [[Globally_installed_software | Globally installed software]]&lt;br /&gt;
* [[Domain_specific_software_on_B4Fcluster_installation_by_users | Installing domain specific software: installation by users]]&lt;br /&gt;
* [[Setting local variables]]&lt;br /&gt;
* [[Installing_R_packages_locally | Installing R packages locally]]&lt;br /&gt;
* [[Setting_up_Python_virtualenv | Setting up and using a virtual environment for Python3 ]]&lt;br /&gt;
&lt;br /&gt;
== External links ==&lt;/div&gt;</summary>
		<author><name>Hulze001</name></author>
	</entry>
</feed>