MPI on B4F cluster

Check which modules are currently loaded, clear the environment, and load the compiler, OpenMPI and Slurm modules:

 module list
 module purge
 module load gcc/4.8.1 openmpi/gcc/64/1.6.5 slurm/2.5.7
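
The page does not show hello_mpi.c itself; a minimal source consistent with the output further below (the printf format is inferred from that output, so treat this as a sketch) could look like:

 #include <mpi.h>
 #include <stdio.h>
 
 int main(int argc, char *argv[])
 {
     int rank, size, len;
     char name[MPI_MAX_PROCESSOR_NAME];
 
     /* Initialise MPI, then query this process' rank, the total
        number of ranks, and the host it is running on. */
     MPI_Init(&argc, &argv);
     MPI_Comm_rank(MPI_COMM_WORLD, &rank);
     MPI_Comm_size(MPI_COMM_WORLD, &size);
     MPI_Get_processor_name(name, &len);
 
     printf("Hello MPI! Process %d of %d on %s\n", rank, size, name);
 
     MPI_Finalize();
     return 0;
 }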

Compile the example and use ldd to verify that the binary links against the OpenMPI library from the module just loaded (libmpi.so.1 should resolve under /cm/shared/apps/openmpi/gcc/64/1.6.5):

 mpicc hello_mpi.c -o test_hello_world
 ldd test_hello_world

 linux-vdso.so.1 =>  (0x00002aaaaaacb000)
 libmpi.so.1 => /cm/shared/apps/openmpi/gcc/64/1.6.5/lib64/libmpi.so.1 (0x00002aaaaaccd000)
 libdl.so.2 => /lib64/libdl.so.2 (0x00002aaaab080000)
 libm.so.6 => /lib64/libm.so.6 (0x00002aaaab284000)
 libnuma.so.1 => /usr/lib64/libnuma.so.1 (0x0000003e29400000)
 librt.so.1 => /lib64/librt.so.1 (0x00002aaaab509000)
 libnsl.so.1 => /lib64/libnsl.so.1 (0x00002aaaab711000)
 libutil.so.1 => /lib64/libutil.so.1 (0x00002aaaab92a000)
 libpthread.so.0 => /lib64/libpthread.so.0 (0x00002aaaabb2e000)
 libc.so.6 => /lib64/libc.so.6 (0x00002aaaabd4b000)
 /lib64/ld-linux-x86-64.so.2 (0x00002aaaaaaab000)

Run the binary on two nodes with four tasks per node (8 MPI ranks in total) on the ABGC partition; note that the ranks' output arrives in no particular order:

 srun --nodes=2 --ntasks-per-node=4 --partition=ABGC --mpi=openmpi ./test_hello_world

 Hello MPI! Process 4 of 8 on node011
 Hello MPI! Process 1 of 8 on node010
 Hello MPI! Process 7 of 8 on node011
 Hello MPI! Process 6 of 8 on node011
 Hello MPI! Process 5 of 8 on node011
 Hello MPI! Process 2 of 8 on node010
 Hello MPI! Process 0 of 8 on node010
 Hello MPI! Process 3 of 8 on node010
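
The same job can also be submitted non-interactively with sbatch. A minimal batch script sketch (the filename is illustrative; the directives mirror the srun options above, and the module and srun lines match the steps on this page):

 #!/bin/bash
 #SBATCH --nodes=2
 #SBATCH --ntasks-per-node=4
 #SBATCH --partition=ABGC
 
 # Recreate the build environment on the compute nodes.
 module purge
 module load gcc/4.8.1 openmpi/gcc/64/1.6.5 slurm/2.5.7
 
 # srun inherits the node/task geometry from the #SBATCH directives.
 srun --mpi=openmpi ./test_hello_world

Submit the script with sbatch; the rank output then ends up in the job's slurm-<jobid>.out file instead of the terminal.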