Using Infiniband on maxwell
Infiniband is a high-performance interconnect that can be used as a
replacement for Ethernet or Fibre Channel.
It can also be used as an interconnect to run MPI over, and it is the
latter that we use it for.
We have used Infiniband to connect the 16 dual-CPU, 8GB nodes of
maxwell.
The interconnect has a latency of 5µs and a bandwidth of 800 Mbytes/s.
For programs to use the Infiniband interconnect rather than the
Ethernet, they need to be compiled and linked against the relevant
libraries and headers.
The easiest way to do this is to use the wrapper scripts ibpathf90,
ibpathcc and ibpathCC for Fortran, C and C++ respectively.
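For example, compiling a single source file with each of the wrappers
looks like this (hello.f90 is the file used in the example further
down; hello.c and hello.cpp are just placeholder names):

ibpathf90 hello.f90    # Fortran
ibpathcc hello.c       # C
ibpathCC hello.cpp     # C++

In the worked example below, the a.out produced this way is the
executable that mpirun starts.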
At run time you also need to set the environment variable HPCF_IB (to
yes, as in the example below) so that the correct mpirun is used.
Finally, you need to make sure that the job is submitted to the nodes
that have the Infiniband attached. This is achieved by using the
queues that have a -l suffix,
i.e. u2-l, u4-l, u8-l, u16-l, u32-l, t2-l, t4-l, t8-l, t16-l, t32-l,
s2-l, s4-l, s8-l, s16-l or s32-l.
A simple example putting all of this together:
-bash-2.05b$ ibpathf90 hello.f90
-bash-2.05b$ cat script
#!/bin/bash
export HPCF_IB=yes
mpirun -np 32 a.out
-bash-2.05b$ qsub -Q u32-l script
-bash-2.05b$
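To run on fewer CPUs you would use the matching queue. Assuming the
number in the queue name corresponds to the number of MPI processes
requested, an 8-process run of the same program might look like this
(script8 is just a placeholder name):

-bash-2.05b$ cat script8
#!/bin/bash
export HPCF_IB=yes
# process count assumed to match the u8-l queue
mpirun -np 8 a.out
-bash-2.05b$ qsub -Q u8-l script8
-bash-2.05b$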