Several MPI implementations are available on the Atos HPCF: OpenMPI (provided by Atos), Intel MPI and Mellanox HPC-X (based on OpenMPI).
They are not compatible with each other, so you should use only one of them to build your entire software stack. Support libraries are provided for the different flavours to guarantee maximum compatibility.
Building your MPI programs
First, you need to decide which compiler family and MPI flavour you will use, and load them with modules.
For example, to use the GNU compilers with OpenMPI:

```
$ module load prgenv/gnu openmpi
$ module list

Currently Loaded Modules:
  1) gcc/8.3.1   2) prgenv/gnu   3) openmpi/4.0.5.1
```

or the Intel compilers with Intel MPI:

```
$ module load prgenv/intel intel-mpi
$ module list

Currently Loaded Modules:
  1) intel/19.1.2   2) prgenv/intel   3) intel-mpi/19.1.2
```
Then, you may use the usual MPI compiler wrappers to compile your programs:
| Language | Intel MPI with Intel compilers | Intel MPI with GNU compilers | OpenMPI |
|----------|--------------------------------|------------------------------|---------|
| C        | mpiicc                         | mpigcc                       | mpicc   |
| C++      | mpiicpc                        | mpigxx                       | mpicxx  |
| Fortran  | mpiifort                       | mpif90                       | mpifort |
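As a minimal sketch, assuming a C source file called hello_mpi.c (a hypothetical file name used only for illustration), a build with each flavour would look like this:

```
# OpenMPI
$ mpicc -o hello_mpi hello_mpi.c

# Intel MPI with the Intel compilers
$ mpiicc -o hello_mpi hello_mpi.c

# Intel MPI with the GNU compilers
$ mpigcc -o hello_mpi hello_mpi.c
```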
When using Intel MPI, it is important to use the correct compiler wrapper depending on whether you want to use the Intel or GNU compilers.
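If in doubt about which compiler a given Intel MPI wrapper will invoke, the wrappers can print the underlying compiler command line with the -show option (behaviour assumed from the standard Intel MPI wrappers; the exact output depends on the modules loaded):

```
$ mpiicc -show   # expected to report an icc-based command line
$ mpigcc -show   # expected to report a gcc-based command line
```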
Running your MPI programs
You should run your MPI programs in a Slurm batch script, using srun to start the MPI execution. srun inherits its configuration from the job setup, so no extra options such as the number of tasks or threads need to be passed. You only need to pass them explicitly if you wish to run an MPI execution with a different (smaller) configuration within the same job.
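As a sketch, a minimal batch script for a pure MPI run might look like the following; the job name, task count and the hello_mpi executable are illustrative placeholders, not site defaults:

```
#!/bin/bash
#SBATCH --job-name=mpi-test   # illustrative job name
#SBATCH --ntasks=128          # illustrative task count; adjust to your case

# srun inherits the task/thread configuration from the job setup,
# so no extra options are needed for a full-size run
srun ./hello_mpi

# Optionally, run a smaller MPI execution within the same job
# by passing an explicit (smaller) task count to srun
srun -n 64 ./hello_mpi
```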
Depending on the implementation, you may also use the corresponding mpiexec command, but its use is discouraged.