EMPI4Re

The EMPI4Re library is a central component of the EPiGRAM project. Its relatively small size (∼26,000 lines of code, compared with ∼770,000 in OpenMPI v1.8.1) and relatively low complexity compared to production MPI libraries make it a good vehicle for testing ideas aimed at moving towards an Exascale MPI+PGAS library. The findings from researching these ideas using the EMPI4Re library will be disseminated to the MPI Forum and to the development teams of MPICH and OpenMPI. This work in EPiGRAM will further strengthen the case for standardising such new functionality in the MPI Standard.
Using the EMPI4Re library has allowed us to research the following topics:
• MPI endpoints – a current proposal to the MPI Forum that aims to allow MPI to interoperate better with other programming models, especially those that use multiple threads, e.g. OpenMP. This involves implementing the new thread support levels proposed and published by WP4 of this project [16, 18, 19]; a usage sketch follows this list.

• Persistent collectives – a current proposal in the MPI Forum that aims to build on the non-blocking collectives, which were standardised in MPI 3.0, to enable additional optimisations. This involved investigating how to implement persistent collectives and the new zero-copy collective algorithms proposed and published by WP2 in this project [26, 27]; a second sketch after this list illustrates the proposed call sequence.
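To make the endpoints idea concrete, the following minimal sketch (assuming the interface described in the MPI Forum endpoints proposal) shows a process creating one endpoint per OpenMP thread, after which each thread communicates with its own rank through its own endpoint communicator. MPI_Comm_create_endpoints is the name used in the proposal and is not part of the MPI standard; the interface explored in EMPI4Re may differ.

#include <mpi.h>
#include <omp.h>

int main(int argc, char **argv)
{
    int provided;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);

    int num_threads = omp_get_max_threads();
    MPI_Comm ep_comm[num_threads];          /* one endpoint handle per thread */

    /* Proposed (non-standard) call: split this process into num_threads
     * endpoints, each with its own rank in the resulting communicator. */
    MPI_Comm_create_endpoints(MPI_COMM_WORLD, num_threads,
                              MPI_INFO_NULL, ep_comm);

    #pragma omp parallel num_threads(num_threads)
    {
        MPI_Comm my_comm = ep_comm[omp_get_thread_num()];
        int ep_rank;
        MPI_Comm_rank(my_comm, &ep_rank);   /* rank is per endpoint, not per process */
        /* ... each thread can now send and receive independently on my_comm ... */
        MPI_Comm_free(&my_comm);            /* each thread frees its own endpoint */
    }

    MPI_Finalize();
    return 0;
}

Because each thread holds its own rank, threads can issue sends and receives concurrently without funnelling traffic through a single per-process rank, which is the interoperability gain the proposal targets.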

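A second sketch (assuming the interface in the MPI Forum persistent-collectives proposal) shows the intended call sequence: the collective is set up once and then started repeatedly, so algorithm selection and schedule construction are paid for only once. MPI_Allreduce_init and its signature are taken from the proposal and are assumptions here, not standard MPI at the time of writing.

#include <mpi.h>

void repeated_allreduce(MPI_Comm comm, double *local, double *global,
                        int count, int iterations)
{
    MPI_Request req;

    /* Proposed (non-standard) call: plan the collective once. The arguments
     * are fixed for the lifetime of the request, so algorithm selection and
     * schedule construction can be done here rather than on every call. */
    MPI_Allreduce_init(local, global, count, MPI_DOUBLE, MPI_SUM,
                       comm, MPI_INFO_NULL, &req);

    for (int i = 0; i < iterations; i++) {
        /* ... update local[] ... */
        MPI_Start(&req);                     /* launch one instance of the collective */
        /* ... overlap independent computation here ... */
        MPI_Wait(&req, MPI_STATUS_IGNORE);
    }

    MPI_Request_free(&req);                  /* release the persistent schedule */
}

Separating the planning step (the _init call) from execution (MPI_Start/MPI_Wait) is what creates room for the additional optimisations mentioned above.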
In addition, several scalability improvements are being investigated via the EMPI4Re library:

• Reducing the memory footprint of the EMPI4Re library by condensing the communicator data structures
• Reducing the memory footprint of the EMPI4Re library by condensing the collective signalling data structures
• Reducing the time complexity of the EMPI4Re library by partitioning the point-to-point data structures (illustrated in the sketch below)
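As an illustration of the last point, the sketch below shows one common way a point-to-point matching structure can be partitioned: posted receives are kept in per-source buckets, so matching an arriving message searches only the entries for that source rank instead of one global list. The names and layout are illustrative assumptions, not EMPI4Re's actual data structures, and wildcard receives (MPI_ANY_SOURCE) would need separate handling.

#include <stddef.h>

#define NUM_BUCKETS 64                 /* illustrative; would be tuned or sized per communicator */

typedef struct recv_entry {
    int source;                        /* MPI_ANY_SOURCE handled separately in a real design */
    int tag;
    void *buffer;
    struct recv_entry *next;
} recv_entry;

typedef struct {
    recv_entry *bucket[NUM_BUCKETS];   /* per-source-hash lists of posted receives */
} match_queue;

static int bucket_of(int source) { return source % NUM_BUCKETS; }

/* Post a receive: prepend it to the bucket for its source. */
void post_receive(match_queue *q, recv_entry *e)
{
    int b = bucket_of(e->source);
    e->next = q->bucket[b];
    q->bucket[b] = e;
}

/* Match an arriving message: search only one bucket, not the whole queue. */
recv_entry *match_arrival(match_queue *q, int source, int tag)
{
    recv_entry **p = &q->bucket[bucket_of(source)];
    for (; *p != NULL; p = &(*p)->next) {
        if ((*p)->source == source && (*p)->tag == tag) {
            recv_entry *found = *p;
            *p = found->next;          /* unlink the matched entry */
            return found;
        }
    }
    return NULL;                       /* no match: message would go to the unexpected queue */
}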

The modifications to the EMPI4Re library will be tested and measured both in isolation and through integration into, and use in, the two EPiGRAM pilot applications.
Papers:
  • Daniel J. Holmes, Jesper Larsson Träff, Pekka Manninen, Alistair Hart, Harvey Richardson, Valeria Bartsch, and Ivy Bo Peng. Design document for RDMA message passing. D4.2 (Not online yet)
  • Daniel J. Holmes. Context ID allocation for MPI endpoints. (In preparation)
  • Daniel J. Holmes, Mark Bull, and James Dinan. New thread support levels for MPI endpoints. In Proceedings of EASC 2015
  • Daniel J. Holmes and Mark Bull. A new thread support level for hybrid programming with MPI endpoints. URL: http://www.easc2015.ed.ac.uk/program-archive/slides/s17aHolmes.pdf
  • Daniel J. Holmes, Anthony Skjellum, and Purushottam V. Bangalore. Persistent collective operations in MPI. In Proceedings of ExaMPI 2015
  • Jesper Larsson Träff and Antoine Rougier. MPI collectives and datatypes for hierarchical all-to-all communication. In Recent Advances in Message Passing Interface (EuroMPI/ASIA), pages 27–32, 2014
  • Jesper Larsson Träff, Antoine Rougier, and Sascha Hunold. Implementing a classic: Zero-copy all-to-all communication with MPI datatypes. In 28th ACM International Conference on Supercomputing (ICS), pages 135–144, 2014