GASPI-GPI


PGAS (Partitioned Global Address Space) programming models have been discussed as an alternative to MPI for some time. The PGAS approach offers the developer an abstract shared address space, which simplifies the programming task while facilitating data locality, thread-based programming and asynchronous communication. GASPI (Global Address Space Programming Interface) is a PGAS API. It aims at extreme scalability, high flexibility and failure tolerance for parallel computing environments. GASPI seeks to initiate a paradigm shift away from bulk-synchronous, two-sided communication patterns and towards an asynchronous communication and execution model. To that end, GASPI leverages one-sided, RDMA-driven communication with remote completion in a partitioned global address space.
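
The following minimal sketch illustrates this model using the GASPI C API as implemented by GPI-2: rank 0 writes data into a segment on rank 1 and raises a notification in the same RDMA operation; rank 1 waits only for the notification (remote completion) rather than posting a matching receive. Error checking is omitted for brevity, and at least two ranks are assumed.

    #include <GASPI.h>
    #include <stdlib.h>

    int main(int argc, char *argv[])
    {
      gaspi_rank_t rank;

      gaspi_proc_init(GASPI_BLOCK);
      gaspi_proc_rank(&rank);

      /* One segment (id 0) of 1 MiB on every rank, registered for RDMA. */
      gaspi_segment_create(0, 1 << 20, GASPI_GROUP_ALL,
                           GASPI_BLOCK, GASPI_MEM_INITIALIZED);

      if (rank == 0)
      {
        /* Write 1 KiB from local offset 0 to offset 0 on rank 1 and
         * raise notification 0 with value 1 in one one-sided operation. */
        gaspi_write_notify(0, 0, 1, 0, 0, 1024,
                           0, 1, 0 /* queue */, GASPI_BLOCK);
        gaspi_wait(0, GASPI_BLOCK);   /* local completion only */
      }
      else if (rank == 1)
      {
        gaspi_notification_id_t id;
        gaspi_notification_t    val;

        /* Remote completion: the notification guarantees the data arrived. */
        gaspi_notify_waitsome(0, 0, 1, &id, GASPI_BLOCK);
        gaspi_notify_reset(0, id, &val);
      }

      gaspi_proc_term(GASPI_BLOCK);
      return EXIT_SUCCESS;
    }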

During the first year of the EPiGRAM project, GASPI, previously a German national project, became an open standardization forum driven by European partners. Several EPiGRAM partners were founding members of the GASPI Forum. GPI-2 is an open-source implementation of the GASPI standard, freely available to application developers and researchers. For commercial users, Fraunhofer ITWM offers a commercial license and support.

GPI has traditionally been used in visualization and seismic imaging applications. In the past years it has been ported to several scientific applications within projects such as EPiGRAM, EXA2CT and GASPI. Strong scaling with over 90% parallel efficiency has been demonstrated on 84,000 cores of the Haswell extension of the SuperMUC cluster.

Threads are the suggested way to handle parallelism within a node. The GASPI API is thread-safe and allows each thread to post requests and wait for notifications. Any threading approach (POSIX threads, MCTP, OpenMP) is supported, since threading is orthogonal to the GASPI communication model.
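
As a small illustration of this thread-level concurrency, the sketch below lets each OpenMP thread drive its own queue. It assumes the segment setup from the earlier sketch, a valid target rank, and that the number of threads does not exceed the number of available queues.

    #include <GASPI.h>
    #include <omp.h>

    void write_blocks(gaspi_rank_t target, gaspi_size_t block)
    {
      #pragma omp parallel
      {
        /* Each thread uses its thread id as queue id, so queues are
         * not shared between threads. */
        gaspi_queue_id_t q   = (gaspi_queue_id_t) omp_get_thread_num();
        gaspi_offset_t   off = (gaspi_offset_t) omp_get_thread_num() * block;

        /* Each thread writes its own block and notifies independently. */
        gaspi_write_notify(0, off, target, 0, off, block,
                           (gaspi_notification_id_t) omp_get_thread_num(),
                           1, q, GASPI_BLOCK);
        gaspi_wait(q, GASPI_BLOCK);
      }
    }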

GPI-2 supports interoperability with MPI in order to allow for incremental porting of applications. It does so in a so-called mixed mode, in which the MPI and GASPI interfaces can be mixed within one application. This mixed mode has been tested within the EPiGRAM collaboration with the iPIC3D and Nek5000 codes. Some restrictions remain, however, and a closer interplay between the memory and communication management of GPI and MPI is envisaged.
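
A minimal sketch of mixed-mode startup follows, assuming a GPI-2 installation built with MPI interoperability: MPI is initialized first, and GPI-2 then attaches to the already running MPI job, so that MPI and GASPI ranks coincide.

    #include <mpi.h>
    #include <GASPI.h>

    int main(int argc, char *argv[])
    {
      MPI_Init(&argc, &argv);
      gaspi_proc_init(GASPI_BLOCK);

      /* ... MPI and GASPI calls may now be mixed, e.g. MPI collectives
       * alongside GASPI one-sided writes with notifications ... */

      gaspi_proc_term(GASPI_BLOCK);
      MPI_Finalize();
      return 0;
    }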

An interface for interoperability at the level of memory management has recently been added to the GASPI standard; the idea originated from a gap analysis carried out by the EPiGRAM collaboration. GASPI handles memory in so-called segments, which are accessible from every thread of every GASPI process. The standard has been extended to allow the user to provide an already existing memory buffer as the memory space of a GASPI segment. This new function allows applications to communicate data from memory that is not allocated by the GASPI runtime system but provided to it (e.g. by MPI). The feature still has to be tested in real applications.
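
A sketch of this extension follows, using the function name under which it appears in GPI-2 (gaspi_segment_use). An application-owned buffer, here obtained with malloc but it could equally come from MPI, is registered as a GASPI segment without any copy; the memory-description argument is left at 0.

    #include <GASPI.h>
    #include <stdlib.h>

    void use_existing_buffer(void)
    {
      gaspi_size_t  size = 1 << 20;
      void         *buf  = malloc(size);  /* memory not allocated by GASPI */

      /* Register buf as segment 1 and make it remotely accessible;
       * collective across GASPI_GROUP_ALL. */
      gaspi_segment_use(1, buf, size, GASPI_GROUP_ALL, GASPI_BLOCK, 0);

      /* ... communicate from/into buf via segment id 1 ... */

      gaspi_segment_delete(1);
      free(buf);
    }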

A second proposal to the GASPI Forum originating from the work of the EPiGRAM collaboration has been accepted and entered the GASPI standard. Its main motivation is to improve the ability to create GASPI-based libraries. Without this extension, a GASPI-based library can use one of the available queues but potentially has to share it with the application or other libraries. Moreover, a clear separation of concerns is desirable from a library point of view: a library is only interested in waiting for data or notification requests that are relevant to its own internal operation.
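
The sketch below shows this extension from a library's point of view, using the calls as implemented in GPI-2 (gaspi_queue_create and gaspi_queue_delete): the library requests a private queue instead of sharing queue 0 with the application, so waiting on that queue touches only the library's own requests.

    #include <GASPI.h>

    void library_communicate(gaspi_rank_t target)
    {
      gaspi_queue_id_t q;

      gaspi_queue_create(&q, GASPI_BLOCK);  /* private queue for the library */

      /* One-sided write plus notification, posted on the private queue. */
      gaspi_write_notify(0, 0, target, 0, 0, 4096, 0, 1, q, GASPI_BLOCK);
      gaspi_wait(q, GASPI_BLOCK);           /* waits only on library traffic */

      gaspi_queue_delete(q);
    }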

The EPiGRAM collaboration and the GASPI Forum thus work fruitfully together. The use of GPI-2 in the Nek5000 and iPIC3D communication kernels drives the development of GASPI and GPI towards a true exascale communication model.

The Best Practice Guide for Writing GASPI can be found at this link, courtesy of the INTERTWinE project.

More information can be found at the following links: