The Project


EPiGRAM – Exascale ProGRAmming Models

Exascale computing power will likely be reached in the next decade. While the precise system architectures are still evolving, one can safely assume that they will be largely based on deep hierarchies of multicore CPUs with similarly-deep memory hierarchies, potentially also supported by accelerators. New and disruptive programming models are needed to allow applications to run efficiently at large scale on these platforms. The Message Passing Interface (MPI) has emerged as the de-facto standard for parallel programming on current petascale machines; but Partitioned Global Address Space (PGAS) languages and libraries are increasingly being considered as alternatives or complements to MPI. However, both approaches have severe problems that will prevent them reaching exascale performance.

Objectives and Actions:

We have identified five main objectives and relative actions for EPiGRAM:

Exascale MPI

MPI is currently the de facto standard for HPC systems and applications. However, MPI 3.0 is already a very large standard that keeps accumulating new concepts. This is unfortunate from the programmability point of view, and it is very likely an indication that more fundamental changes in the model will be needed when going from peta- to exascale.

Action: We will investigate innovative and disruptive concepts in the message-passing (MP) programming model and implement them to tackle the challenge of MP scalability on exascale computer systems. It is especially important to investigate how MPI (or, more precisely, an MPI-like message-passing model and interface) can coexist with other models, such as PGAS. This will be highly useful for application programmers working at extreme scale and will be implementable with high efficiency on exascale systems.
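As a point of reference for the kind of scalability-oriented concepts already entering the MPI standard, the sketch below (purely illustrative, not an EPiGRAM deliverable) uses an MPI-3 non-blocking collective so that a rank can overlap a global reduction with independent local computation:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        double local_sum = 1.0, global_sum = 0.0;
        MPI_Request req;

        /* MPI-3 non-blocking collective: start the reduction ... */
        MPI_Iallreduce(&local_sum, &global_sum, 1, MPI_DOUBLE,
                       MPI_SUM, MPI_COMM_WORLD, &req);

        /* ... and overlap it with independent local work. */
        double local_work = local_sum * 2.0;

        MPI_Wait(&req, MPI_STATUS_IGNORE);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        if (rank == 0)
            printf("global sum = %f (local work = %f)\n", global_sum, local_work);

        MPI_Finalize();
        return 0;
    }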

Exascale PGAS

Over the last decade, PGAS languages have emerged as an alternative programming model to MPI and as promising candidates for better programmability and efficiency. In the EPiGRAM project, we will focus on the GPI library developed by the project partner Fraunhofer.

Action: We will first investigate disruptive concepts in PGAS programming models, such as scalable collective operations based on the one-sided communication model and improved synchronization mechanisms, and then implement them in GPI. We will also investigate fault-tolerance strategies in the PGAS programming model and implement them in GPI, trying out different methods for the interaction between the communication library and the application, such as notifying the application about suspect resources or about timeouts when communicating with faulty nodes. To allow libraries to be executed in a separate communication domain, we will investigate implementations of memory segmentation and dynamic allocation of resources such as communication queues, and implement them in GPI.
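The notified one-sided writes of the GASPI interface implemented by GPI-2 give a flavour of the synchronization mechanisms this work builds on. The sketch below is illustrative only, assumes at least two ranks, and its calls should be checked against the GPI-2 headers:

    #include <GASPI.h>
    #include <stdio.h>

    /* Sketch: rank 0 writes one value into rank 1's segment and attaches a
     * notification; rank 1 waits only for that notification instead of a
     * global synchronization step. Error checking omitted for brevity. */
    int main(int argc, char **argv)
    {
        gaspi_proc_init(GASPI_BLOCK);

        gaspi_rank_t rank;
        gaspi_proc_rank(&rank);

        const gaspi_segment_id_t seg = 0;
        gaspi_segment_create(seg, sizeof(double), GASPI_GROUP_ALL,
                             GASPI_BLOCK, GASPI_MEM_INITIALIZED);

        gaspi_pointer_t ptr;
        gaspi_segment_ptr(seg, &ptr);

        if (rank == 0) {
            *(double *)ptr = 42.0;
            /* One-sided write to rank 1 plus a notification in one call. */
            gaspi_write_notify(seg, 0, 1, seg, 0, sizeof(double),
                               /* notification id */ 0, /* value */ 1,
                               /* queue */ 0, GASPI_BLOCK);
            gaspi_wait(0, GASPI_BLOCK);          /* flush the queue */
        } else if (rank == 1) {
            gaspi_notification_id_t id;
            gaspi_notification_t val;
            gaspi_notify_waitsome(seg, 0, 1, &id, GASPI_BLOCK);
            gaspi_notify_reset(seg, id, &val);
            printf("rank 1 received %f\n", *(double *)ptr);
        }

        gaspi_barrier(GASPI_GROUP_ALL, GASPI_BLOCK);
        gaspi_proc_term(GASPI_BLOCK);
        return 0;
    }

The receiver waits only for the single notification attached to the data it needs, rather than for a global synchronization point.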

Programming models for diverse memory spaces

Modern processors and HPC node architectures exhibit diverse and hierarchical memory spaces, including the use of caches, Non-Uniform Memory Architectures (NUMA) and separate accelerator or coprocessor memory spaces. The expected trend as we move towards exascale, dictated by power costs, is for this memory hierarchy to deepen and for user management of it to remain important.

Action: We will first investigate the state of the art in memory-hierarchy-aware communication models and implementations. This information will be communicated to relevant standards bodies, with suggestions and proposals where appropriate. We will explore the most efficient ways to use available communication models and libraries (especially those developed in EPiGRAM) in HPC systems with hierarchical memory models. This will be done using appropriate benchmark codes, representative EPiGRAM application kernels and the full EPiGRAM applications.
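One concrete example of memory-hierarchy awareness already available in MPI-3, and the kind of mechanism such an investigation would cover, is the shared-memory window: ranks on the same node exchange data through loads and stores rather than messages. The sketch below is illustrative only; the communicator and variable names are hypothetical.

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        /* Split MPI_COMM_WORLD into per-node communicators. */
        MPI_Comm node_comm;
        MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                            MPI_INFO_NULL, &node_comm);

        int node_rank, node_size;
        MPI_Comm_rank(node_comm, &node_rank);
        MPI_Comm_size(node_comm, &node_size);

        /* Each rank contributes one double to a window shared by all
         * ranks on the node. */
        double *base;
        MPI_Win win;
        MPI_Win_allocate_shared(sizeof(double), sizeof(double), MPI_INFO_NULL,
                                node_comm, &base, &win);

        base[0] = (double)node_rank;
        MPI_Win_fence(0, win);               /* make the stores visible */

        /* Node rank 0 reads a neighbour's data directly, without messages. */
        if (node_rank == 0 && node_size > 1) {
            MPI_Aint size;
            int disp_unit;
            double *neighbour;
            MPI_Win_shared_query(win, 1, &size, &disp_unit, &neighbour);
            /* neighbour[0] now holds the value stored by node rank 1. */
        }

        MPI_Win_free(&win);
        MPI_Comm_free(&node_comm);
        MPI_Finalize();
        return 0;
    }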

Exascale PGAS-based MPI

PGAS and MP programming models are usually treated as distinct alternatives with different strengths and weaknesses. In the MP model, communication and synchronization are combined in a single operation (passing messages). In contrast, PGAS programming models use separate operations for communication and synchronization. As a result, PGAS models are less likely to suffer from unnecessary data copies or synchronization, but they typically provide less fine-grained control over synchronization and network use.

Action: We will bridge the gap between the two approaches by investigating hybrid programming models in which the appropriate constructs are used depending on the requirements of the application. This project will determine the prerequisites needed to support these hybrid programming models at extreme scale and demonstrate the viability of this approach by producing open-source software libraries that efficiently support these hybrid models.
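To make the distinction concrete, the sketch below places the two styles side by side within plain MPI (using MPI's own one-sided interface in place of a PGAS library). It is illustrative only and assumes exactly two ranks: the send/receive pair combines communication and synchronization, while the put is completed by a separate synchronization call.

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        double buf = 0.0;

        /* Message-passing style: communication and synchronization combined.
         * The receive completes only when the matching send has arrived. */
        if (rank == 0) {
            buf = 1.0;
            MPI_Send(&buf, 1, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(&buf, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        }

        /* PGAS-like style: one-sided communication (MPI_Put) decoupled from
         * synchronization (MPI_Win_fence). */
        double shared = 0.0;
        MPI_Win win;
        MPI_Win_create(&shared, sizeof(double), sizeof(double),
                       MPI_INFO_NULL, MPI_COMM_WORLD, &win);

        MPI_Win_fence(0, win);
        if (rank == 0)
            MPI_Put(&buf, 1, MPI_DOUBLE, 1, 0, 1, MPI_DOUBLE, win);
        MPI_Win_fence(0, win);   /* synchronization is a separate step */

        MPI_Win_free(&win);
        MPI_Finalize();
        return 0;
    }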

Exascale-ready applications

We will prepare two applications (Nek5000 and iPIC3D) for exascale computer systems by re-designing and implementing their communication kernels. This will allow them to produce results that cannot be obtained on current petascale systems.

Action: We will use the exascale MP implementation and PGAS libraries in two real-world applications, Nek5000 and iPIC3D, to prepare them for exascale. We will develop new exascale communication kernels in Nek5000 and iPIC3D with the goal of achieving high scalability, enabling new science to be carried out.

Photo: EPiGRAM team meeting in Kaiserslautern, Germany, March 16th 2016


EPiGRAM Flyer

EPiGRAM Fact Sheet