DocumentCode :
2958843
Title :
Hierarchical Local Storage: Exploiting Flexible User-Data Sharing Between MPI Tasks
Author :
Tchiboukdjian, Marc ; Carribault, Patrick ; Pérache, Marc
Author_Institution :
Exascale Comput. Res., Versailles, France
fYear :
2012
fDate :
21-25 May 2012
Firstpage :
366
Lastpage :
377
Abstract :
With the advent of the multicore era, the number of cores per computational node is increasing faster than the amount of memory. This diminishing memory-to-core ratio sometimes even prevents pure MPI applications from benefiting from all the cores available on each node. A possible solution is to add a shared-memory programming model such as OpenMP inside the application, sharing between OpenMP threads variables that would otherwise be duplicated for each MPI task. Going hybrid can thus improve overall memory consumption, but may be a tedious task on large applications. To allow this data sharing without the overhead of mixing multiple programming models, we propose an MPI extension called Hierarchical Local Storage (HLS) that allows application developers to share common variables between MPI tasks on the same node. HLS is designed as a set of directives that preserve the original parallel semantics of the code and are compatible with the C, C++, and Fortran languages and with the OpenMP programming model. This new mechanism is implemented inside a state-of-the-art MPI 1.3-compliant runtime called MPC. Experiments show that the HLS mechanism can effectively reduce the memory consumption of HPC applications. Moreover, by reducing data duplication in the shared cache of modern multicores, HLS can also improve the performance of memory-intensive applications.
Keywords :
application program interfaces; data handling; message passing; parallel programming; shared memory systems; C language; C++ language; Fortran language; HLS; MPI tasks; OpenMP; memory-to-core ratio; data sharing; flexible user-data sharing; hierarchical local storage; multiple programming models; OpenMP programming model; overall memory consumption; parallel semantics; shared memory programming model; Computational modeling; Data models; Instruction sets; Memory management; Multicore processing; Programming; Semantics; High-Performance Computing; Memory Consumption; Parallel Programming Model;
fLanguage :
English
Publisher :
IEEE
Conference_Titel :
2012 IEEE 26th International Parallel & Distributed Processing Symposium (IPDPS)
Conference_Location :
Shanghai
ISSN :
1530-2075
Print_ISBN :
978-1-4673-0975-2
Type :
conf
DOI :
10.1109/IPDPS.2012.42
Filename :
6267874