Title :
OMPI: Optimizing MPI Programs using Partial Evaluation
Author :
Ogawa, Hirotaka ; Matsuoka, Satoshi
Author_Institution :
The University of Tokyo, Japan
Abstract :
MPI is gaining acceptance as a standard for message passing in high-performance computing, owing to its powerful and flexible support of various communication styles. However, the complexity of its API imposes significant software overhead, and as a result, the applicability of MPI has been restricted to rather regular, coarse-grained computations. Our OMPI (Optimizing MPI) system removes much of this excess overhead by employing partial evaluation techniques, which exploit the static information available at MPI call sites. Because partial evaluation alone is insufficient, we also utilize template functions for further optimization. To validate the effectiveness of our OMPI system, we performed baseline as well as more extensive benchmarks on a set of application cores with different communication characteristics, on the 64-node Fujitsu AP1000 MPP. The benchmarks show that OMPI improves execution efficiency by as much as a factor of two for communication-intensive application cores, with minimal code increase. It also performs significantly better than a previous dynamic optimization technique.
Keywords :
MPI; SUIF; communication optimization; message passing; parallel computing; partial evaluation; Application software; Communication standards; Message passing; Parallel languages; Parallel processing; Power engineering and energy; Power engineering computing; Runtime library; Software libraries; Writing;
Conference_Title :
Proceedings of the 1996 ACM/IEEE Conference on Supercomputing
Print_ISBN :
0-89791-854-1
DOI :
10.1109/SUPERC.1996.183539