DocumentCode :
2505374
Title :
MPI performance on the SGI Power Challenge
Author :
Loos, Tom ; Bramley, Randall
fYear :
1996
fDate :
1-2 Jul 1996
Firstpage :
203
Lastpage :
206
Abstract :
The widely implemented MPI standard defines primitives for point-to-point and collective inter-processor communication (IPC), and for synchronization based on message passing. The main reason to use a message passing standard is to ease the development, porting, and execution of applications on the variety of parallel computers that can support the paradigm, including shared memory, distributed memory, and shared memory array multiprocessors. The paper concentrates on the SGI Power Challenge, a shared memory multiprocessor, with comparison results provided for the distributed memory Intel Paragon. Memory and communications tests written in C++ using messages of double precision arrays show that both memory and MPI blocking IPC performance on the Power Challenge degrade once total message sizes grow larger than the second level cache. Comparing the MPI and memory performance curves indicates that Power Challenge native MPI point-to-point communication is implemented using memory copying. A model of blocking IPC for the SGI Power Challenge was developed and validated with performance results for use as part of a cost function in a graph partitioning algorithm. A new measure of communications efficiency and overhead, the ratio of IPC time to memory copy time, is used to compare relative IPC performance. Comparison between the Power Challenge and the Paragon shows that the Paragon is more efficient for small messages, but the Power Challenge is better on large messages. Power Challenge observations do not correspond well with Paragon results, indicating that shared memory multiprocessor results should not be used to predict distributed memory multiprocessor performance. This suggests that the relative performance of parallel algorithms should not be judged based on one type of machine.
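Illustrative sketch (not the authors' benchmark code): the abstract describes blocking MPI point-to-point tests over double precision arrays, which suggests a ping-pong style timing loop such as the following. The message size N, repetition count REPS, and the two-rank layout are assumptions for illustration only.

// Times blocking MPI point-to-point transfers of double precision arrays
// between ranks 0 and 1; run with at least two MPI processes.
#include <mpi.h>
#include <vector>
#include <cstdio>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int N = 1 << 20;            // doubles per message (assumed size)
    const int REPS = 100;             // timing repetitions (assumed)
    std::vector<double> buf(N, 1.0);

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < REPS; ++i) {
        if (rank == 0) {
            MPI_Send(buf.data(), N, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf.data(), N, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf.data(), N, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(buf.data(), N, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD);
        }
    }
    double t1 = MPI_Wtime();

    if (rank == 0) {
        // One-way time per message: total time / (2 transfers per repetition * REPS).
        double per_msg = (t1 - t0) / (2.0 * REPS);
        std::printf("message size %zu bytes, one-way time %g s\n",
                    N * sizeof(double), per_msg);
    }
    MPI_Finalize();
    return 0;
}

Dividing the measured one-way IPC time by the time for a memory copy of the same buffer would give the efficiency ratio the abstract proposes; that comparison step is not shown here.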
Keywords :
application program interfaces; message passing; parallel algorithms; performance evaluation; shared memory systems; software performance evaluation; synchronisation; utility programs; MPI performance; MPI performance curves; MPI standard; SGI Power Challenge; collective inter-processor communication; communications efficiency; communications overhead; communications tests; cost function; double precision arrays; graph partitioning algorithm; memory copying; memory performance curves; memory tests; message passing; parallel computers; point-to-point inter-processor communication; primitives; second level cache; shared memory multiprocessor; synchronization; total message sizes; Application software; Communication standards; Concurrent computing; Cost function; Degradation; Distributed computing; Message passing; Partitioning algorithms; Standards development; Testing;
fLanguage :
English
Publisher :
ieee
Conference_Titel :
Proceedings of the Second MPI Developer's Conference, 1996
Conference_Location :
Notre Dame, IN
Print_ISBN :
0-8186-7533-0
Type :
conf
DOI :
10.1109/MPIDC.1996.534116
Filename :
534116