Title :
MPI Collectives on Modern Multicore Clusters: Performance Optimizations and Communication Characteristics
Author :
Mamidala, Amith R. ; Kumar, Rahul ; De, Debraj ; Panda, D.K.
Author_Institution :
Dept. of Comput. Sci. & Eng., Ohio State Univ., Columbus, OH
Abstract :
Advances in multicore technology and modern interconnects are rapidly increasing the number of cores deployed in today's commodity clusters. A majority of parallel applications written in MPI employ collective operations in their communication kernels. Optimizing these operations on multicore platforms is key to obtaining good performance speed-ups. However, designing these operations for modern multicores is a non-trivial task. Modern multicores such as Intel's Clovertown and AMD's Opteron feature various architectural attributes with interesting ramifications. For example, Clovertown deploys an L2 cache shared by a pair of cores, whereas in Opteron the L2 cache is exclusive to each core. Understanding the impact of these architectures on communication performance is crucial to designing efficient collective algorithms. In this paper, we systematically evaluate these architectures and use the resulting insights to develop efficient collective operations such as MPI_Bcast, MPI_Allgather, MPI_Allreduce and MPI_Alltoall. Further, we characterize the behavior of these collective algorithms on multicores, especially when network and intra-node communications occur concurrently. We also evaluate the benefits of the proposed intra-node MPI_Allreduce on Opteron multicores and compare it with Intel Clovertown systems. The optimizations proposed in this paper reduce the latency of MPI_Bcast and MPI_Allgather by factors of 1.9 and 4.0, respectively, on 512 cores. For MPI_Allreduce, our optimizations improve performance by as much as 33% on the multicores. Further, we observe up to a three-fold improvement in performance for a matrix multiplication benchmark on 512 cores.
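Illustrative_Example :
For context only, the sketch below is not taken from the paper; it is a minimal C/MPI program showing how an application typically invokes two of the collectives the paper optimizes, MPI_Bcast and MPI_Allreduce. The buffer size, message contents, and reduction operation are arbitrary assumptions chosen for illustration.

/* Minimal sketch (illustrative, not the paper's code): one MPI_Bcast
 * followed by one MPI_Allreduce. On a multicore cluster, each call
 * involves both intra-node (shared-memory) and inter-node (network)
 * transfers, the interaction the paper characterizes. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    int data[1024] = {0};          /* buffer size is an arbitrary choice */
    double local_sum, global_sum;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Rank 0 owns the data; MPI_Bcast delivers it to every rank. */
    if (rank == 0) data[0] = 42;
    MPI_Bcast(data, 1024, MPI_INT, 0, MPI_COMM_WORLD);

    /* Each rank contributes a partial value; MPI_Allreduce returns the
     * global sum to all ranks. */
    local_sum = (double)rank;
    MPI_Allreduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM,
                  MPI_COMM_WORLD);

    if (rank == 0)
        printf("global sum = %f\n", global_sum);

    MPI_Finalize();
    return 0;
}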
Keywords :
application program interfaces; concurrency control; message passing; multiprocessing systems; parallel processing; performance evaluation; workstation clusters; MPI collective; communication kernel; concurrent network; intra-node communication; message passing interface; modern multicore cluster; parallel application; performance optimization; Algorithm design and analysis; Bandwidth; Delay; High performance computing; Kernel; Multicore processing; Optimization; Parallel algorithms; Process design; Sun; MPI Collectives; Multicore;
Conference_Title :
Cluster Computing and the Grid, 2008. CCGRID '08. 8th IEEE International Symposium on
Conference_Location :
Lyon
Print_ISBN :
978-0-7695-3156-4
Electronic_ISBN :
978-0-7695-3156-4
DOI :
10.1109/CCGRID.2008.87