DocumentCode
1920086
Title
Poster: An MPI Library Implementing Direct Communication for Many-Core Based Accelerators
Author
Si, Min; Ishikawa, Yutaka
fYear
2012
fDate
10-16 Nov. 2012
Firstpage
1529
Lastpage
1529
Abstract
DCFA-MPI is an MPI library implementation for many-core-based clusters in which each compute node consists of an Intel MIC (Many Integrated Core) coprocessor attached to the host via PCI Express, with an InfiniBand HCA for inter-node communication. DCFA-MPI enables direct data transfer between MIC units without host assistance. The MPI_Init and MPI_Finalize functions are offloaded to the host side in order to initialize the InfiniBand HCA and pass its PCI Express address to the MIC. MPI communication primitives executed on the MIC can then transfer data directly to other MICs or to hosts by issuing commands to the HCA themselves. The implementation targets the Mellanox InfiniBand HCA and Intel's Knights Ferry, and is compared with Intel MPI in offload mode. Preliminary results show that DCFA-MPI outperforms Intel MPI in offload mode by 1.12 to 5 times.
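To illustrate the usage model described in the abstract, the following is a minimal sketch (not taken from the poster) of an ordinary MPI program as it would be run natively on the MIC. Under DCFA-MPI, the offloading of MPI_Init/MPI_Finalize to the host and the direct HCA commands issued from the MIC are internal to the library and not visible at this API level; buffer sizes, tags, and messages here are arbitrary.

    /*
     * Illustrative sketch: plain MPI point-to-point code running natively on
     * the MIC. DCFA-MPI's host-side setup of the InfiniBand HCA and the
     * MIC-initiated HCA commands happen inside the library calls below.
     */
    #include <mpi.h>
    #include <stdio.h>
    #include <string.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        char buf[64];

        MPI_Init(&argc, &argv);               /* per the abstract, offloaded to the host */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        if (rank == 0 && size > 1) {
            strcpy(buf, "hello from MIC rank 0");
            MPI_Send(buf, sizeof(buf), MPI_CHAR, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(buf, sizeof(buf), MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);      /* data moves MIC-to-MIC over InfiniBand */
            printf("rank 1 received: %s\n", buf);
        }

        MPI_Finalize();                       /* also offloaded to the host */
        return 0;
    }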
Keywords
accelerator; coprocessor; direct communication; infiniband; many-core; mpi; xeon phi;
fLanguage
English
Publisher
IEEE
Conference_Title
2012 SC Companion: High Performance Computing, Networking, Storage and Analysis (SCC)
Conference_Location
Salt Lake City, UT
Print_ISBN
978-1-4673-6218-4
Type
conf
DOI
10.1109/SC.Companion.2012.305
Filename
6496089