DocumentCode
750753
Title
Evaluating InfiniBand performance with PCI Express
Author
Jiuxing Liu ; Mamidala, A. ; Vishnu, A. ; Panda, D.K.
Author_Institution
IBM Thomas J. Watson Research Center, NY, USA
Volume
25
Issue
1
fYear
2005
Firstpage
20
Lastpage
29
Abstract
The InfiniBand architecture is an industry standard that offers low latency and high bandwidth as well as advanced features such as remote direct memory access (RDMA), atomic operations, multicast, and quality of service. InfiniBand products can achieve a latency of several microseconds for small messages and a bandwidth of 700 to 900 Mbytes/s. As a result, InfiniBand is becoming increasingly popular as a high-speed interconnect for building high-performance clusters. The Peripheral Component Interconnect (PCI) has been the standard local-I/O-bus technology for the past 10 years, but a growing number of applications require lower latency and higher bandwidth than a PCI bus can provide; PCI-X, an extension of PCI, offers higher peak performance and efficiency. InfiniBand host channel adapters (HCAs) with PCI Express interfaces achieve 20 to 30 percent lower latency for small messages than HCAs using 64-bit, 133-MHz PCI-X interfaces. PCI Express also improves performance at the MPI level, achieving a latency of 4.1 μs for small messages, and it improves MPI collective communication and the performance of bandwidth-bound MPI applications.
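For context, small-message MPI latency figures such as the 4.1 μs quoted above are typically obtained with a ping-pong microbenchmark. The following is a minimal, illustrative sketch of such a test, not the benchmark used in the article; the message size and iteration count (MSG_SIZE, ITERATIONS) are arbitrary choices. One-way latency is estimated as half of the averaged round-trip time between two MPI ranks.

/* Minimal MPI ping-pong latency sketch (illustrative only; not the
 * benchmark used in the article). Rank 0 sends a small message to
 * rank 1 and waits for the echo; half the averaged round-trip time
 * approximates the one-way small-message latency. */
#include <mpi.h>
#include <stdio.h>

#define MSG_SIZE   4        /* small message: 4 bytes (assumed size) */
#define ITERATIONS 10000    /* iterations to average over (assumed)  */

int main(int argc, char **argv)
{
    int rank;
    char buf[MSG_SIZE] = {0};

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Barrier(MPI_COMM_WORLD);          /* synchronize before timing */
    double start = MPI_Wtime();

    for (int i = 0; i < ITERATIONS; i++) {
        if (rank == 0) {
            MPI_Send(buf, MSG_SIZE, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, MSG_SIZE, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, MSG_SIZE, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, MSG_SIZE, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }

    double elapsed = MPI_Wtime() - start;
    if (rank == 0)
        printf("estimated one-way latency: %.2f us\n",
               elapsed / ITERATIONS / 2.0 * 1e6);

    MPI_Finalize();
    return 0;
}

Run with two processes (for example, mpirun -np 2 ./pingpong) on two nodes connected by the interconnect under test; measured latency depends on the HCA, the host I/O interface (PCI-X versus PCI Express), and the MPI implementation.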
Keywords
application program interfaces; message passing; performance evaluation; peripheral interfaces; InfiniBand architecture; MPI; PCI; high-speed interconnect technology; host channel adapters; peripheral component interconnect; quality of service; remote direct memory access; system buses; Aggregates; Application software; Bandwidth; Communication system control; Delay; LAN interconnection; Testing; World Wide Web
fLanguage
English
Journal_Title
IEEE Micro
Publisher
IEEE
ISSN
0272-1732
Type
jour
DOI
10.1109/MM.2005.9
Filename
1411713