DocumentCode
2355154
Title
Parallel matrix transpose algorithms on distributed memory concurrent computers
Author
Choi, Jaeyoung ; Dongarra, Jack J. ; Walker, David W.
Author_Institution
Math. Sci. Sect., Oak Ridge Nat. Lab., TN, USA
fYear
1993
fDate
6-8 Oct 1993
Firstpage
245
Lastpage
252
Abstract
This paper describes parallel matrix transpose algorithms on distributed memory concurrent processors. We assume that the matrix is distributed over a P×Q processor template with a block scattered data distribution. P, Q, and the block size can be arbitrary, so the algorithms have wide applicability. The algorithms make use of nonblocking, point-to-point communication between processors. The use of nonblocking communication allows a processor to overlap the messages that it sends to different processors, thereby avoiding unnecessary synchronization. Combined with the matrix multiplication routine, C=A·B, the algorithms are used to compute parallel multiplications of transposed matrices, C=A^T·B^T, in the PUMMA package. Details of the parallel implementation of the algorithms are given, and results are presented for runs on the Intel Touchstone Delta computer.
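The block scattered (block-cyclic) distribution named in the abstract can be illustrated with a minimal sketch. This is not the PUMMA code; the function names `owner` and `local_index` and the parameter names are assumptions for illustration only. It maps a global matrix entry (i, j), with an r×c block size, onto a P×Q processor template.

```python
def owner(i, j, r, c, P, Q):
    """Processor (p, q) owning global entry (i, j) under a block
    scattered distribution: blocks are dealt out cyclically over
    the P x Q template."""
    return ((i // r) % P, (j // c) % Q)

def local_index(i, j, r, c, P, Q):
    """Local (row, col) of global entry (i, j) in its owner's
    local storage: full block offset plus position within block."""
    li = (i // (r * P)) * r + i % r
    lj = (j // (c * Q)) * c + j % c
    return (li, lj)

# Example: 2x2 blocks on a 2x3 template.
# Global row 4 lies in the third row-block, which cycles back to
# processor row 0 and is its second local block (local rows 2-3).
print(owner(4, 0, 2, 2, 2, 3))        # (0, 0)
print(local_index(4, 0, 2, 2, 2, 3))  # (2, 0)
```

Because P, Q, r, and c are all free parameters here, the same mapping covers the arbitrary template and block sizes the abstract claims the algorithms support.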
Keywords
distributed memory systems; mathematics computing; matrix algebra; parallel algorithms; synchronisation; Intel Touchstone Delta computer; PUMMA package; block scattered data distribution; distributed memory concurrent computers; matrix multiplication routine; parallel matrix transpose algorithms; point-to-point communication; transposed matrices; Application software; Computer architecture; Concurrent computing; Distributed computing; Laboratories; Lifting equipment; Linear algebra; Matrix decomposition; Packaging; Scattering;
fLanguage
English
Publisher
ieee
Conference_Titel
Proceedings of the Scalable Parallel Libraries Conference, 1993
Conference_Location
Mississippi State, MS
Print_ISBN
0-8186-4980-1
Type
conf
DOI
10.1109/SPLC.1993.365559
Filename
365559
Link To Document