DocumentCode :
1684680
Title :
Avoiding communication in sparse matrix computations
Author :
Demmel, James ; Hoemmen, Mark ; Mohiyuddin, Marghoob ; Yelick, Katherine
Author_Institution :
Dept. of Electr. Eng. & Comput. Sci., Univ. of California at Berkeley, Berkeley, CA
fYear :
2008
Firstpage :
1
Lastpage :
12
Abstract :
The performance of sparse iterative solvers is typically limited by sparse matrix-vector multiplication, which is itself limited by memory system and network performance. As the gap between computation and communication speed continues to widen, these traditional sparse methods will suffer. In this paper we focus on an alternative building block for sparse iterative solvers, the "matrix powers kernel" [x, Ax, A^2 x, ..., A^k x], and show that by organizing computations around this kernel, we can achieve near-minimal communication costs. We consider communication very broadly, as both network communication in parallel code and memory hierarchy access in sequential code. In particular, we introduce a parallel algorithm for which the number of messages (total latency cost) is independent of the power k, and a sequential algorithm that reduces both the number and volume of memory accesses, so that it is independent of k in both latency and bandwidth costs. This is part of a larger project to develop "communication-avoiding Krylov subspace methods," which also addresses the numerical issues associated with these methods. Our algorithms work for general sparse matrices that "partition well." We introduce parallel performance models of matrices arising from 2D and 3D problems and show predicted speedups over a conventional algorithm of up to 7x on a petaflop-scale machine and up to 22x for computation across the grid. Analogous sequential performance models of the same problems predict speedups over a conventional algorithm of up to 10x on an out-of-core implementation, and up to 2.5x when we use our ideas to reduce off-chip latency and bandwidth to DRAM. Finally, we validate the model on an out-of-core sequential implementation, measuring a speedup of over 3x, which is close to the predicted speedup.
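For context, the conventional way to compute the matrix powers kernel is k successive sparse matrix-vector multiplies, each incurring its own round of communication. The sketch below (Python/SciPy; the function name matrix_powers_kernel and the 2D test matrix are illustrative choices, not from the paper) shows this naive baseline, which the paper's communication-avoiding algorithms reorganize.

import numpy as np
import scipy.sparse as sp

def matrix_powers_kernel(A, x, k):
    # Naive baseline: compute [x, Ax, A^2 x, ..., A^k x] with k separate
    # SpMV operations. In parallel, each step needs its own round of
    # messages; the paper's algorithms restructure this computation so
    # that message count (and, sequentially, memory traffic) is
    # independent of k.
    vectors = [x]
    for _ in range(k):
        vectors.append(A @ vectors[-1])
    return vectors

# Illustrative test matrix: a 2D 5-point stencil, the kind of matrix
# arising from a 2D problem that "partitions well".
n = 100
A = sp.diags([-1.0, -1.0, 4.0, -1.0, -1.0], [-n, -1, 0, 1, n],
             shape=(n * n, n * n), format="csr")
x = np.ones(n * n)
V = matrix_powers_kernel(A, x, k=8)  # 9 vectors: x through A^8 x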
Keywords :
grid computing; iterative methods; mathematics computing; matrix multiplication; parallel processing; sparse matrices; vectors; DRAM; communication-avoiding Krylov subspace methods; grid; memory hierarchy access; memory system; network performance; parallel algorithm; parallel code; petaflop-scale machine; sequential algorithm; sequential code; sparse iterative solvers; sparse matrix-vector multiplication; Bandwidth; Concurrent computing; Costs; Delay; Kernel; Organizing; Parallel algorithms; Partitioning algorithms; Predictive models; Sparse matrices;
fLanguage :
English
Publisher :
ieee
Conference_Titel :
2008 IEEE International Symposium on Parallel and Distributed Processing (IPDPS 2008)
Conference_Location :
Miami, FL
ISSN :
1530-2075
Print_ISBN :
978-1-4244-1693-6
Electronic_ISBN :
1530-2075
Type :
conf
DOI :
10.1109/IPDPS.2008.4536305
Filename :
4536305