Title :
Faster matrix-vector multiplication on GeForce 8800GTX
Author :
Fujimoto, Noriyuki
Author_Institution :
Grad. Sch. of Inf. Sci. & Technol., Osaka Univ., Osaka
Abstract :
Recently, GPUs have acquired the programmability to perform general-purpose computation fast by running tens of thousands of threads concurrently. This paper presents a new algorithm for dense matrix-vector multiplication on the NVIDIA CUDA architecture. Experimental results on a GeForce 8800GTX show that the proposed algorithm runs up to 15.69 times faster than the sgemv routine in NVIDIA's BLAS library CUBLAS 1.1, and up to 32.88 times faster than Intel Math Kernel Library 9.1 on one core of a 2.0 GHz Intel Xeon E5335 CPU with SSE3 SIMD instructions, for matrices of order 16 to 12800. The performance of Jacobi's iterative method for solving linear equations, including the data transfer between CPU and GPU, shows that the proposed algorithm is practical for some real applications.
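For context, the sgemv operation benchmarked above computes y = A*x for a dense single-precision matrix A of order n, and each Jacobi update x_i^(k+1) = (b_i - sum_{j != i} a_ij * x_j^(k)) / a_ii is dominated by exactly such a matrix-vector product. The sketch below is a deliberately naive CUDA kernel (one thread per row of A) meant only to illustrate the operation being measured; it is not the paper's proposed algorithm, which is optimized well beyond this baseline. The launch parameters and device pointer names are illustrative assumptions.

#include <cuda_runtime.h>

/* Naive y = A*x for a row-major matrix A of order n:
   one thread computes one element of y. */
__global__ void sgemv_naive(int n, const float *A, const float *x, float *y)
{
    int row = blockIdx.x * blockDim.x + threadIdx.x;
    if (row < n) {
        float sum = 0.0f;
        for (int col = 0; col < n; ++col)
            sum += A[row * n + col] * x[col];
        y[row] = sum;
    }
}

/* Hypothetical launch for device pointers d_A, d_x, d_y (error checks omitted):
   int threads = 256;
   int blocks  = (n + threads - 1) / threads;
   sgemv_naive<<<blocks, threads>>>(n, d_A, d_x, d_y);                         */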
Keywords :
digital signal processing chips; matrix multiplication; GPU programmability; GeForce 8800GTX; Intel Math Kernel Library; Intel Xeon E5335 CPU; Jacobi iterative method; NVIDIA BLAS library CUBLAS; NVIDIA CUDA architecture; SSE3 SIMD instructions; data transfer; linear equations; matrix-vector multiplication; Computer architecture; Concurrent computing; Equations; Iterative methods; Jacobian matrices; Kernel; Libraries; Read-write memory; Registers; Yarn;
Conference_Titel :
2008 IEEE International Symposium on Parallel and Distributed Processing (IPDPS 2008)
Conference_Location :
Miami, FL
Print_ISBN :
978-1-4244-1693-6
ISSN :
1530-2075
DOI :
10.1109/IPDPS.2008.4536350