Title :
Memory-efficient parallel computation of tensor and matrix products for big tensor decomposition
Author :
Ravindran, Niranjay ; Sidiropoulos, Nicholas D. ; Smith, Shaden ; Karypis, George
Author_Institution :
Dept. of Electr. & Comput. Eng., Univ. of Minnesota, Minneapolis, MN, USA
Abstract :
Low-rank tensor decomposition has many applications in signal processing and machine learning, and is becoming increasingly important for analyzing big data. A significant challenge is the computation of intermediate products, which can be much larger than the final result of the computation, or even the original tensor. We propose a scheme that allows memory-efficient in-place updates of intermediate matrices. Motivated by recent advances in big tensor decomposition from multiple compressed replicas, we also consider the related problem of memory-efficient tensor compression. The resulting algorithms can be parallelized, and can exploit, but do not require, sparsity.
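To illustrate the intermediate-product blow-up the abstract refers to, the sketch below shows the standard MTTKRP (matricized tensor times Khatri-Rao product) step used in ALS-based CP decomposition. This is a generic illustration, not the authors' specific scheme: the naive version materializes a JK x R Khatri-Rao intermediate, which can dwarf the I x R result, while the slice-wise version accumulates the same result without forming it.

```python
import numpy as np

def mttkrp_naive(X, B, C):
    """Mode-1 MTTKRP that materializes the full (J*K) x R
    Khatri-Rao intermediate -- the memory bottleneck."""
    I, J, K = X.shape
    R = B.shape[1]
    # Khatri-Rao product C (.) B: row (k, j) equals C[k] * B[j]
    KR = (C[:, None, :] * B[None, :, :]).reshape(K * J, R)
    X1 = X.reshape(I, J * K, order='F')  # mode-1 unfolding [X_1 ... X_K]
    return X1 @ KR

def mttkrp_sliced(X, B, C):
    """Equivalent result, accumulated slice by slice: the k-th
    frontal slice contributes X[:, :, k] @ (B row-scaled by C[k]),
    so no (J*K) x R intermediate is ever formed."""
    I, J, K = X.shape
    R = B.shape[1]
    M = np.zeros((I, R))
    for k in range(K):
        M += X[:, :, k] @ (B * C[k])
    return M
```

Both routines compute the same I x R factor-update matrix; the slice-wise form trades the large intermediate for a loop that is also straightforward to parallelize over slices.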
Keywords :
Big Data; data analysis; mathematics computing; matrix decomposition; parallel algorithms; tensors; big data analysis; big tensor decomposition; compressed replicas; low-rank tensor decomposition; machine learning; matrix products; memory-efficient parallel computation; signal processing; tensor products; Complexity theory; Explosions; Instruction sets; Least squares approximations; Matrix decomposition; Memory management; Tensile stress
Conference_Titel :
2014 48th Asilomar Conference on Signals, Systems and Computers
Print_ISBN :
978-1-4799-8295-0
DOI :
10.1109/ACSSC.2014.7094512