Title :
A Study of Data Interlock in Computational Networks for Sparse Matrix Multiplication
Author_Institution :
Department of Computer Science, University of Pittsburgh, Pittsburgh, PA 15260.
Abstract :
The general question addressed in this study is: are regular networks suitable for sparse matrix computations? More specifically, we consider a special-purpose self-timed computational array designed for a specific dense matrix computation. We add to each cell in the network the capability of recognizing and skipping operations that involve zero operands, and then ask how efficient the resulting network is for sparse matrix computation. To answer this question, it is necessary to study the effect of data interlock on the performance of self-timed networks. To this end, the class of pseudosystolic networks is introduced as a hybrid class between systolic and self-timed networks. Networks in this class are easy to analyze, and they provide a means for studying the worst-case performance of self-timed networks. The well-known concept of computation fronts is also generalized to include irregular flow of data, and a technique based on the propagation of such computation fronts is suggested for estimating the processing time and the communication time of pseudosystolic networks.
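The zero-skipping behavior the abstract attributes to each cell can be illustrated with a minimal sketch. The function below is hypothetical (it is not from the paper, which concerns a hardware array): it performs a dense-format matrix product while skipping every multiply-accumulate whose operands include a zero, and it counts the operations actually performed so the saving can be compared against the full n·m·k count.

```python
# Hypothetical sketch of zero-operand skipping, as the abstract describes
# for each cell of the array. Names and counters are illustrative only.

def sparse_matmul_with_skips(A, B):
    """Multiply matrices A and B (lists of lists), skipping zero-operand MACs.

    Returns (C, performed): the product matrix and the number of
    multiply-accumulate operations actually executed.
    """
    n, k, m = len(A), len(B), len(B[0])
    C = [[0] * m for _ in range(n)]
    performed = 0
    for i in range(n):
        for j in range(m):
            for p in range(k):
                a, b = A[i][p], B[p][j]
                if a == 0 or b == 0:
                    continue  # cell recognizes a zero operand and skips
                C[i][j] += a * b
                performed += 1
    return C, performed
```

For two 2x2 matrices with several zeros, only the nonzero-operand MACs are executed instead of the full 8; the efficiency question the paper raises is how much of this arithmetic saving survives once data interlock between self-timed cells is taken into account.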
Keywords :
Computation fronts; computational networks; data interlock; performance evaluation; pseudosystolic networks; self-timed systems; sparse matrices; systolic arrays;
Journal_Title :
IEEE Transactions on Computers
DOI :
10.1109/TC.1987.5009541