Title :
Parallel architectures for artificial neural nets
Author :
Kung, S.Y. ; Hwang, J.N.
Author_Institution :
Dept. of Electr. Eng., Princeton Univ., NJ, USA
Abstract :
The authors advocate digital VLSI architectures for implementing a wide variety of artificial neural nets (ANNs). A programmable systolic array is proposed, which maximizes the strength of VLSI in terms of intensive and pipelined computing, yet circumvents its limitation on communication. The array is intended to be more general-purpose than most other proposed ANN architectures. It can execute a variety of algorithms in both the search and learning phases of ANNs, covering single-layer recurrent nets (e.g. Hopfield nets) and multilayer feedforward nets (e.g. perceptron-like nets). Although design considerations for the learning phase are somewhat more involved, the proposed design accommodates several key learning rules very well, such as the Hebbian, delta, competitive, and back-propagation learning rules. Compared to analog neural circuits, the proposed systolic architecture offers greater flexibility, higher precision, and full pipelineability.
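The abstract's notion of a ring-connected systolic array for the search (retrieval) phase can be illustrated with a small software emulation. The sketch below is not from the paper; it is an assumed, simplified model in which each processing element (PE) holds one row of the weight matrix and accumulates a partial sum as the state vector circulates one position per beat, with a threshold nonlinearity applied locally at the end:

```python
# Illustrative sketch (hypothetical, not the authors' design): emulating a
# ring-connected systolic array performing one Hopfield retrieval step.
# PE i stores row i of the weight matrix W; the state values circulate
# around the ring so that after n beats every PE has seen every value once.

def systolic_hopfield_step(W, s):
    n = len(s)
    acc = [0.0] * n              # one accumulator per PE
    for t in range(n):           # beat t of the systolic schedule
        for i in range(n):       # all PEs fire in parallel in hardware
            j = (i + t) % n      # index of the value visiting PE i at beat t
            acc[i] += W[i][j] * s[j]
    # local sign() threshold at each PE yields the updated state
    return [1 if a >= 0 else -1 for a in acc]
```

For example, with weights storing the pattern `[1, -1, 1]` via a zero-diagonal outer product, a corrupted input such as `[1, 1, 1]` is mapped back to the stored pattern in one step. The point of the ring schedule is that each PE needs only a single neighbor-to-neighbor link rather than global broadcast, which is the communication limitation the abstract says the array circumvents.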
Keywords :
VLSI; cellular arrays; digital integrated circuits; integrated circuit technology; learning systems; neural nets; parallel architectures; pipeline processing; Hebbian learning rule; Hopfield nets; artificial neural nets; back-propagation learning rules; competitive learning rule; delta learning rule; digital VLSI architectures; multilayer feedforward nets; parallel architectures; perceptron-like nets; pipelined computing; programmable systolic array; single-layer recurrent nets; Cellular logic arrays; Digital integrated circuits; Integrated circuit fabrication; Learning systems; Neural networks; Parallel architectures; Pipeline processing; Very-large-scale integration;
Conference_Title :
IEEE International Conference on Neural Networks, 1988
Conference_Location :
San Diego, CA, USA
DOI :
10.1109/ICNN.1988.23925