DocumentCode :
2662102
Title :
Learning, function approximation and analog VLSI
Author :
Schwartz, D.B. ; Samalam, V.K.
Author_Institution :
GTE Lab. Inc., Waltham, MA, USA
fYear :
1990
fDate :
1-3 May 1990
Firstpage :
2441
Abstract :
Multilayered perceptrons have been applied to an assortment of problems on low-dimensional spaces where the inputs are spread uniformly across the space. Although there have been some notable successes, learning is typically extremely slow. A number of approaches based on an input layer of units with localized receptive fields and an output layer of adaptive weights have been proposed. These two rather different architectures, the multilayered perceptron and the localized-receptive-field network, suggest the need to tailor the type of network to the problem at hand. For this reason, two chips (both with on-chip learning) that represent opposite ends of a spectrum of structural definition are presented. The first, with a single unit and 24 weights, is designed to be a building block for multilayered perceptrons. The second is a complete circuit whose input is a continuous analog signal and whose output is a continuous, arbitrary function of its input.
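Note: the localized-receptive-field architecture described in the abstract (fixed receptive-field units feeding a single layer of adaptive weights) can be illustrated in software. The following is a minimal sketch, assuming Gaussian receptive-field units and an LMS-trained linear output layer; the 24-unit count echoes the chip's 24 weights, while the centers, width, learning rate, and target function are illustrative choices, not taken from the paper.

    import numpy as np

    rng = np.random.default_rng(0)

    # Fixed localized receptive fields: Gaussian units with centers spread
    # over the 1-D input range (assumed; not the paper's circuit values).
    centers = np.linspace(-1.0, 1.0, 24)   # 24 units, echoing the 24 on-chip weights
    width = 0.15                            # receptive-field width (assumed)

    def hidden(x):
        # Activations of the localized receptive-field layer for scalar input x.
        return np.exp(-((x - centers) ** 2) / (2.0 * width ** 2))

    # Adaptive output weights, adjusted on-line (analogous to on-chip learning).
    w = np.zeros(24)
    eta = 0.1                               # LMS learning rate (assumed)

    def train_step(x, target):
        global w
        phi = hidden(x)
        y = w @ phi
        w += eta * (target - y) * phi       # LMS update of the output weights
        return y

    # Example: approximate an arbitrary 1-D function from random samples.
    f = lambda x: np.sin(3.0 * x)
    for _ in range(2000):
        x = rng.uniform(-1.0, 1.0)
        train_step(x, f(x))

    print("approx at 0.5:", w @ hidden(0.5), "target:", f(0.5))

Because only the output weights adapt while the receptive fields stay fixed, each training step is a simple linear update, which is one reason this style of network learns quickly compared with a full multilayered perceptron.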
Keywords :
VLSI; analogue circuits; learning systems; neural nets; adaptive weights; analog VLSI; continuous analog signal; function approximation; input layer; localized receptive fields; low-dimensional spaces; multilayered perceptrons; on-chip learning; output layer; structural definition; Buildings; Circuits; Computational modeling; Computer networks; Function approximation; Laboratories; Multilayer perceptrons; Neural network hardware; Neural networks; Very large scale integration;
fLanguage :
English
Publisher :
ieee
Conference_Title :
IEEE International Symposium on Circuits and Systems, 1990
Conference_Location :
New Orleans, LA
Type :
conf
DOI :
10.1109/ISCAS.1990.112504
Filename :
112504