DocumentCode :
3294362
Title :
Design of parallel hardware neural network systems from custom analog VLSI 'building block' chips
Author :
Eberhardt, Silvio ; Duong, Tuan ; Thakoor, Anil
Author_Institution :
Jet Propulsion Lab., California Inst. of Technol., Pasadena, CA, USA
fYear :
1989
fDate :
0-0 1989
Firstpage :
183
Abstract :
Hardware to implement feedforward neural networks has been developed for the evaluation of learning algorithms and prototyping of applications. To allow the construction of networks with arbitrary architectures, CMOS VLSI building-block components (e.g. arrays of neurons and synapses) have been designed. These can be cascaded to form networks with hundreds of neurons per layer. A 64-channel multiplexer input neuron chip serves to buffer stored charges for injection into the first synaptic layer. A 32×32 synapse chip design uses multiplier circuits to generate a conductance from stored analog charges representing weights. A 32-channel variable-gain neuron chip applies an adjustable-gain sigmoidal activation function to the sum of currents from the previous synaptic layer. Learning is performed by a host computer that can download weights and inputs onto the feedforward hardware and read resultant network outputs. Weights and input values are stored as charges on on-chip capacitors; these are serially and invisibly refreshed by off-chip circuits that convert values stored in digital memory into analog signals.
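The abstract describes a cascade of synapse-array and neuron chips driven by a host computer. The Python sketch below is only an illustration under assumed interfaces (layer sizes, the gain parameter, and all function names are hypothetical and not taken from the paper): each synapse stage sums weighted inputs, each neuron stage applies an adjustable-gain sigmoid, and the host supplies the weight matrices and reads back the final outputs.

```python
import numpy as np

def synapse_layer(inputs, weights):
    """Model of a synapse chip: multiplier cells produce weighted currents
    that are summed per output channel (here a matrix-vector product)."""
    return weights @ inputs

def neuron_layer(currents, gain=1.0):
    """Model of a variable-gain neuron chip: adjustable-gain sigmoid
    applied to the summed currents from the preceding synaptic layer."""
    return 1.0 / (1.0 + np.exp(-gain * currents))

def feedforward(inputs, weight_stack, gain=1.0):
    """Cascade synapse/neuron stages; in hardware the host computer would
    download weight_stack onto the chips and read the final outputs."""
    x = inputs
    for W in weight_stack:
        x = neuron_layer(synapse_layer(x, W), gain)
    return x

# Example: two cascaded 32x32 stages with placeholder random weights.
rng = np.random.default_rng(0)
weights = [rng.normal(scale=0.1, size=(32, 32)) for _ in range(2)]
outputs = feedforward(rng.random(32), weights, gain=2.0)
```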
Keywords :
CMOS integrated circuits; VLSI; analogue computer circuits; application specific integrated circuits; hybrid computers; learning systems; neural nets; parallel architectures; CMOS VLSI; arbitrary architectures; building-block components; custom analog VLSI; feedforward hardware; learning algorithms; multiplexer input neuron chip; multiplier circuits; neural network systems; neurons; on-chip capacitors; parallel hardware; sigmoidal activation function; stored analog charges; synapse chip design; variable-gain neuron chip; Analog circuits; Application specific integrated circuits; CMOS integrated circuits; Learning systems; Neural networks; Parallel architectures; Very-large-scale integration;
fLanguage :
English
Publisher :
ieee
Conference_Titel :
International Joint Conference on Neural Networks (IJCNN), 1989
Conference_Location :
Washington, DC, USA
Type :
conf
DOI :
10.1109/IJCNN.1989.118697
Filename :
118697