Title :
Circuit implementation of a trainable neural network using the generalized Hebbian algorithm with supervised techniques
Author :
Hasler, Paul ; Akers, Lex
Author_Institution :
Center for Solid State Electron. Res., Arizona State Univ., Tempe, AZ, USA
Abstract :
An efficient hardware implementation of a training algorithm is presented, combining both supervised and unsupervised techniques. In VLSI circuits, effects such as random offsets and mismatch, system distortion, frequency response limitations, and temperature variations perturb the system outputs. Analysis of the generalized Hebbian algorithm (GHA) shows that these small deviations result in only small perturbations of the output statistics and the weight matrix. In addition, the conjugate-gradient optimization algorithm is formulated in continuous time. Both the GHA and the optimization system can be implemented efficiently on a mesh of synapses. The low distortion and high bandwidth (>100 MHz) of the matrix operations, together with a high-performance analog memory, indicate high performance for the GHA. The mathematical analysis bounds the error of the output statistics at less than 1%.
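The generalized Hebbian algorithm referenced in the abstract is Sanger's rule, which extracts principal components with a Hebbian term corrected by a lower-triangular decorrelation term. The paper formulates a continuous-time circuit version; the following is only a minimal discrete-time sketch in NumPy (learning rate, data, and dimensions are illustrative assumptions, not from the paper):

```python
import numpy as np

def gha_step(W, x, eta=0.01):
    """One discrete-time generalized Hebbian (Sanger's rule) update.

    W : (m, n) weight matrix; rows converge to the top-m
        principal components of the input distribution.
    x : (n,) input sample.
    eta : learning rate (illustrative value).
    """
    y = W @ x  # output activations
    # Sanger's rule: Hebbian outer product minus the
    # lower-triangular decorrelation term LT[y y^T] W
    dW = eta * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)
    return W + dW

# Toy run: zero-mean inputs with dominant variance along the first axis
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 3)) * np.array([3.0, 1.0, 0.3])
W = rng.normal(scale=0.1, size=(2, 3))
for x in X:
    W = gha_step(W, x)
# The first row of W aligns with the leading principal component (±e1)
```

In the paper's circuit setting, the matrix-vector products above map onto the mesh of synapses, and the weight update is held in the analog memory; the abstract's perturbation analysis concerns how offsets and mismatch in those operations shift the converged W.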
Keywords :
Hebbian learning; VLSI; analogue storage; conjugate gradient methods; neural chips; optimisation; VLSI circuits; analog memory; conjugate-gradient optimization algorithm; frequency response; generalized Hebbian algorithm; mesh of synapses; mismatch; output statistics; random offsets; supervised; system distortion; temperature deviations; trainable neural network; training algorithm; unsupervised; weight matrix; Algorithm design and analysis; Analog memory; Bandwidth; Circuits; Frequency response; Hardware; Neural networks; Statistical analysis; Temperature; Very large scale integration;
Conference_Titel :
International Joint Conference on Neural Networks (IJCNN), 1992
Conference_Location :
Baltimore, MD
Print_ISBN :
0-7803-0559-0
DOI :
10.1109/IJCNN.1992.287142