DocumentCode :
1132164
Title :
Stochastic competitive learning
Author :
Kosko, Bart
Author_Institution :
Dept. of Electr. Eng., Univ. of Southern California, Los Angeles, CA, USA
Volume :
2
Issue :
5
fYear :
1991
fDate :
9/1/1991
Firstpage :
522
Lastpage :
529
Abstract :
Competitive learning systems are examined as stochastic dynamical systems. This includes continuous and discrete formulations of unsupervised, supervised, and differential competitive learning systems. These systems estimate an unknown probability density function from random pattern samples and behave as adaptive vector quantizers. Synaptic vectors in feedforward competitive neural networks quantize the pattern space and converge to pattern-class centroids or local probability maxima. A stochastic Lyapunov argument shows that competitive synaptic vectors converge to centroids exponentially quickly and reduces competitive learning to stochastic gradient descent. Convergence does not depend on a specific dynamical model of how neuronal activations change. These results extend to competitive estimation of local covariances and higher-order statistics.
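The abstract's central claim, that winner-take-all synaptic vectors act as adaptive vector quantizers converging to pattern-class centroids, can be illustrated with a minimal sketch of unsupervised competitive learning. The Gaussian sample data, unit count, and decaying learning rate below are illustrative assumptions, not details taken from the paper.

    import numpy as np

    # Minimal sketch of unsupervised competitive learning (winner-take-all).
    # Assumptions: three Gaussian clusters stand in for the unknown
    # probability density, and the learning-rate schedule is illustrative.
    rng = np.random.default_rng(0)

    # Random pattern samples drawn from three clusters.
    centers = np.array([[0.0, 0.0], [4.0, 4.0], [0.0, 5.0]])
    samples = np.vstack([c + rng.normal(scale=0.5, size=(300, 2)) for c in centers])
    rng.shuffle(samples)

    # Synaptic vectors of the competing units, initialized to random samples.
    w = samples[rng.choice(len(samples), size=3, replace=False)].copy()

    for t, x in enumerate(samples):
        c_t = 0.1 / (1.0 + 0.01 * t)                   # slowly decreasing learning rate
        j = np.argmin(np.linalg.norm(w - x, axis=1))   # winning (closest) unit
        w[j] += c_t * (x - w[j])                       # move winner toward the sample

    print(w)  # each synaptic vector should settle near a cluster centroid

Only the winning unit learns on each sample, so each synaptic vector performs a stochastic approximation of the centroid of the pattern class it wins, consistent with the centroid-convergence result the abstract describes.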
Keywords :
Lyapunov methods; learning systems; neural nets; probability; stochastic processes; Lyapunov argument; adaptive vector quantizers; centroids; competitive learning; competitive synaptic vectors; neural networks; pattern space; probability density; stochastic dynamical systems; Convergence; Fans; Feedforward neural networks; Higher order statistics; Learning systems; Multi-layer neural network; Neural networks; Neurons; Probability density function; Stochastic processes
fLanguage :
English
Journal_Title :
IEEE Transactions on Neural Networks
Publisher :
IEEE
ISSN :
1045-9227
Type :
jour
DOI :
10.1109/72.134289
Filename :
134289