Title :
Stochastic competitive learning
Author_Institution :
Dept. of Electr. Eng., Univ. of Southern California, Los Angeles, CA, USA
Date :
9/1/1991
Abstract :
Competitive learning systems are examined as stochastic dynamical systems. This includes continuous and discrete formulations of unsupervised, supervised, and differential competitive learning systems. These systems estimate an unknown probability density function from random pattern samples and behave as adaptive vector quantizers. Synaptic vectors in feedforward competitive neural networks quantize the pattern space and converge to pattern class centroids or local probability maxima. A stochastic Lyapunov argument shows that competitive synaptic vectors converge to centroids exponentially quickly and reduces competitive learning to stochastic gradient descent. Convergence does not depend on a specific dynamical model of how neuronal activations change. These results extend to competitive estimation of local covariances and higher-order statistics.
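The adaptive vector quantization described above can be illustrated with a minimal sketch. This is not the paper's formulation, only an assumed toy setup: two Gaussian pattern classes, one synaptic vector per competing neuron, a winner-take-all update, and a 1/n stochastic-approximation gain so that each synaptic vector tracks the sample centroid of the patterns it wins.

```python
import numpy as np

# Toy data (illustrative, not from the paper): two pattern classes as
# Gaussian clusters whose centroids the synaptic vectors should estimate.
rng = np.random.default_rng(0)
centroids = np.array([[0.0, 0.0], [5.0, 5.0]])
samples = np.vstack([c + 0.3 * rng.standard_normal((500, 2)) for c in centroids])
rng.shuffle(samples)

# One synaptic vector per competing neuron, seeded with random pattern samples.
m = samples[rng.choice(len(samples), size=2, replace=False)].copy()
wins = np.zeros(2, dtype=int)

# Unsupervised competitive learning: only the winning (nearest) synaptic
# vector moves toward each random pattern sample.
for x in samples:
    j = int(np.argmin(np.linalg.norm(m - x, axis=1)))  # winner-take-all
    wins[j] += 1
    m[j] += (x - m[j]) / wins[j]  # running mean of the patterns neuron j wins

# After one pass, each synaptic vector sits near the centroid of its
# decision region, i.e. near one of the two class centroids.
```

The decreasing gain 1/wins[j] makes each update an exact running mean, which is one simple way to realize the centroid convergence the abstract describes; the paper's stochastic Lyapunov argument covers more general gain sequences and dynamics.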
Keywords :
Lyapunov methods; learning systems; neural nets; probability; stochastic processes; Lyapunov argument; adaptive vector quantizers; centroids; competitive learning; competitive synaptic vectors; neural networks; pattern space; probability density; stochastic dynamical systems; Convergence; Fans; Feedforward neural networks; Higher order statistics; Learning systems; Multi-layer neural network; Neural networks; Neurons; Probability density function; Stochastic processes;
Journal_Title :
IEEE Transactions on Neural Networks