DocumentCode :
296026
Title :
Linear neural network learning algorithm analysis
Author :
Yin, Hongfeng ; Klasa, Stan
Author_Institution :
Dept. of Comput. Sci., Concordia Univ., Montreal, Que., Canada
Volume :
5
fYear :
1995
fDate :
Nov/Dec 1995
Firstpage :
2847
Abstract :
This paper presents an unsupervised perceptron algorithm and several generalizations. Based on stochastic approximation theory, a general analysis of neural network learning algorithms is provided, and definitions of the convergence speed and robustness of a learning algorithm are given. It is shown that, under certain conditions, the unsupervised perceptron algorithms converge to the principal component of the input data. In addition, the convergence speeds and robustness of the unsupervised perceptron, Oja (1982, 1983), and Widrow-Hoff algorithms are given in explicit form.
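As a rough illustration of the convergence claim in the abstract, the sketch below runs Oja's (1982) single-unit rule, which the abstract cites, on synthetic zero-mean data and checks that the weight vector aligns with the principal component (the leading eigenvector of the input covariance). The covariance matrix, decreasing step-size schedule, and iteration count are illustrative assumptions, not values taken from the paper.

import numpy as np

# Sketch of Oja's (1982) single-unit rule: w <- w + eta * y * (x - y * w),
# with linear output y = w.x. Under suitable decreasing step sizes
# (the stochastic-approximation setting the paper analyzes), w converges
# to the principal component of the input data.
# NOTE: data, step sizes, and iteration count are assumptions for this demo.
rng = np.random.default_rng(0)

# Zero-mean 2-D inputs with a dominant covariance direction.
C = np.array([[3.0, 1.0],
              [1.0, 1.0]])
L = np.linalg.cholesky(C)

w = rng.normal(size=2)
w /= np.linalg.norm(w)

for t in range(1, 20001):
    x = L @ rng.normal(size=2)    # sample with covariance C
    y = w @ x                     # linear unit output
    eta = 1.0 / (100.0 + t)       # decreasing step size
    w += eta * y * (x - y * w)    # Oja's rule

# Compare against the leading eigenvector of C (sign is arbitrary).
eigvals, eigvecs = np.linalg.eigh(C)
v1 = eigvecs[:, -1]
print("learned direction:", w / np.linalg.norm(w))
print("principal eigenvector:", v1)
print("alignment |cos|:", abs(w @ v1) / np.linalg.norm(w))

The alignment value approaches 1, showing convergence to the principal component up to sign; the paper's contribution is the explicit characterization of how fast and how robustly such rules converge, which this toy run does not reproduce.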
Keywords :
approximation theory; generalisation (artificial intelligence); perceptrons; unsupervised learning; convergence speed; generalizations; linear neural network learning algorithm analysis; robustness; stochastic approximation theory; unsupervised perceptron algorithm; Algorithm design and analysis; Approximation algorithms; Approximation methods; Computer science; Convergence; Difference equations; Differential equations; Neural networks; Robustness; Stochastic processes;
fLanguage :
English
Publisher :
ieee
Conference_Titel :
Proceedings of the 1995 IEEE International Conference on Neural Networks
Conference_Location :
Perth, WA
Print_ISBN :
0-7803-2768-3
Type :
conf
DOI :
10.1109/ICNN.1995.488185
Filename :
488185