DocumentCode :
1543276
Title :
Convergence properties and stationary points of a perceptron learning algorithm
Author :
Shynk, John J. ; Roy, Sumit
Author_Institution :
Dept. of Electr. & Comput. Eng., California Univ., Santa Barbara, CA, USA
Volume :
78
Issue :
10
fYear :
1990
fDate :
10/1/1990 12:00:00 AM
Firstpage :
1599
Lastpage :
1604
Abstract :
An analysis of the stationary (convergence) points of an adaptive algorithm that adjusts the perceptron weights is presented. This algorithm is identical in form to the least-mean-square (LMS) algorithm, except that a hard limiter is incorporated at the output of the summer. The algorithm is described in detail, a simple two-input example is presented, and some of its convergence properties are illustrated. When the input of the perceptron is a Gaussian random vector, the stationary points of the algorithm are not unique; they depend on the algorithm step size and the momentum constant. The stationary points of the algorithm are presented, and the properties of the adaptive weight vector near convergence are discussed. Computer simulations that verify the analysis are given.
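The abstract describes an update rule identical in form to LMS, except that the summer output passes through a hard limiter, with a step size and a momentum constant. A minimal sketch of one such update, assuming illustrative parameter names `mu` (step size) and `alpha` (momentum constant) and a desired response `d` in {-1, +1} (none of these names are taken from the paper itself):

```python
import numpy as np

def perceptron_lms_update(w, x, d, mu=0.01, alpha=0.0, prev_dw=None):
    """One weight update of an LMS-style perceptron rule: the usual LMS
    error is formed, but the summer output w @ x first passes through a
    hard limiter (sign function). Hypothetical sketch, not the authors'
    exact formulation."""
    y = np.sign(w @ x)            # hard-limited summer output
    e = d - y                     # error against desired response d
    dw = mu * e * x               # LMS-style gradient step
    if prev_dw is not None:
        dw = dw + alpha * prev_dw # momentum term
    return w + dw, dw
```

With `alpha = 0` this reduces to plain LMS with a hard limiter; the paper's analysis concerns how the stationary points of such a rule depend on `mu` and `alpha` when `x` is a Gaussian random vector.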
Keywords :
adaptive systems; convergence of numerical methods; learning systems; neural nets; Gaussian random vector; adaptive algorithm; convergence; least mean square algorithm; neural networks; perceptron learning algorithm; stationary points; Adaptive algorithm; Algorithm design and analysis; Convergence; Feedforward neural networks; Least squares approximation; Multi-layer neural network; Multilayer perceptrons; Neural networks; Neurons; Pattern recognition;
fLanguage :
English
Journal_Title :
Proceedings of the IEEE
Publisher :
IEEE
ISSN :
0018-9219
Type :
jour
DOI :
10.1109/5.58345
Filename :
58345