DocumentCode
295983
Title
A class of simple nonlinear 1-unit PCA neural networks
Author
Peper, Ferdinand; Noda, Hideki
Author_Institution
Commun. Res. Lab., Minist. of Posts & Telecommun., Kobe, Japan
Volume
1
fYear
1995
fDate
Nov/Dec 1995
Firstpage
285
Abstract
This paper proposes a class of principal component analysis (PCA) neural networks that have a nonlinear input-output relationship and learn the first principal component of the input data. Each member of the class is characterized by a parameter p in the range (-1, 1) and trains its weight vector w by the learning rule Δw = γ[x · sign(xᵀw)|xᵀw|^p − w], where γ is the gain factor and x is the input vector. The loss term, −w, is much simpler than the typical feedback loss terms of other 1-unit PCA neural networks in the literature, yet it still prevents the length of the weight vector from growing without bound. The authors mathematically characterize the solutions to which the neural networks converge, and confirm convergence to these solutions by simulation.
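The learning rule in the abstract can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' code: the data distribution, gain factor γ = 0.01, and choice p = 0.5 are assumptions for demonstration, and the synthetic data is constructed so its first principal component lies along the first coordinate axis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 2-D data: variance 4 along the first axis, 0.25 along the
# second, so the first principal component direction is [1, 0].
X = rng.normal(size=(5000, 2)) * np.array([2.0, 0.5])

p = 0.5       # nonlinearity parameter, must lie in (-1, 1)
gamma = 0.01  # gain factor (assumed value for this sketch)
w = rng.normal(size=2)

for x in X:
    y = x @ w  # unit output xᵀw
    # Learning rule from the abstract: Δw = γ[x · sign(xᵀw)|xᵀw|^p − w].
    # The simple loss term −w keeps the weight vector length bounded.
    w += gamma * (x * np.sign(y) * np.abs(y) ** p - w)

# After training, w should align (up to sign) with the first
# principal component direction [1, 0].
direction = w / np.linalg.norm(w)
print(direction)
```

With one pass over 5000 samples the −w decay term (timescale ≈ 1/γ steps) has long since equilibrated, so the normalized weight vector points close to the leading principal direction.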
Keywords
convergence; learning (artificial intelligence); neural nets; statistical analysis; gain factor; learning rule; nonlinear 1-unit PCA neural networks; nonlinear input-output relationship; weight vector length; Computer networks; Electronic mail; Image coding; Image converters; Neural networks; Neurofeedback; Neurons; Principal component analysis; Signal processing; Statistics
fLanguage
English
Publisher
ieee
Conference_Titel
Proceedings of the 1995 IEEE International Conference on Neural Networks
Conference_Location
Perth, WA
Print_ISBN
0-7803-2768-3
Type
conf
DOI
10.1109/ICNN.1995.488110
Filename
488110