Title :
Principal component extraction using recursive least squares learning
Author :
Bannour, Sami ; Azimi-Sadjadi, Mahmood R.
Author_Institution :
Dept. of Electr. Eng., Colorado State Univ., Fort Collins, CO, USA
fDate :
3/1/1995
Abstract :
A new neural network-based approach is introduced for recursive computation of the principal components of a stationary vector stochastic process. The neurons of a single-layer network are sequentially trained using a recursive least squares (RLS) type algorithm to extract the principal components of the input process. The optimality criterion is based on retaining the maximum information contained in the input sequence so that the network inputs can be reconstructed from the corresponding outputs with minimum mean squared error. A proof of convergence of the weight vectors to the principal eigenvectors is also established. A simulation example is given to show the accuracy and speed advantages of this algorithm over existing methods. Finally, the application of this learning algorithm to image data reduction and to the filtering of images degraded by additive and/or multiplicative noise is considered.
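Code_Sketch :
The abstract describes training the neurons of a single-layer network one at a time with an RLS-type rule so that the inputs can be reconstructed from the corresponding outputs with minimum mean squared error. The Python sketch below illustrates that idea with a deflation-based, RLS-style sequential principal-component extractor; the specific update rule, the per-neuron gain d, the forgetting factor, and the function name rls_pca are illustrative assumptions for this sketch, not the authors' exact algorithm.

import numpy as np

def rls_pca(X, n_components, forgetting=1.0, seed=0):
    """Estimate leading principal eigenvectors from samples X (n_samples x dim).

    Assumptions (not taken from the paper): random unit-norm initialization,
    a scalar gain d[i] per neuron playing the role of the RLS inverse-covariance
    term, and deflation of already-extracted components before the next neuron.
    """
    rng = np.random.default_rng(seed)
    n_samples, dim = X.shape
    W = rng.standard_normal((n_components, dim))
    W /= np.linalg.norm(W, axis=1, keepdims=True)
    d = np.full(n_components, 1e-3)           # per-neuron recursive gain accumulator

    for x in X:
        residual = x.astype(float).copy()
        for i in range(n_components):
            w = W[i]
            y = w @ residual                   # neuron output
            d[i] = forgetting * d[i] + y * y   # recursive output-energy estimate
            err = residual - y * w             # reconstruction error for this neuron
            w += (y / d[i]) * err              # RLS-type weight update
            W[i] = w / np.linalg.norm(w)       # keep unit norm for numerical stability
            residual -= (W[i] @ residual) * W[i]   # deflate before training the next neuron
    return W

if __name__ == "__main__":
    # Toy check: correlated Gaussian data; compare with batch eigenvectors.
    rng = np.random.default_rng(1)
    C = np.array([[4.0, 1.5, 0.5],
                  [1.5, 2.0, 0.3],
                  [0.5, 0.3, 1.0]])
    X = rng.multivariate_normal(np.zeros(3), C, size=5000)
    W = rls_pca(X, n_components=2)
    eigvals, eigvecs = np.linalg.eigh(np.cov(X.T))
    top = eigvecs[:, ::-1][:, :2].T
    # Cosine similarity with the batch eigenvectors (sign is arbitrary); values near 1 indicate agreement.
    print(np.abs(np.sum(W * top, axis=1)))

On data drawn from a fixed covariance, the recovered weight vectors should align (up to sign) with the leading batch eigenvectors, which is the sense in which the recursive scheme extracts the principal components.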
Keywords :
data reduction; filtering theory; learning (artificial intelligence); least squares approximations; neural nets; stochastic processes; accuracy; additive noise; convergence proof; degraded image filtering; image data reduction; maximum information retention; minimum mean squared error; multiplicative noise; network input reconstruction; optimality criterion; principal component extraction; principal eigenvectors; recursive least squares learning; sequential training; simulation; single-layer neural network; speed advantages; stationary vector stochastic process; weight vectors; Additive noise; Computer networks; Convergence; Data mining; Image reconstruction; Least squares methods; Neural networks; Neurons; Stochastic processes;
Journal_Title :
IEEE Transactions on Neural Networks