Abstract:
In this paper, learning rules are proposed for a two-layered network consisting of N input units and M output units, with full connections between the two layers and full lateral connections between the output units. The learning rules extract the principal components of a given input data set; that is, the weight vectors of the network converge to the eigenvectors belonging to the M largest eigenvalues of the covariance matrix of the input. Simulation results are presented to illustrate the convergence behaviour of the network. Issues for further research include a detailed mathematical analysis of the properties of the learning rules, the use of the network for feature extraction in pattern recognition applications, and an investigation of other learning architectures for principal component extraction.
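The abstract does not spell out the learning rules themselves, so the following NumPy sketch is only an illustration of the kind of architecture described: Hebbian (Oja-type) updates on the feedforward weights combined with anti-Hebbian updates on the full lateral connections between the output units. The sizes N and M, the learning rates, and the specific update forms are assumptions, not the paper's exact rules; the final check merely compares the learned weight vectors with the principal subspace of the input covariance matrix.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: N input units, M output units (as in the abstract)
N, M = 8, 3
n_samples, n_epochs = 5000, 20

# Synthetic zero-mean input data with an arbitrary covariance structure
A = rng.normal(size=(N, N))
X = rng.normal(size=(n_samples, N)) @ A.T
X -= X.mean(axis=0)

# Feedforward weights W (M x N) and lateral weights L (M x M, zero diagonal)
W = rng.normal(scale=0.1, size=(M, N))
L = np.zeros((M, M))

eta_w, eta_l = 1e-3, 1e-3  # assumed learning rates

for _ in range(n_epochs):
    for x in X:
        # Output with the lateral feedback settled to its fixed point:
        # y = W x + L y  =>  y = (I - L)^(-1) W x
        y = np.linalg.solve(np.eye(M) - L, W @ x)

        # Oja-type Hebbian update for the feedforward weights
        W += eta_w * (np.outer(y, x) - (y**2)[:, None] * W)

        # Anti-Hebbian update for the lateral weights, driving the
        # output units toward mutual decorrelation
        dL = -eta_l * np.outer(y, y)
        np.fill_diagonal(dL, 0.0)
        L += dL

# Compare the learned weight vectors with the eigenvectors belonging to
# the M largest eigenvalues of the input covariance matrix
C = np.cov(X, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(C)          # eigenvalues in ascending order
top = eigvecs[:, -M:]                         # top-M principal directions

# Fraction of each (normalised) weight vector lying in the principal subspace
overlap = np.linalg.norm(top.T @ W.T / np.linalg.norm(W, axis=1), axis=0)
print("per-unit overlap with the principal subspace:", np.round(overlap, 3))
```

With these (assumed) settings, the per-unit overlaps approach 1, i.e. the weight vectors end up in the span of the leading eigenvectors; whether each unit aligns with an individual eigenvector or only with the principal subspace depends on the exact form of the lateral learning rule.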