DocumentCode
489125
Title
Learning Techniques for Structured Networks
Author
Polycarpou, Marios M. ; Ioannou, Petros A.
Author_Institution
Department of Electrical Engineering-Systems, University of Southern California, Los Angeles, CA 90089-0781, U.S.A.
fYear
1991
fDate
26-28 June 1991
Firstpage
2413
Lastpage
2418
Abstract
A special class of feedforward neural networks, referred to as structured networks, has recently been introduced as a method for solving matrix algebra problems in an inherently parallel formulation. In this paper we present a convergence analysis for the training of structured networks. Since the learning techniques used in structured networks are the same as those employed in training neural networks, the issue of convergence is discussed not only from a numerical perspective but also as a means of deriving insight into connectionist learning. In our analysis, we develop bounds on the learning rate under which we prove exponential convergence of the weights to their correct values for a class of matrix algebra problems that includes linear equation solving, matrix inversion and Lyapunov equation solving. For a special class of problems we introduce what we call the orthogonalised backpropagation algorithm, an optimal recursive update law for minimising a least-squares cost functional that guarantees exact convergence in one epoch. Several learning issues, such as normalising techniques, persistency of excitation, input scaling and non-unique solution sets, are investigated.
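The exponential-convergence claim for linear equation solving can be illustrated with a minimal sketch. This is a hypothetical example, not the paper's formulation: it solves A x = b by gradient descent on the least-squares cost J(x) = ½‖A x − b‖², which is the gradient-type update underlying backpropagation training of such networks. The learning-rate bound η < 2 / λ_max(AᵀA) used below is the standard condition for this iteration to converge exponentially; the specific matrix, vector, and iteration count are illustrative choices.

```python
import numpy as np

# Illustrative problem: solve A x = b (exact solution is x = [2, 3]).
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([9.0, 8.0])

# Learning rate chosen safely inside the bound 0 < eta < 2 / lambda_max(A^T A),
# under which the error contracts by a constant factor each step
# (exponential convergence of the "weights" x to their correct values).
eta = 1.0 / np.linalg.eigvalsh(A.T @ A).max()

x = np.zeros(2)
for _ in range(500):
    # Gradient of J(x) = 0.5 * ||A x - b||^2 is A^T (A x - b).
    x -= eta * A.T @ (A @ x - b)

# x now approximates np.linalg.solve(A, b), i.e. [2, 3].
```

With η at or above 2 / λ_max(AᵀA) the slowest error mode no longer contracts and the iteration diverges, which is why the paper's learning-rate bounds are the crux of the convergence analysis.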
Keywords
Adaptive control; Backpropagation algorithms; Computer networks; Convergence of numerical methods; Cost function; Equations; Feedforward neural networks; Matrices; Neural networks;
fLanguage
English
Publisher
IEEE
Conference_Titel
American Control Conference, 1991
Conference_Location
Boston, MA, USA
Print_ISBN
0-87942-565-2
Type
conf
Filename
4791834
Link To Document