DocumentCode :
3499288
Title :
Analysis and improvement of multiple optimal learning factors for feed-forward networks
Author :
Jesudhas, Praveen ; Manry, Michael T. ; Rawat, Rohit ; Malalur, Sanjeev
Author_Institution :
Univ. of Texas at Arlington, Arlington, TX, USA
fYear :
2011
fDate :
July 31 - Aug. 5, 2011
Firstpage :
2593
Lastpage :
2600
Abstract :
The effects of transforming the net function vector in the multilayer perceptron are analyzed. The use of optimal diagonal transformation matrices on the net function vector is proved to be equivalent to training the network using multiple optimal learning factors (MOLF). A method for linearly compressing large, ill-conditioned MOLF Hessian matrices into smaller, well-conditioned ones is developed. This compression approach is shown to be equivalent to using several hidden units per learning factor. The technique is extended to large networks. In simulations, the proposed algorithm performs almost as well as the Levenberg-Marquardt algorithm, at the computational complexity of a first-order training algorithm.
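The MOLF idea summarized in the abstract can be sketched in NumPy. This is an illustrative toy, not the authors' implementation: a one-hidden-layer sigmoid MLP in which each hidden unit's row of the (negative) input-weight gradient receives its own learning factor, found by solving the linearized least-squares (Gauss-Newton) system whose normal-equations matrix is the MOLF Hessian. The network sizes, random data, and the backtracking guard are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: N patterns, L inputs, Nh hidden units, M outputs (all assumed)
N, L, Nh, M = 64, 3, 8, 2
X = rng.standard_normal((N, L))
T = rng.standard_normal((N, M))

sig = lambda v: 1.0 / (1.0 + np.exp(-v))

W  = 0.1 * rng.standard_normal((Nh, L))   # input weights
Wo = 0.1 * rng.standard_normal((M, Nh))   # output weights (held fixed here)

def forward(Win):
    net = X @ Win.T          # N x Nh net function vector per pattern
    O   = sig(net)           # hidden activations
    return net, O, O @ Wo.T  # outputs: N x M

def mse(Y):
    return np.mean((T - Y) ** 2)

net, O, Y = forward(W)
E0 = mse(Y)

# Steepest-descent direction for W via backpropagation (scale factors omitted)
dY    = T - Y                        # proportional to -dE/dY
delta = (dY @ Wo) * O * (1 - O)      # N x Nh
D     = delta.T @ X                  # Nh x L, D is proportional to -dE/dW

# MOLF step: W_k <- W_k + z_k * D_k, one learning factor z_k per hidden unit.
# Linearize the output around z = 0: dY[p, m]/dz_k = Wo[m, k] * sig'(net_pk) * (D_k . x_p)
dnet = X @ D.T                       # N x Nh, change in net_k per unit of z_k
fp   = O * (1 - O)                   # sigmoid derivative at net
J = np.stack([(fp[:, k] * dnet[:, k])[:, None] * Wo[:, k][None, :]
              for k in range(Nh)], axis=2).reshape(N * M, Nh)

r = dY.reshape(N * M)
# Least squares min_z ||r - J z||^2; J^T J is the (possibly ill-conditioned)
# MOLF Hessian, and lstsq tolerates rank deficiency.
z, *_ = np.linalg.lstsq(J, r, rcond=None)

# z is a descent direction, so a simple backtracking guard ensures improvement
s = 1.0
while mse(forward(W + s * z[:, None] * D)[2]) > E0 and s > 1e-8:
    s *= 0.5
E1 = mse(forward(W + s * z[:, None] * D)[2])
```

After the step, `E1 <= E0`; the paper's contribution beyond this basic scheme is compressing the Nh x Nh system into a smaller, better-conditioned one by grouping several hidden units per learning factor.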
Keywords :
Hessian matrices; learning (artificial intelligence); multilayer perceptrons; Levenberg Marquardt algorithm; MOLF Hessian matrices; compression approach; computational complexity; feed-forward networks; first order training algorithm; multilayer perceptron; multiple optimal learning factors; net function vector transformation; optimal diagonal transformation matrices; Algorithm design and analysis; Classification algorithms; Equations; Mathematical model; Multilayer perceptrons; Training; Vectors;
fLanguage :
English
Publisher :
IEEE
Conference_Titel :
The 2011 International Joint Conference on Neural Networks (IJCNN)
Conference_Location :
San Jose, CA
ISSN :
2161-4393
Print_ISBN :
978-1-4244-9635-8
Type :
conf
DOI :
10.1109/IJCNN.2011.6033557
Filename :
6033557