DocumentCode
2707628
Title
The multi-phase method in fast learning algorithms
Author
Cheung, Chi-Chung; Ng, Sin-Chun
Author_Institution
Dept. of Electron. & Inf. Eng., Hong Kong Polytech. Univ., Hong Kong, China
fYear
2009
fDate
14-19 June 2009
Firstpage
552
Lastpage
559
Abstract
The backpropagation (BP) learning algorithm is the most widely used supervised learning technique and is extensively applied in the training of multi-layer feed-forward neural networks. Many modifications proposed to improve the performance of BP have focused on solving the "flat spot" problem to increase the convergence rate. However, their performance is limited by the error overshooting problem. A novel approach called BP with a two-phase magnified gradient function (2P-MGFPROP) was introduced to overcome the error overshooting problem and hence speed up the convergence rate of MGFPROP. In this paper, this approach is further enhanced by dividing the learning process into multiple phases and assigning different fast learning algorithms to different phases, improving the convergence rate on different adaptive problems. The performance investigation shows that the convergence rate can be increased by up to two times compared with existing fast learning algorithms.
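The abstract describes the multi-phase idea only at a high level: training starts with a gradient-magnifying update rule to escape "flat spot" regions where the sigmoid derivative is tiny, and later switches to a different rule to avoid error overshooting near convergence. The sketch below is a minimal, hypothetical illustration of that phase-switching structure on an XOR toy problem; the two-phase schedule, the MSE switching threshold, and the magnification rule (o(1-o))^(1/S) are assumptions made for illustration, not the authors' exact MGFPROP/2P-MGFPROP formulation from the paper.

    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    # XOR toy problem: 4 patterns, 2 inputs, 1 output
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    T = np.array([[0], [1], [1], [0]], dtype=float)

    # One hidden layer of 4 sigmoid units (sizes are arbitrary for the demo)
    W1 = rng.normal(scale=0.5, size=(2, 4)); b1 = np.zeros(4)
    W2 = rng.normal(scale=0.5, size=(4, 1)); b2 = np.zeros(1)
    lr = 0.5

    # Phase schedule: (magnification exponent S, MSE threshold to leave phase).
    # Phase 1 magnifies the gradient (S > 1) to escape flat spots; phase 2
    # reverts to plain BP (S = 1) to avoid error overshooting near convergence.
    phases = [(2.0, 0.05), (1.0, 0.0)]
    phase = 0

    for epoch in range(20000):
        # Forward pass
        H = sigmoid(X @ W1 + b1)
        O = sigmoid(H @ W2 + b2)
        err = T - O
        mse = float(np.mean(err ** 2))
        if mse < 1e-4:
            break

        S, threshold = phases[phase]
        if phase + 1 < len(phases) and mse < threshold:
            phase += 1                      # switch to the next learning phase
            S, threshold = phases[phase]

        # Backward pass: raising the derivative term o(1-o) to 1/S inflates
        # small gradients where the sigmoid saturates (the flat-spot problem)
        d_out = err * (O * (1.0 - O)) ** (1.0 / S)
        d_hid = (d_out @ W2.T) * (H * (1.0 - H)) ** (1.0 / S)

        # Gradient-ascent-style updates on the error (standard BP bookkeeping)
        W2 += lr * H.T @ d_out; b2 += lr * d_out.sum(axis=0)
        W1 += lr * X.T @ d_hid; b1 += lr * d_hid.sum(axis=0)

    print("final MSE:", mse, "finished in phase", phase + 1)

The paper's multi-phase extension generalizes this two-entry schedule: each phase can carry a different fast learning algorithm rather than just a different exponent, with the switching criterion chosen per problem.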
Keywords
backpropagation; feedforward neural nets; gradient methods; backpropagation learning algorithm; convergence rate; error overshooting problem; fast learning algorithms; multilayer feedforward neural networks; multiphase method; supervised learning technique; two-phase magnified gradient function; Backpropagation algorithms; Computer networks; Convergence; Equations; Feedforward systems; Intelligent networks
fLanguage
English
Publisher
ieee
Conference_Title
2009 International Joint Conference on Neural Networks (IJCNN 2009)
Conference_Location
Atlanta, GA
ISSN
1098-7576
Print_ISBN
978-1-4244-3548-7
Electronic_ISBN
1098-7576
Type
conf
DOI
10.1109/IJCNN.2009.5178684
Filename
5178684