DocumentCode :
303362
Title :
Efficient estimation of dynamically optimal learning rate using higher order derivatives
Author :
Yu, Xiao-Hu
Author_Institution :
Dept. of Radio Eng., Southeast Univ., Nanjing, China
Volume :
2
fYear :
1996
fDate :
3-6 Jun 1996
Firstpage :
1251
Abstract :
Efficient estimation of the dynamically optimal learning rate is a critical problem in backpropagation learning. In this paper, a higher-order method for efficiently estimating the dynamically optimal learning rate is established, which exploits the first four derivatives of the cost function with respect to the learning rate, gathered from an extended feedforward propagation procedure. A near-optimal learning rate for each iteration is obtained with a moderate increase in computational and storage burden, which remains on the same scale as that of the standard backpropagation algorithm. Extensive computer simulations provided in this paper indicate that the present higher-order method achieves rapid convergence and very significant savings in running time.
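To make the idea concrete, below is a minimal sketch (not the paper's algorithm) of a fourth-order line search along the steepest-descent direction: the cost phi(eta) = E(w - eta*g) is modeled by its fourth-order Taylor expansion in eta, and the smallest positive stationary point of that model is taken as the near-optimal learning rate for the iteration. The toy tanh-neuron problem, the function names (`loss`, `grad`, `phi_derivatives`, `near_optimal_rate`), and the finite-difference estimation of the derivatives are all assumptions for illustration; the paper obtains these derivatives exactly through an extended feedforward propagation pass.

```python
import numpy as np

# Toy one-neuron regression problem standing in for a multilayer network;
# the data, shapes, and names here are illustrative, not from the paper.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))
t = np.tanh(X @ np.array([0.5, -1.0, 2.0]))  # targets from a fixed "true" weight vector

def loss(w):
    """Mean squared error of the tanh neuron: the cost E to be minimized."""
    e = np.tanh(X @ w) - t
    return 0.5 * np.mean(e * e)

def grad(w):
    """Exact gradient of the loss (the standard backpropagated gradient)."""
    y = np.tanh(X @ w)
    return X.T @ ((y - t) * (1.0 - y * y)) / len(t)

def phi_derivatives(w, d, h=1e-2):
    """First four derivatives of phi(eta) = loss(w + eta*d) at eta = 0,
    estimated here by central finite differences. (The paper instead
    computes them exactly via extended feedforward propagation.)"""
    f = np.array([loss(w + k * h * d) for k in (-2, -1, 0, 1, 2)])
    d1 = (f[3] - f[1]) / (2 * h)
    d2 = (f[3] - 2 * f[2] + f[1]) / h**2
    d3 = (f[4] - 2 * f[3] + 2 * f[1] - f[0]) / (2 * h**3)
    d4 = (f[4] - 4 * f[3] + 6 * f[2] - 4 * f[1] + f[0]) / h**4
    return d1, d2, d3, d4

def near_optimal_rate(w, eta_fallback=0.1):
    """Minimize the 4th-order Taylor model of phi(eta) along -grad(w)."""
    d = -grad(w)
    d1, d2, d3, d4 = phi_derivatives(w, d)
    # Stationary points of the quartic model solve the cubic
    #   (d4/6) eta^3 + (d3/2) eta^2 + d2 eta + d1 = 0.
    roots = np.roots([d4 / 6.0, d3 / 2.0, d2, d1])
    real = roots[np.abs(roots.imag) < 1e-8].real
    cand = real[real > 0]
    if cand.size == 0:
        return eta_fallback  # model gave no usable step; fall back
    # d1 = phi'(0) < 0 along a descent direction, so the smallest positive
    # stationary point of the model is its first local minimum.
    return cand.min()

w = np.zeros(3)
for it in range(50):
    w = w - near_optimal_rate(w) * grad(w)
print("final loss:", loss(w))
```

Picking the smallest positive real root mirrors the usual line-search convention: since the directional derivative at eta = 0 is negative, the first stationary point of the model along the direction is a local minimum.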
Keywords :
backpropagation; convergence; backpropagation learning; dynamically optimal learning rate; extended feedforward propagation procedure; higher order derivatives; near-optimal learning rate; rapid convergence; running time savings; Acceleration; Application software; Artificial neural networks; Backpropagation algorithms; Computer simulation; Convergence; Cost function; Multi-layer neural network; Neurons; Recursive estimation;
fLanguage :
English
Publisher :
ieee
Conference_Titel :
IEEE International Conference on Neural Networks, 1996
Conference_Location :
Washington, DC
Print_ISBN :
0-7803-3210-5
Type :
conf
DOI :
10.1109/ICNN.1996.549077
Filename :
549077