DocumentCode :
1906509
Title :
A new acceleration technique for the backpropagation algorithm
Author :
Yu, Xiangui ; Loh, Nan K. ; Miller, William C.
Author_Institution :
Dept. of Electr. Eng., Windsor Univ., Ont., Canada
fYear :
1993
fDate :
1993
Firstpage :
1157
Abstract :
An adaptive momentum algorithm that updates the momentum coefficient automatically at every iteration step is presented. The basic idea comes from the optimal gradient method. It is very difficult to obtain the optimal gradient vector by analytical methods, but it can be proven that the optimal gradient vectors in two successive iteration steps are orthogonal. Based on this property, one can use the Gram-Schmidt orthogonalization method to enforce orthogonality of the successive gradient vectors. The result of this process is equivalent to adding a momentum term to the standard backpropagation algorithm, with the momentum coefficient updated automatically at every iteration. Numerical simulations show that the adaptive momentum algorithm can eliminate possible divergent oscillations during initial training, accelerate the learning process, and yield a lower error when final convergence is reached.
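The abstract describes the method only in prose. As an illustration, below is a minimal sketch of one way a Gram-Schmidt-based adaptive momentum update could be realized, shown on a toy quadratic loss rather than an actual backpropagation network. The coefficient formula, learning rate, and all variable names are assumptions made for illustration and are not taken from the paper's exact formulation.

```python
import numpy as np

# Sketch (assumed, not the paper's exact rule): choose the momentum
# coefficient mu each step by a Gram-Schmidt projection so that the new
# weight update is orthogonal to the previous one.

def loss_and_grad(w, A, b):
    """Quadratic loss 0.5*w'Aw - b'w and its gradient (stand-in for the BP error)."""
    return 0.5 * w @ A @ w - b @ w, A @ w - b

rng = np.random.default_rng(0)
A = np.diag([1.0, 10.0])          # ill-conditioned toy problem
b = np.array([1.0, 1.0])
w = rng.normal(size=2)

eta = 0.05                        # learning rate (illustrative value)
prev_update = np.zeros_like(w)    # previous weight change (momentum term)

for step in range(50):
    loss, g = loss_and_grad(w, A, b)

    # Adaptive momentum coefficient: cancel the component of the gradient
    # step along the previous update (Gram-Schmidt), so that
    # update . prev_update == 0.
    denom = prev_update @ prev_update
    mu = eta * (g @ prev_update) / denom if denom > 1e-12 else 0.0

    update = -eta * g + mu * prev_update
    w += update
    prev_update = update

    if step % 10 == 0:
        print(f"step {step:2d}  loss {loss:.6f}  mu {mu:+.4f}")
```

With this choice of mu, each new update is orthogonal to the preceding one by construction, which mirrors the orthogonality property of successive optimal gradient vectors that the abstract invokes; the coefficient changes every iteration rather than being a fixed hyperparameter.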
Keywords :
backpropagation; iterative methods; neural nets; Gram-Schmidt orthogonalization; adaptive momentum algorithm; backpropagation; convergence; gradient vectors; learning process; neural nets; optimal gradient method; Acceleration; Backpropagation algorithms; Convergence of numerical methods; Feedforward neural networks; Filters; Gradient methods; Multi-layer neural network; Neural networks; Numerical simulation; Read only memory;
fLanguage :
English
Publisher :
ieee
Conference_Titel :
1993 IEEE International Conference on Neural Networks
Conference_Location :
San Francisco, CA
Print_ISBN :
0-7803-0999-5
Type :
conf
DOI :
10.1109/ICNN.1993.298720
Filename :
298720