DocumentCode :
324538
Title :
An accelerated recurrent network training algorithm
Author :
Atiya, Amir; Parlos, Alexander
Author_Institution :
Dept. of Electr. Eng., California Inst. of Technol., Pasadena, CA, USA
Volume :
2
fYear :
1998
fDate :
4-9 May 1998
Firstpage :
1101
Abstract :
There have been extensive efforts to develop training algorithms for recurrent neural networks. A variety of algorithms have been developed, yet recurrent network training remains plagued by slow convergence. The goal of this paper is to develop a new algorithm based on approximating the direction of the error gradient. For most typical problems, the new algorithm computes the weight update with lower computational complexity than competing techniques. In addition, it reaches the error minimum in far fewer iterations: typically within about 5 to 10 iterations, compared to roughly 1,000 for competing techniques.
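A minimal sketch of the gradient-direction idea described above (an illustrative assumption, not the paper's exact method): treat the hidden states of a simple discrete-time recurrent network as the controlled variables, choose a state change that descends the output error, and solve a rank-one least-squares problem for a weight update that approximately realizes that change. All names, sizes, and constants below are hypothetical.

import numpy as np

rng = np.random.default_rng(0)
n, T = 8, 50                             # illustrative state size and sequence length
W = rng.normal(scale=0.3, size=(n, n))   # recurrent weight matrix
x = rng.normal(size=n)                   # initial state
target = rng.normal(size=(T, n))         # illustrative teacher signal
eta = 0.1                                # step size (assumed)

for t in range(T):
    x_new = np.tanh(W @ x)               # one step of the recurrent dynamics
    err = x_new - target[t]              # output error at this step
    dx = -eta * err                      # desired state change: a descent direction
    # Map the desired state change back through tanh' (clipped for stability):
    dpre = dx / np.maximum(1.0 - x_new ** 2, 1e-3)
    # Rank-one least-squares update: find dW with dW @ x approximately dpre.
    dW = np.outer(dpre, x) / (x @ x + 1e-8)
    W += dW
    x = x_new

The point of the sketch is that each weight update costs only a matrix-vector product and an outer product, rather than the full gradient recursions of methods such as real-time recurrent learning; the actual complexity and convergence claims are those stated in the abstract.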
Keywords :
computational complexity; convergence; learning (artificial intelligence); recurrent neural nets; accelerated recurrent network training algorithm; error gradient direction approximation; error minimum; recurrent neural networks; Acceleration; Computer networks; Control systems; Convergence; Green's function methods; Nonlinear dynamical systems; Nonlinear equations; Recurrent neural networks; Signal processing; Signal processing algorithms
fLanguage :
English
Publisher :
ieee
Conference_Titel :
1998 IEEE International Joint Conference on Neural Networks Proceedings (IEEE World Congress on Computational Intelligence)
Conference_Location :
Anchorage, AK
ISSN :
1098-7576
Print_ISBN :
0-7803-4859-1
Type :
conf
DOI :
10.1109/IJCNN.1998.685926
Filename :
685926