Title :
Training algorithm based on Newton's method with dynamic error control
Author :
Huang, S.J. ; Koh, S.N. ; Tang, H.K.
Author_Institution :
Sch. of Electr. & Electron. Eng., Nanyang Technol. Univ., Singapore
Abstract :
The use of Newton's method with dynamic error control as a training algorithm for the backpropagation (BP) neural network is considered. It can be proved that Newton's method converges at a second-order (quadratic) rate, whereas the widely used steepest-descent method converges only at a first-order (linear) rate. This suggests that Newton's method might be a faster training algorithm for the BP network. The updating equations of the two methods are analyzed in detail to extract some important properties with reference to the characteristics of the error surface. The common benchmark XOR problem is used to compare the performance of the two methods.
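The contrast between the two update rules summarized in the abstract can be sketched on a toy one-dimensional objective. This is an illustrative example, not code from the paper: the objective f(w) = exp(w) - 2w, the fixed step size, and the iteration count are all assumptions chosen to make the convergence-rate difference visible.

```python
# Sketch (not from the paper): first-order steepest descent vs.
# second-order Newton updates on f(w) = exp(w) - 2w, minimized at w = ln 2.
# The step size 0.1 and the 10 iterations are illustrative assumptions.
import math

def grad(w):
    """First derivative f'(w) = exp(w) - 2."""
    return math.exp(w) - 2.0

def hess(w):
    """Second derivative f''(w) = exp(w) (a scalar 'Hessian' in 1-D)."""
    return math.exp(w)

w_sd = 0.0      # steepest-descent iterate
w_newton = 0.0  # Newton iterate

for _ in range(10):
    w_sd -= 0.1 * grad(w_sd)                      # first-order update
    w_newton -= grad(w_newton) / hess(w_newton)   # second-order update

target = math.log(2.0)
print("steepest-descent error:", abs(w_sd - target))
print("Newton error:          ", abs(w_newton - target))
```

After ten iterations the Newton iterate is at the minimizer to machine precision, while steepest descent with this fixed step still carries an error on the order of 0.1, mirroring the second-order versus first-order convergence the paper exploits.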
Keywords :
backpropagation; neural nets; Newton's method; common benchmark XOR problem; dynamic error control; error surface characteristics; neural network; performance; steepest-descent method; training algorithm; updating equations; Algorithm design and analysis; Backpropagation algorithms; Computer networks; Equations; Error correction; Multi-layer neural network; Neural networks; Neurons; Newton method; Shape;
Conference_Titel :
IJCNN International Joint Conference on Neural Networks, 1992
Conference_Location :
Baltimore, MD
Print_ISBN :
0-7803-0559-0
DOI :
10.1109/IJCNN.1992.227085