Title :
A fast multilayer neural-network training algorithm based on the layer-by-layer optimizing procedures
Author :
Wang, Gou-Jen ; Chen, Chih-Cheng
Author_Institution :
Dept. of Mech. Eng., Nat. Chung-Hsing Univ., Taichung, Taiwan
fDate :
5/1/1996
Abstract :
A new, faster learning algorithm for adjusting the weights of a multilayer feedforward neural network is proposed. In this algorithm, the weight matrix (W2) of the output layer and the output vector (Y) of the previous layer are treated as two sets of variables. An optimal solution pair (W2*, YP*) is found that minimizes the sum-squared error over the input patterns. YP* is then used as the desired output of the previous layer, and the optimal weight matrices and layer output vectors of the hidden layers are found by the same method used for the output layer. In addition, a dynamic forgetting-factor method makes the proposed algorithm even more powerful for dynamic system identification. Computer simulations show that the new algorithm outperforms other learning algorithms in both convergence speed and required computation time.
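The abstract describes the layer-by-layer idea only at a high level. The following Python/NumPy sketch illustrates one way such a scheme can be organized for a single hidden layer: solve for the output weights W2 in closed form given the current hidden outputs Y, back out a target hidden output YP* from W2, and fit the hidden weights to reproduce it. It is an illustration under assumptions, not the authors' exact procedure; the sigmoid activation, linear output layer, pseudo-inverse solves, and all names other than W1, W2, Y, and YP* are hypothetical, and the paper's dynamic forgetting factors are omitted.

# Sketch of a layer-by-layer least-squares training step (assumptions noted
# above; this is not the paper's exact algorithm).
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def inv_sigmoid(y, eps=1e-6):
    # Inverse of the logistic sigmoid, with clipping to stay in its range.
    y = np.clip(y, eps, 1.0 - eps)
    return np.log(y / (1.0 - y))

def layer_by_layer_step(X, D, W1, W2):
    """One sweep over the pattern matrices.

    X: (n_in, P) input patterns, D: (n_out, P) desired outputs,
    W1: (n_hid, n_in) hidden weights, W2: (n_out, n_hid) output weights.
    """
    # Forward pass: hidden-layer output for all P patterns.
    Y = sigmoid(W1 @ X)                                  # (n_hid, P)

    # 1) Optimal output weights for the current Y (linear least squares):
    #    minimize ||W2 Y - D||_F^2  ->  W2* = D Y^+ (pseudo-inverse of Y).
    W2_star = D @ np.linalg.pinv(Y)

    # 2) Target hidden output YP* for the new W2 (least-norm solution of
    #    W2* YP = D), clipped into the sigmoid's open range.
    YP_star = np.clip(np.linalg.pinv(W2_star) @ D, 1e-3, 1.0 - 1e-3)

    # 3) Treat YP* as the desired hidden output and solve for W1 through the
    #    inverse activation:  W1 X ~= sigmoid^{-1}(YP*).
    W1_star = inv_sigmoid(YP_star) @ np.linalg.pinv(X)

    return W1_star, W2_star

# Example use with random data (hypothetical shapes, for illustration only).
rng = np.random.default_rng(0)
X = rng.standard_normal((3, 50))          # 3 inputs, 50 patterns
D = rng.random((2, 50))                   # 2 outputs in (0, 1)
W1 = 0.1 * rng.standard_normal((5, 3))    # 5 hidden units
W2 = 0.1 * rng.standard_normal((2, 5))
for _ in range(10):
    W1, W2 = layer_by_layer_step(X, D, W1, W2)

For identification of time-varying systems, the forgetting factors mentioned in the abstract would correspond to weighting recent patterns more heavily in each least-squares solve; that extension is not shown here.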
Keywords :
feedforward neural nets; learning (artificial intelligence); minimisation; multilayer perceptrons; computation time; computer simulation; converging speed; dynamic forgetting factors method; dynamic system identification; fast multilayer neural-network training algorithm; layer output vector; layer-by-layer optimizing procedures; learning algorithm; multilayer feedforward neural network weight adjustment; optimal weight matrix; sum-square-error minimization; weight matrix; Backpropagation algorithms; Computer simulation; Convergence; Feedforward neural networks; Multi-layer neural network; Neural networks; Nonhomogeneous media; Power system modeling; Robustness; System identification;
Journal_Title :
IEEE Transactions on Neural Networks