Title :
A novel weight training methodology for a multi-layer feed-forward neural net
Author :
Chitradurga, Rakesh
Author_Institution :
Genetic Algorithms Lab., Alabama Univ., Tuscaloosa, AL, USA
Abstract :
For very large, high-dimensional classification problems, existing neural network weight training algorithms are known to take a very long time to converge. Addressing the cost of weight training in multilayer continuous feed-forward networks trained with back-propagation is therefore of paramount importance. This paper proposes and implements a novel pseudo-inverse based methodology for training the weights of such a network. It is also found that the steepness of the neurons' thresholding function is a significant contributing factor to the convergence and stability of the net. To study this effect, a Q-learning scheme for "lambda training" is proposed and implemented in parallel with the network. The algorithm is tested extensively on a variety of benchmark problems, including the XOR problem and the encoder/decoder problem, and is found to perform well in comparison with most standard algorithms.
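Illustrative_Sketch :
The abstract does not specify the exact pseudo-inverse update or the Q-learning lambda scheme, so the following is only a minimal sketch of one common pseudo-inverse approach for a single-hidden-layer feed-forward net: fix random input-to-hidden weights, then solve the hidden-to-output weights in closed form via the Moore-Penrose pseudo-inverse. The function names (train_pinv, predict), the hidden-layer size n_hidden, and the steepness parameter lam are illustrative assumptions, not the paper's notation.

import numpy as np

def sigmoid(x, lam=1.0):
    """Logistic thresholding function; `lam` controls its steepness."""
    return 1.0 / (1.0 + np.exp(-lam * x))

def train_pinv(X, T, n_hidden=8, lam=1.0, seed=0):
    """Fit output weights by pseudo-inverse (least squares) for inputs X, targets T."""
    rng = np.random.default_rng(seed)
    n_in = X.shape[1]
    W_in = rng.normal(size=(n_in, n_hidden))   # random input->hidden weights (kept fixed)
    b_in = rng.normal(size=n_hidden)           # hidden-layer biases
    H = sigmoid(X @ W_in + b_in, lam)          # hidden activations
    W_out = np.linalg.pinv(H) @ T              # closed-form least-squares output weights
    return W_in, b_in, W_out

def predict(X, W_in, b_in, W_out, lam=1.0):
    # Linear output layer over sigmoid hidden units; the paper's output
    # handling (e.g. inverting the sigmoid on targets) may differ.
    return sigmoid(X @ W_in + b_in, lam) @ W_out

# XOR, one of the benchmark problems mentioned in the abstract
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)
W_in, b_in, W_out = train_pinv(X, T, n_hidden=8, lam=2.0)
print(np.round(predict(X, W_in, b_in, W_out, lam=2.0), 2))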
Keywords :
backpropagation; convergence; feedforward neural nets; multilayer perceptrons; Q-learning; XOR problem; classification; encoder/decoder problem; lambda training; multilayer continuous feedforward neural nets; thresholding function steepness; weight training methodology; Convergence; Feedforward neural networks; Feedforward systems; Genetic algorithms; Jacobian matrices; Multi-layer neural network; Neural networks; Neurons; Stability; Testing;
Conference_Titel :
IEEE International Conference on Neural Networks, 1996
Conference_Location :
Washington, DC
Print_ISBN :
0-7803-3210-5
DOI :
10.1109/ICNN.1996.548906