DocumentCode :
303203
Title :
Ensemble pruning algorithms for accelerated training
Author :
Mukherjee, Sayandev ; Fine, Terrence L.
Author_Institution :
Sch. of Electr. Eng., Cornell Univ., Ithaca, NY, USA
Volume :
1
fYear :
1996
fDate :
3-6 Jun 1996
Firstpage :
96
Abstract :
The error surface minimized by any feedforward neural network training algorithm is highly irregular, and multiple local minima have been observed empirically. In practice, this means that several random initial points must be chosen and the resulting trained network evaluated for each choice in order to obtain a well-trained network. However, training is computationally expensive, and a limit is often imposed on the number of training cycles allowed per network, making the total number of cycles required to find the best-trained net too large for this brute-force method to be practical. It is therefore desirable to have an algorithm that eliminates “bad” networks during training itself, without expending their full allowance of training cycles, so as to minimize the average total number of training cycles. We present two such algorithms which are easy to implement.
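The abstract does not reproduce the two algorithms themselves. The following is a minimal illustrative sketch of the general idea only: successive-halving style pruning of an ensemble of randomly initialized networks, where the worst-performing half is discarded after each fixed block of training cycles. It assumes scikit-learn's MLPClassifier; the dataset, the halving schedule, and all parameter values are assumptions for illustration, not the authors' method.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Toy data standing in for a real training problem (assumption).
X, y = make_classification(n_samples=600, n_features=20, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

n_nets = 8             # number of random initial points (assumption)
cycles_per_round = 20  # training cycles granted before each pruning decision
classes = np.unique(y_tr)

# One network per random initialization.
nets = [MLPClassifier(hidden_layer_sizes=(10,), random_state=seed)
        for seed in range(n_nets)]

survivors = list(range(n_nets))
while len(survivors) > 1:
    # Spend a fixed block of training cycles on every surviving network.
    for i in survivors:
        for _ in range(cycles_per_round):
            nets[i].partial_fit(X_tr, y_tr, classes=classes)
    # Rank by validation error and discard the worse half, so "bad"
    # initializations never consume their full cycle budget.
    errs = {i: 1.0 - nets[i].score(X_val, y_val) for i in survivors}
    survivors = sorted(survivors, key=errs.get)[: max(1, len(survivors) // 2)]

best = nets[survivors[0]]
print("selected net validation accuracy:", best.score(X_val, y_val))

Halving the ensemble after each round bounds the cycles spent on eventually-discarded networks, which is one plausible way to reduce the average total training cycles the abstract refers to.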
Keywords :
approximation theory; error statistics; feedforward neural nets; learning (artificial intelligence); optimisation; Levenberg-Marquardt approximation; accelerated training; ensemble pruning; error gradient method; feedforward neural network; learning; training errors; Acceleration; Computer architecture; Data mining; Feature extraction; Feedforward neural networks; Iterative algorithms; Minimization methods; Neural networks; Testing; Upper bound
fLanguage :
English
Publisher :
ieee
Conference_Title :
IEEE International Conference on Neural Networks, 1996
Conference_Location :
Washington, DC
Print_ISBN :
0-7803-3210-5
Type :
conf
DOI :
10.1109/ICNN.1996.548873
Filename :
548873