DocumentCode :
303230
Title :
Improving generalization of a well trained network
Author :
Chakraborty, Goutam ; Noguchi, Shoichi
Author_Institution :
Aizu Univ., Fukushima, Japan
Volume :
1
fYear :
1996
fDate :
3-6 Jun 1996
Firstpage :
276
Abstract :
Feedforward neural networks trained on a small set of noisy samples are prone to overtraining and poor generalization. On the other hand, a very small network may fail to learn the task at all, because it is biased by its own architecture. Ensuring that a well-trained network also generalizes well is therefore a long-standing problem. Theoretical results give bounds on the generalization error, but these are worst-case estimates of limited practical use; in practice, cross-validation is used to estimate generalization. We propose a method for constructing a network so as to ensure good generalization even after sufficient training. Simulations show very good results in support of our algorithm, and some theoretical aspects are discussed.
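The abstract notes that, in practice, cross-validation is used to estimate generalization. A minimal sketch of that standard practice follows; it is not the construction method proposed in the paper, and the data, network size, and scikit-learn usage are illustrative assumptions.

```python
# Sketch: estimating the generalization error of a small feedforward
# network with k-fold cross-validation (illustrative, not the paper's method).
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(50, 1))               # small sample set
y = np.sin(3.0 * X[:, 0]) + rng.normal(0.0, 0.1, 50)   # noisy targets

fold_errors = []
for train_idx, test_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    # A deliberately small feedforward network, retrained on each fold.
    net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
    net.fit(X[train_idx], y[train_idx])
    pred = net.predict(X[test_idx])
    fold_errors.append(np.mean((pred - y[test_idx]) ** 2))  # held-out MSE

# The average held-out mean-squared error serves as the generalization estimate.
print(f"cross-validated MSE estimate: {np.mean(fold_errors):.4f}")
```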
Keywords :
feedforward neural nets; generalisation (artificial intelligence); feedforward neural network; generalization error bounds; well-trained network; Artificial neural networks; Feedforward neural networks; Mean square error methods; Neural networks
fLanguage :
English
Publisher :
IEEE
Conference_Title :
IEEE International Conference on Neural Networks, 1996
Conference_Location :
Washington, DC
Print_ISBN :
0-7803-3210-5
Type :
conf
DOI :
10.1109/ICNN.1996.548904
Filename :
548904