DocumentCode :
2017644
Title :
How to design a regularization term for improving generalization
Author :
Nakashima, Akiko ; Ogawa, Hidemitsu
Author_Institution :
Dept. of Comput. Sci., Tokyo Inst. of Technol., Japan
Volume :
1
fYear :
1999
fDate :
1999
Firstpage :
222
Abstract :
In supervised learning, regularization is often used to improve generalization. We give a necessary and sufficient condition for an optimal regularization term, i.e., a regularization operator and parameter. Optimality is discussed based on the projection learning criterion, in which minimization of the generalization error is considered explicitly. We then suggest how to design the optimal regularization term so as to satisfy the obtained condition.
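(The paper works in a Hilbert-space, projection-learning setting; the following is only a loose, generic illustration of what a regularization operator R and parameter alpha mean in the familiar finite-dimensional Tikhonov form. It is a minimal sketch under those assumptions, not the authors' method, and the function name tikhonov_fit is hypothetical.)

import numpy as np

def tikhonov_fit(X, y, R, alpha):
    """Closed-form solution of min_w ||X w - y||^2 + alpha * ||R w||^2."""
    A = X.T @ X + alpha * (R.T @ R)   # normal equations with regularization term
    b = X.T @ y
    return np.linalg.solve(A, b)

# Toy usage: identity regularization operator reduces to ridge regression.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
w_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = X @ w_true + 0.1 * rng.normal(size=50)
R = np.eye(5)     # regularization operator
alpha = 0.1       # regularization parameter
w_hat = tikhonov_fit(X, y, R, alpha)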
Keywords :
feedforward neural nets; generalisation (artificial intelligence); learning (artificial intelligence); multilayer perceptrons; generalization; generalization error minimization; optimal regularization term; projection learning; regularization operator; regularization parameter; supervised learning; three layer feedforward neural network; Additive noise; Bayesian methods; Computer science; Feedforward neural networks; Hilbert space; Inverse problems; Neural networks; Sampling methods; Supervised learning;
fLanguage :
English
Publisher :
ieee
Conference_Titel :
6th International Conference on Neural Information Processing (ICONIP '99), 1999. Proceedings
Conference_Location :
Perth, WA
Print_ISBN :
0-7803-5871-6
Type :
conf
DOI :
10.1109/ICONIP.1999.843990
Filename :
843990