DocumentCode :
1810013
Title :
Optimal use of regularization and cross-validation in neural network modeling
Author :
Chen, Dingding; Hagan, Martin T.
Author_Institution :
Oklahoma State Univ., Stillwater, OK, USA
Volume :
2
fYear :
1999
fDate :
1 July 1999
Firstpage :
1275
Abstract :
This paper proposes a new framework for adapting regularization parameters in order to minimize validation error during the training of feedforward neural networks. A regularization algorithm based on the second derivative of the validation error (SDVR) is derived using the Gauss-Newton approximation to the Hessian. The basic algorithm, which uses incremental updating, allows the regularization parameter α to be recalculated in each training epoch. Two variations of the algorithm, called convergent updating and conditional updating, enable α to be updated over a variable interval according to specified control criteria. Simulations are performed on a noise-corrupted parabolic function with two inputs and a single output. The results demonstrate that the SDVR framework is very promising for adaptive regularization and can be cost-effectively applied to a variety of different problems.
Keywords :
Gaussian processes; Newton method; feedforward neural nets; learning (artificial intelligence); Gauss-Newton approximation; adaptive regularization; conditional updating; convergent updating; cross-validation; feedforward neural networks; incremental updating; learning; regularization algorithm; validation error; Approximation algorithms; Bayesian methods; Decision making; Feedforward neural networks; Intelligent networks; Least squares methods; Neural networks; Newton method; Recursive estimation; Training data;
fLanguage :
English
Publisher :
ieee
Conference_Titel :
International Joint Conference on Neural Networks (IJCNN '99), 1999
Conference_Location :
Washington, DC
ISSN :
1098-7576
Print_ISBN :
0-7803-5529-6
Type :
conf
DOI :
10.1109/IJCNN.1999.831145
Filename :
831145
Link To Document :
https://doi.org/10.1109/IJCNN.1999.831145