DocumentCode :
1579202
Title :
An equivalence between sigmoidal gain scaling and training with noisy (jittered) input data
Author :
Reed, Russell ; Marks, Robert J., II ; Oh, Seho
Author_Institution :
Dept. of Electr. Eng., Univ. of Washington, Seattle, WA, USA
fYear :
1992
Firstpage :
120
Abstract :
Training with additive input noise (jitter) is a commonly used heuristic for improving generalization in layered perceptron artificial neural networks. A drawback of training with jitter, in comparison with the unjittered case, is that many more sample presentations are required to average over the noise and estimate the expected response. The authors demonstrate that the expected effect of jitter can be computed, in certain cases, by a simple scaling of the sigmoid nonlinearities. This means that the benefits of training with noise can be obtained without the computational cost of averaging over many noisy samples. These results provide a justification for gain scaling as a heuristic for improving generalization. Application of the technique to a single-hidden-layer perceptron with a linear output is considered.
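Illustration :
The equivalence described in the abstract can be checked numerically. The sketch below is not from the paper: the weights, input, and noise level are made up, and the scaling constant (pi/8) is the standard probit-style approximation for a logistic sigmoid under Gaussian preactivation noise, E[sigmoid(z)] ~ sigmoid(mu / sqrt(1 + pi*s2/8)) for z ~ N(mu, s2), rather than the paper's own derivation. It compares a Monte Carlo average over jittered inputs with a single evaluation of a gain-scaled sigmoid.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
w = np.array([1.5, -2.0])   # hypothetical weights
x = np.array([0.4, 0.1])    # hypothetical input
sigma = 0.5                 # jitter standard deviation (assumed)

# Monte Carlo estimate of E[sigmoid(w . (x + n))], n ~ N(0, sigma^2 I):
# the "averaging over many noisy samples" the abstract refers to.
noise = rng.normal(0.0, sigma, size=(200_000, 2))
jittered = sigmoid((x + noise) @ w).mean()

# Gain-scaled sigmoid: one pass, no sampling. s2 is the variance of the
# preactivation w . n; the pi/8 constant is the assumed probit-style
# approximation, not the paper's exact scaling.
s2 = sigma**2 * np.dot(w, w)
scaled = sigmoid((w @ x) / np.sqrt(1.0 + np.pi * s2 / 8.0))

print(f"jittered average: {jittered:.4f}  gain-scaled: {scaled:.4f}")

The two numbers agree closely, showing how a deterministic gain reduction can stand in for averaging over jittered presentations.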
Keywords :
feedforward neural nets; generalisation (artificial intelligence); learning (artificial intelligence); additive input noise; generalization; heuristic; jitter; layered perceptron artificial neural networks; linear output; sample presentations; sigmoid nonlinearities; sigmoidal gain scaling; single-hidden-layer perceptron; training; Additive noise; Computer networks; Convolution; Distribution functions; Gaussian noise; Jitter; Neural networks; Noise shaping; Shape; Smoothing methods;
fLanguage :
English
Publisher :
ieee
Conference_Titel :
RNNS/IEEE Symposium on Neuroinformatics and Neurocomputers, 1992
Conference_Location :
Rostov-on-Don
Print_ISBN :
0-7803-0809-3
Type :
conf
DOI :
10.1109/RNNS.1992.268603
Filename :
268603