Title :
A robust learning algorithm
Author :
White, M.W. ; Harper, J.S.
Author_Institution :
Dept. of Electr. & Comput. Eng., North Carolina State Univ., Raleigh, NC
Abstract :
Summary form only given, as follows. A strategy for improved generalization when the training set is small and noisy is considered. A modified back-propagation algorithm sequentially searches a hypothesis space in a highly controlled manner. The search begins with robust linear and near-linear models and then gradually expands to more complex nonlinear models. The algorithm constrains the search process so that only models that are relatively well supported by the evidence (i.e., the training examples) are tested. In contrast, standard back-propagation does not control the search process as tightly. Although the algorithm has been tested only on small problems, its performance is clearly superior to that of standard back-propagation. The proposed strategy can also be applied to other learning algorithms.
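The abstract gives only the high-level strategy, not the exact modification to back-propagation. One hedged reading is a staged search in which the network's hidden activation is kept near-linear at first and made progressively more nonlinear, so that early training explores robust (near-)linear hypotheses before more complex models are admitted. The sketch below is a minimal illustration of that interpretation, not the authors' algorithm: the gain `beta` on a `tanh` hidden layer is raised in stages, and each stage runs plain batch gradient descent. The toy data, network size, learning rate, and `beta` schedule are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Small, noisy toy training set (the paper's setting of interest):
# a near-linear target with additive noise.
X = rng.uniform(-1, 1, size=(20, 2))
y = (0.8 * X[:, 0] - 0.3 * X[:, 1] + 0.1 * rng.normal(size=20)).reshape(-1, 1)

# Tiny MLP, 2 -> 4 -> 1. The hidden activation tanh(beta * a) is
# near-linear for small beta, so low-beta stages search robust
# (near-)linear models; raising beta expands the hypothesis space.
W1 = 0.1 * rng.normal(size=(2, 4)); b1 = np.zeros(4)
W2 = 0.1 * rng.normal(size=(4, 1)); b2 = np.zeros(1)

def forward(X, beta):
    a = X @ W1 + b1
    h = np.tanh(beta * a)
    return h, h @ W2 + b2

lr = 0.05
for beta in (0.1, 0.5, 1.0, 2.0):   # staged expansion of model complexity (assumed schedule)
    for _ in range(500):            # batch gradient descent on squared error
        h, out = forward(X, beta)
        err = out - y                                  # dE/d(out)
        gW2 = h.T @ err / len(X); gb2 = err.mean(0)
        dh = err @ W2.T * beta * (1 - h ** 2)          # back-prop through tanh(beta * a)
        gW1 = X.T @ dh / len(X);  gb1 = dh.mean(0)
        W1 -= lr * gW1; b1 -= lr * gb1
        W2 -= lr * gW2; b2 -= lr * gb2

mse = float(((forward(X, 2.0)[1] - y) ** 2).mean())
print(f"final training MSE: {mse:.4f}")
```

Because the target here is essentially linear, the low-`beta` stages already fit it well, which is the behavior the abstract attributes to starting from robust near-linear models.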
Keywords :
learning systems; neural nets; search problems; back-propagation algorithm; generalization; hypothesis space; near-linear models; nonlinear models; robust learning algorithm; sequential searching; Backpropagation algorithms; Robustness; Testing;
Conference_Titel :
IJCNN-91-Seattle International Joint Conference on Neural Networks, 1991
Conference_Location :
Seattle, WA
Print_ISBN :
0-7803-0164-1
DOI :
10.1109/IJCNN.1991.155613