DocumentCode :
1843148
Title :
Subgoal chaining and the local minimum problem
Author :
Lewis, Jonathan P. ; Weir, Michael K.
Author_Institution :
Dept. of Comput. Sci., St. Andrews Univ., UK
Volume :
3
fYear :
1999
fDate :
1999
Firstpage :
1844
Abstract :
It is well known that gradient descent on a fixed error surface may result in poor travel, with the search becoming stuck in local minima and other surface features. Subgoal chaining in supervised learning is a method for improving travel for neural networks by directing local variation in the surface during training. This paper shows that linear subgoal chains, such as those used in ERA, are not sufficient to overcome the local minimum problem, and examines nonlinear subgoal chains as a possible alternative.
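The sketch below is an illustrative, generic rendering of a linear subgoal chain combined with gradient descent, not the authors' ERA implementation: intermediate targets are placed on a straight line between the network's initial outputs and the true targets, and the weights are trained against each subgoal in turn. The network size, learning rate, number of subgoals, and choice of XOR as the task are all assumptions made for the example.

import numpy as np

rng = np.random.default_rng(0)

# Toy data: XOR, a classic task with known local-minimum difficulties.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer, sigmoid activations (illustrative architecture).
W1 = rng.normal(scale=0.5, size=(2, 3))
b1 = np.zeros(3)
W2 = rng.normal(scale=0.5, size=(3, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X):
    h = sigmoid(X @ W1 + b1)
    y = sigmoid(h @ W2 + b2)
    return h, y

lr = 0.5
n_subgoals = 10           # length of the subgoal chain (assumption)
steps_per_subgoal = 200   # gradient-descent steps per subgoal (assumption)

_, y0 = forward(X)        # initial outputs define the chain's starting point

for k in range(1, n_subgoals + 1):
    # Linear subgoal: interpolate between initial outputs and true targets.
    subgoal = y0 + (k / n_subgoals) * (T - y0)
    for _ in range(steps_per_subgoal):
        h, y = forward(X)
        # Backpropagate squared error against the current subgoal.
        dy = (y - subgoal) * y * (1 - y)
        dh = (dy @ W2.T) * h * (1 - h)
        W2 -= lr * h.T @ dy
        b2 -= lr * dy.sum(axis=0)
        W1 -= lr * X.T @ dh
        b1 -= lr * dh.sum(axis=0)

_, y_final = forward(X)
print(np.round(y_final, 3))

Because each subgoal lies on a straight line in output space, the chain can still route the search through the same troublesome regions of weight space; this is the limitation of linear chains that the paper examines before turning to nonlinear alternatives.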
Keywords :
approximation theory; convergence of numerical methods; feedforward neural nets; gradient methods; learning (artificial intelligence); optimisation; convergence; expanded range approximation; gradient descent method; local minima; multilayer neural networks; subgoal chains; supervised learning; Computer science; Convergence; Feedforward neural networks; Least squares approximation; Neural networks; Robustness; State-space methods; Supervised learning; Terminology;
fLanguage :
English
Publisher :
IEEE
Conference_Titel :
International Joint Conference on Neural Networks (IJCNN '99), 1999
Conference_Location :
Washington, DC
ISSN :
1098-7576
Print_ISBN :
0-7803-5529-6
Type :
conf
DOI :
10.1109/IJCNN.1999.832660
Filename :
832660