Title :
Subgoal chaining and the local minimum problem
Author :
Lewis, Jonathan P.; Weir, Michael K.
Author_Institution :
Dept. of Comput. Sci., St. Andrews Univ., UK
Abstract :
It is well known that performing gradient descent on fixed error surfaces may result in poor travel by getting stuck in local minima and other surface features. Subgoal chaining in supervised learning is a method for improving travel for neural networks by directing local variation in the surface during training. This paper shows that linear subgoal chains, such as those used in expanded range approximation (ERA), are not sufficient to overcome the local minimum problem, and it examines nonlinear subgoal chains as a possible alternative.
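The sketch below illustrates the general idea of a linear subgoal chain as described in the abstract: instead of descending directly towards the final targets, training proceeds through a sequence of intermediate targets that interpolate from the network's initial output towards the goal. The toy data, single tanh unit, interpolation scheme, and hyperparameters are illustrative assumptions, not the paper's exact ERA formulation.

import numpy as np

rng = np.random.default_rng(0)

# Toy regression data (assumed for illustration only).
X = rng.normal(size=(32, 4))
Y = rng.normal(size=(32, 1))              # final goal: the true targets

W = rng.normal(scale=0.1, size=(4, 1))    # weights of a simple one-layer network

def forward(W, X):
    return np.tanh(X @ W)                 # single nonlinear unit

def grad(W, X, T):
    """Gradient of 0.5 * ||forward(W, X) - T||^2 with respect to W."""
    out = forward(W, X)
    delta = (out - T) * (1.0 - out ** 2)  # chain rule through tanh
    return X.T @ delta / len(X)

# Linear subgoal chain: intermediate targets interpolate from the network's
# initial output towards the final targets Y (roughly the ERA idea).
Y0 = forward(W, X)
n_subgoals, steps_per_subgoal, lr = 10, 50, 0.5

for k in range(1, n_subgoals + 1):
    alpha = k / n_subgoals
    subgoal = (1.0 - alpha) * Y0 + alpha * Y   # current intermediate target
    for _ in range(steps_per_subgoal):
        W -= lr * grad(W, X, subgoal)          # descend towards the subgoal

print("final MSE:", float(np.mean((forward(W, X) - Y) ** 2)))

Because each subgoal lies close to the network's current output, each descent step works on a locally reshaped error surface; the paper's argument is that when the chain is purely linear, this is still not enough to escape local minima in general.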
Keywords :
approximation theory; convergence of numerical methods; feedforward neural nets; gradient methods; learning (artificial intelligence); optimisation; convergence; expanded range approximation; gradient descent method; local minima; multilayer neural networks; subgoal chains; supervised learning; Computer science; Convergence; Feedforward neural networks; Least squares approximation; Neural networks; Robustness; State-space methods; Supervised learning; Terminology
Conference_Title :
International Joint Conference on Neural Networks (IJCNN '99), 1999
Conference_Location :
Washington, DC
Print_ISBN :
0-7803-5529-6
DOI :
10.1109/IJCNN.1999.832660