• DocumentCode
    295855
  • Title
    Cascade correlation: derivation of a more numerically stable update rule
  • Author
    John, George H.
  • Author_Institution
    Dept. of Comput. Sci., Stanford Univ., CA, USA
  • Volume
    2
  • fYear
    1995
  • fDate
    Nov/Dec 1995
  • Firstpage
    1126
  • Abstract
    Discusses the weight update rule in the cascade correlation neural net learning algorithm. The weight update rule implements gradient descent optimization of the correlation between a new hidden unit's output and the previous network's error. The author presents a derivation of the gradient of the correlation function and shows that the resulting weight update rule yields slightly faster training. The author also shows that the new rule is mathematically equivalent to the one presented in the original cascade correlation paper, and discusses the numerical issues underlying the difference in performance. Since a derivation of the cascade correlation weight update rule had not previously been published, this paper should be useful to those who wish to understand the rule. (A sketch of the correlation objective and its gradient appears after this record.)
  • Keywords
    learning (artificial intelligence); neural nets; cascade correlation neural net; gradient descent optimization; learning algorithm; numerically stable update rule; weight update rule; Computer science; Equations; Error correction; Least squares methods; Network topology; Neural networks; Newton method; Optimization methods; Robots; Training data
  • fLanguage
    English
  • Publisher
    IEEE
  • Conference_Title
    IEEE International Conference on Neural Networks, 1995. Proceedings
  • Conference_Location
    Perth, WA, Australia
  • Print_ISBN
    0-7803-2768-3
  • Type
    conf
  • DOI
    10.1109/ICNN.1995.487581
  • Filename
    487581
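
    Note: as background to the abstract above, here is a minimal sketch of the candidate-unit training step in cascade correlation, based on the correlation measure S from the original cascade correlation paper (Fahlman and Lebiere). It is not taken from John's derivation; the tanh activation, array shapes, learning rate, and all names below are illustrative assumptions. The candidate unit maximizes S = sum_o |sum_p (V_p - Vbar)(E_po - Ebar_o)| by gradient ascent on its incoming weights:

    import numpy as np

    def candidate_update(inputs, errors, w, lr=0.1):
        """One gradient-ascent step on the cascade correlation measure S.

        inputs : (P, I) array, activations feeding the candidate unit
        errors : (P, O) array, residual errors of the existing (frozen) network
        w      : (I,) array, the candidate unit's incoming weights
        """
        net = inputs @ w                    # net input per pattern, shape (P,)
        v = np.tanh(net)                    # candidate output V_p (tanh assumed)
        fprime = 1.0 - v ** 2               # derivative of tanh at net_p

        v_c = v - v.mean()                  # V_p - V_bar
        e_c = errors - errors.mean(axis=0)  # E_{p,o} - E_bar_o

        cov = v_c @ e_c                     # per-output covariance, shape (O,)
        S = np.abs(cov).sum()               # correlation measure S

        sigma = np.sign(cov)                # sign of each output's covariance
        # dS/dw_i = sum_{p,o} sigma_o * (E_{p,o} - E_bar_o) * f'(net_p) * I_{p,i}
        grad = inputs.T @ (fprime * (e_c @ sigma))
        return w + lr * grad, S             # ascend the correlation gradient

    In exact arithmetic this is the rule the abstract calls mathematically equivalent to John's; the paper's point is that an algebraically equivalent rearrangement of the update can behave better in floating-point arithmetic.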