• DocumentCode
    1474913
  • Title
    Improved Computation for Levenberg–Marquardt Training
  • Author
    Wilamowski, Bogdan M.; Yu, Hao
  • Author_Institution
    Dept. of Electr. & Comput. Eng., Auburn Univ., Auburn, AL, USA
  • Volume
    21
  • Issue
    6
  • fYear
    2010
  • fDate
    June 1, 2010
  • Firstpage
    930
  • Lastpage
    937
  • Abstract
    The improved computation presented in this paper aims to optimize the neural network learning process using the Levenberg-Marquardt (LM) algorithm. The quasi-Hessian matrix and gradient vector are computed directly, without Jacobian matrix multiplication and storage, which solves the memory limitation problem of LM training. Because the quasi-Hessian matrix is symmetric, only the elements in its upper (or lower) triangular array need to be calculated. Training speed is therefore improved significantly, not only because of the smaller array stored in memory, but also because of the reduced number of operations in the quasi-Hessian matrix calculation. The improved memory and time efficiencies are especially pronounced for training with large-sized patterns. (A minimal illustrative sketch of this accumulation is given after this record.)
  • Keywords
    Hessian matrices; gradient methods; learning (artificial intelligence); neural nets; Levenberg-Marquardt algorithm; Quasi-Hessian matrix; gradient vector; neural networks learning process; Levenberg–Marquardt (LM) algorithm; neural network training; Algorithms; Computer Simulation; Humans; Learning; Neural Networks (Computer); Pattern Recognition, Automated; Time Factors;
  • fLanguage
    English
  • Journal_Title
    IEEE Transactions on Neural Networks
  • Publisher
    IEEE
  • ISSN
    1045-9227
  • Type
    jour
  • DOI
    10.1109/TNN.2010.2045657
  • Filename
    5451114
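
Illustrative sketch (not from the paper): the abstract describes updating the quasi-Hessian matrix and gradient vector directly from one Jacobian row at a time, so the full Jacobian is never multiplied out or stored, and, by symmetry, only the upper triangular part of the quasi-Hessian is accumulated. The Python/NumPy sketch below shows that accumulation pattern under those assumptions; the routine `jacobian_row` is a hypothetical placeholder for whatever computes the error and Jacobian row of a single pattern/output, and is not part of the paper.

```python
import numpy as np

def accumulate_quasi_hessian(patterns, weights, jacobian_row):
    """Build Q ~ J^T J and g = J^T e pattern by pattern, without storing J."""
    n = weights.size
    Q = np.zeros((n, n))   # quasi-Hessian matrix
    g = np.zeros(n)        # gradient vector
    for p in patterns:
        # error e_p and Jacobian row j_p for this pattern/output (placeholder routine)
        e_p, j_p = jacobian_row(p, weights)
        # symmetry: accumulate only the upper triangular part of j_p^T j_p
        for i in range(n):
            Q[i, i:] += j_p[i] * j_p[i:]
        g += j_p * e_p
    # mirror the upper triangle to recover the full symmetric quasi-Hessian
    Q = Q + np.triu(Q, 1).T
    return Q, g

# One LM weight update (with e defined as output minus target, mu the damping factor):
#   delta = np.linalg.solve(Q + mu * np.eye(weights.size), g)
#   weights = weights - delta
```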