• DocumentCode
    2400384
  • Title
    Training recurrent networks
  • Author
    Pedersen, Morten With

  • Author_Institution
    Dept. of Mathematical Modelling, Technical University of Denmark, Lyngby, Denmark
  • fYear
    1997
  • fDate
    24-26 Sep 1997
  • Firstpage
    355
  • Lastpage
    364
  • Abstract
    Training recurrent networks is generally believed to be a difficult task. Excessive training times and lack of convergence to an acceptable solution are frequently reported. In this paper we seek to explain the reason for this from a numerical point of view and to show how to avoid problems when training. In particular, we investigate ill-conditioning and the need for and effect of regularization, and we illustrate the superiority of second-order methods for training.
    (A minimal code sketch illustrating second-order training with regularization follows this record.)
  • Keywords
    learning (artificial intelligence); recurrent neural nets; ill-conditioning; recurrent neural network training; regularization; second-order methods; Computer networks; Iron; Laser feedback; Least squares methods; Mathematical model; Newton method; Output feedback; Recurrent neural networks; Recursive estimation; Stability
  • fLanguage
    English
  • Publisher
    IEEE
  • Conference_Titel
    Neural Networks for Signal Processing VII. Proceedings of the 1997 IEEE Workshop
  • Conference_Location
    Amelia Island, FL
  • ISSN
    1089-3555
  • Print_ISBN
    0-7803-4256-9
  • Type
    conf
  • DOI
    10.1109/NNSP.1997.622416
  • Filename
    622416
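
The abstract refers to regularization and second-order optimization for recurrent network training but the record contains no code. The following is a minimal, illustrative sketch only, not the paper's implementation: it trains a small recurrent network on a toy next-value prediction task using a quasi-Newton optimizer (L-BFGS, standing in for the "second-order methods" of the abstract) together with an explicit L2 penalty (standing in for "regularization"). The task, network size, optimizer settings, and the weight_decay value are arbitrary assumptions made for the example.

    # Illustrative sketch (not the paper's code): second-order training of a
    # small RNN with L2 regularization, using PyTorch's L-BFGS optimizer.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # Toy task (assumed for illustration): predict the next value of a noisy
    # sine wave from the previous value.
    t = torch.linspace(0, 20, 400)
    x = torch.sin(t) + 0.05 * torch.randn_like(t)
    inputs = x[:-1].reshape(1, -1, 1)   # (batch, time, features)
    targets = x[1:].reshape(1, -1, 1)

    class TinyRNN(nn.Module):
        """Hypothetical minimal recurrent network used only for this sketch."""
        def __init__(self, hidden=8):
            super().__init__()
            self.rnn = nn.RNN(1, hidden, batch_first=True)
            self.out = nn.Linear(hidden, 1)
        def forward(self, seq):
            h, _ = self.rnn(seq)
            return self.out(h)

    model = TinyRNN()
    mse = nn.MSELoss()
    weight_decay = 1e-4  # L2 regularization strength (illustrative value)

    # L-BFGS is a quasi-Newton method: it builds a curvature estimate and
    # typically needs far fewer iterations than plain gradient descent on
    # ill-conditioned problems such as recurrent-network training.
    optimizer = torch.optim.LBFGS(model.parameters(), lr=0.5, max_iter=20,
                                  line_search_fn="strong_wolfe")

    def closure():
        optimizer.zero_grad()
        pred = model(inputs)
        # Explicit L2 penalty on all parameters (the regularization term).
        l2 = sum((p ** 2).sum() for p in model.parameters())
        loss = mse(pred, targets) + weight_decay * l2
        loss.backward()
        return loss

    for step in range(10):
        loss = optimizer.step(closure)
        if step % 2 == 0:
            print(f"step {step:2d}  regularized loss = {loss.item():.5f}")

Replacing the L-BFGS optimizer with plain stochastic gradient descent in the same script is a simple way to observe the slow, ill-conditioned convergence behaviour that the paper attributes to first-order training of recurrent networks.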