• DocumentCode
    2632216
  • Title
    Maximizing the Zero-Error Density for RTRL
  • Author
    Alexandre, Luís A.
  • Author_Institution
    Dept. of Informatics, Univ. of Beira Interior, Covilhã
  • fYear
    2008
  • fDate
    16-19 Dec. 2008
  • Firstpage
    80
  • Lastpage
    84
  • Abstract
    A learning principle called Zero-Error Density Maximization (Z-EDM) was recently introduced in the framework of MLP backpropagation. In this paper we present the adaptation of this principle to online learning in recurrent neural networks, more precisely to the Real-Time Recurrent Learning (RTRL) approach. We show how to modify the RTRL algorithm so that it learns with the Z-EDM criterion, using a sliding time window of previous error values. Experiments show that this approach improves both the convergence rate of the RNNs and the prediction performance in time series forecasting.
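    As an illustrative aside to the abstract, a minimal sketch (in Python/NumPy) of the Z-EDM criterion as it is commonly formulated: maximize a Gaussian-kernel Parzen estimate of the error density at zero, computed here over a sliding window of recent error values as the abstract describes. The function names, the bandwidth h, and the sample window are assumptions for illustration, not taken from the paper; in an RTRL update these per-error factors would stand in for the raw error terms that multiply the network sensitivities.

        import numpy as np

        def zedm_objective(errors, h=1.0):
            """Parzen estimate of the error density at zero over a sliding window.

            errors : 1-D array with the most recent error values e(t-W+1) ... e(t)
            h      : Gaussian kernel bandwidth (smoothing parameter)
            Z-EDM maximizes this quantity instead of minimizing the MSE.
            """
            w = errors.size
            kernels = np.exp(-errors**2 / (2.0 * h**2)) / (np.sqrt(2.0 * np.pi) * h)
            return kernels.sum() / w

        def zedm_error_factors(errors, h=1.0):
            """Derivative of the Z-EDM objective with respect to each stored error.

            For a Gaussian kernel, d/de G_h(e) = -(e / h**2) * G_h(e), so errors far
            from zero contribute little to the update, which is the source of the
            criterion's robustness.
            """
            w = errors.size
            kernels = np.exp(-errors**2 / (2.0 * h**2)) / (np.sqrt(2.0 * np.pi) * h)
            return -(errors / h**2) * kernels / w

        # Illustrative window of the last W online errors (hypothetical values).
        window = np.array([0.8, -0.2, 0.05, 0.4])
        print(zedm_objective(window, h=0.5))       # density at zero (to maximize)
        print(zedm_error_factors(window, h=0.5))   # factors for a gradient step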
  • Keywords
    convergence; learning (artificial intelligence); optimisation; recurrent neural nets; time series; MLP backpropagation; convergence rate; online learning; real time recurrent learning; recurrent neural networks; sliding time window; time series forecast; zero-error density maximization; Backpropagation algorithms; Convergence; Entropy; Error correction; Informatics; Kernel; Machine learning; Neural networks; Random variables; Recurrent neural networks;
  • fLanguage
    English
  • Publisher
    IEEE
  • Conference_Titel
    2008 IEEE International Symposium on Signal Processing and Information Technology (ISSPIT 2008)
  • Conference_Location
    Sarajevo
  • Print_ISBN
    978-1-4244-3554-8
  • Electronic_ISBN
    978-1-4244-3555-5
  • Type
    conf
  • DOI
    10.1109/ISSPIT.2008.4775679
  • Filename
    4775679