• DocumentCode
    2875433
  • Title
    Adaptation Learning Rate Algorithm of Feed-Forward Neural Networks
  • Author
    Chao Yang; Ruzhi Xu
  • Author_Institution
    Dept. of Comput. Inf. Eng., Shandong Univ. of Finance, Jinan, China
  • fYear
    2009
  • fDate
    19-20 Dec. 2009
  • Firstpage
    1
  • Lastpage
    3
  • Abstract
    The BP algorithm determines how to adjust the hidden-neuron weights of a multilayer feed-forward neural network. It takes the mean square error criterion as the cost function, applies gradient descent to minimize that cost, and propagates the error signals backward to tune the weights. Standard gradient descent uses a fixed learning rate, which controls how much the weights change at each step: a larger learning rate speeds up learning but may cause oscillation, while a smaller learning rate keeps the learning process stable but slows it down. In this paper, we propose a new adaptive learning rate algorithm that decreases the learning rate as the error decreases, which accelerates learning while keeping the learning process steady. The experimental results show that the improved algorithm is very effective.
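
    The abstract only states that the learning rate is decreased as the error decreases; the exact schedule is not given. The following is therefore a minimal, hypothetical sketch of backpropagation with a mean square error cost and an assumed error-proportional learning rate, eta = eta0 * (E / E0). The XOR task, the network size, eta0, and the epoch count are illustrative assumptions, not details from the paper.

# Hypothetical sketch: backpropagation with an error-proportional learning rate.
# The schedule eta = eta0 * (E / E0) is an assumption made for illustration only.
import numpy as np

rng = np.random.default_rng(0)

# XOR data (illustrative task, not from the paper)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer, as in a basic multilayer feed-forward network (assumed size)
W1 = rng.normal(scale=0.5, size=(2, 4))
b1 = np.zeros(4)
W2 = rng.normal(scale=0.5, size=(4, 1))
b2 = np.zeros(1)

eta0 = 2.0   # base learning rate (assumed value)
E0 = None    # initial error, used to scale the learning rate

for epoch in range(20000):
    # Forward pass
    H = sigmoid(X @ W1 + b1)
    Y = sigmoid(H @ W2 + b2)

    # Mean square error cost function
    E = 0.5 * np.mean(np.sum((Y - T) ** 2, axis=1))
    if E0 is None:
        E0 = E

    # Adaptive learning rate: shrinks as the error shrinks (assumed form)
    eta = eta0 * (E / E0)

    # Backward pass: gradient of the MSE cost through the sigmoid layers
    dY = (Y - T) * Y * (1 - Y) / len(X)
    dW2 = H.T @ dY
    db2 = dY.sum(axis=0)
    dH = (dY @ W2.T) * H * (1 - H)
    dW1 = X.T @ dH
    db1 = dH.sum(axis=0)

    # Gradient descent weight updates
    W2 -= eta * dW2; b2 -= eta * db2
    W1 -= eta * dW1; b1 -= eta * db1

print("final MSE:", E)

    With this kind of schedule, large early errors give large steps (fast initial learning), and the steps shrink automatically as the error falls, which is the stability/speed trade-off the abstract describes.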
  • Keywords
    backpropagation; feedforward neural nets; mean square error methods; BP algorithm; adaptation learning rate algorithm; gradient descent method; mean square error criterion; multilayer feedforward neural networks; Acceleration; Convergence; Cost function; Feedforward neural networks; Feedforward systems; Mean square error methods; Multi-layer neural network; Neural networks; Neurons; Signal processing algorithms;
  • fLanguage
    English
  • Publisher
    IEEE
  • Conference_Title
    2009 International Conference on Information Engineering and Computer Science (ICIECS 2009)
  • Conference_Location
    Wuhan
  • Print_ISBN
    978-1-4244-4994-1
  • Type
    conf
  • DOI
    10.1109/ICIECS.2009.5366919
  • Filename
    5366919