• DocumentCode
    854969
  • Title
    Parallel nonlinear optimization techniques for training neural networks
  • Author
    Phua, Paul K. H.; Ming, Daohua

  • Author_Institution
    Dept. of Comput. Sci., Nat. Univ. of Singapore, Singapore
  • Volume
    14
  • Issue
    6
  • fYear
    2003
  • Firstpage
    1460
  • Lastpage
    1468
  • Abstract
    In this paper, we propose the use of parallel quasi-Newton (QN) optimization techniques to improve the rate of convergence of the training process for neural networks. The parallel algorithms are developed using the self-scaling quasi-Newton (SSQN) methods. At the beginning of each iteration, a set of parallel search directions is generated, each selectively chosen from a representative class of QN methods. Inexact line searches are then carried out to estimate the minimum point along each search direction. The proposed parallel algorithms are tested over a set of nine benchmark problems. Computational results show that the proposed algorithms outperform existing methods evaluated over the same set of test problems. (A minimal code sketch of the iteration scheme appears after this record.)
  • Keywords
    Newton method; backpropagation; convergence of numerical methods; feedforward neural nets; nonlinear systems; optimisation; parallel algorithms; convergence rate; neural network training process; parallel nonlinear optimization technique; quasi-Newton optimization technique; self-scaling quasi-Newton methods; training algorithm; Backpropagation algorithms; Benchmark testing; Character generation; Convergence; Feedforward neural networks; Least squares methods; Neural networks; Neurons; Optimization methods; Parallel algorithms
  • fLanguage
    English
  • Journal_Title
    IEEE Transactions on Neural Networks
  • Publisher
    IEEE
  • ISSN
    1045-9227
  • Type
    jour
  • DOI
    10.1109/TNN.2003.820670
  • Filename
    1257409
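  • Code_Sketch
    The abstract describes the iteration scheme in prose only. The following is a minimal Python sketch of that scheme, assuming a self-scaling BFGS update with the Oren-Luenberger scaling factor and Armijo backtracking as the inexact line search. The function name parallel_ssqn_sketch, the scales tuple, and the serial loop over candidate directions are illustrative assumptions, not the authors' implementation; in the paper, each line search would run on a separate processor.

    import numpy as np

    def parallel_ssqn_sketch(f, grad, x0, scales=(0.5, 1.0, 2.0),
                             max_iter=100, tol=1e-6):
        # Hypothetical sketch: `scales` stands in for the paper's
        # representative class of QN methods, one search direction per entry.
        x = np.asarray(x0, dtype=float)
        n = x.size
        H = np.eye(n)                    # inverse-Hessian approximation
        g = grad(x)
        for _ in range(max_iter):
            if np.linalg.norm(g) < tol:
                break
            # Generate a set of parallel search directions.
            directions = [-(sc * H) @ g for sc in scales]
            # Inexact (Armijo backtracking) line search along each direction;
            # the trials are independent, so they could run in parallel.
            candidates = []
            fx = f(x)
            for d in directions:
                alpha = 1.0
                while (f(x + alpha * d) > fx + 1e-4 * alpha * (g @ d)
                       and alpha > 1e-12):
                    alpha *= 0.5
                x_trial = x + alpha * d
                candidates.append((f(x_trial), x_trial))
            # Keep the best point found among all search directions.
            _, x_new = min(candidates, key=lambda c: c[0])
            g_new = grad(x_new)
            s, y = x_new - x, g_new - g
            sy = s @ y
            if sy > 1e-12:               # curvature condition for BFGS
                tau = sy / (y @ H @ y)   # self-scaling factor
                rho = 1.0 / sy
                V = np.eye(n) - rho * np.outer(s, y)
                H = V @ (tau * H) @ V.T + rho * np.outer(s, s)
            x, g = x_new, g_new
        return x

    # Usage: minimize the two-dimensional Rosenbrock function.
    f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
    grad = lambda x: np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
                               200 * (x[1] - x[0]**2)])
    print(parallel_ssqn_sketch(f, grad, np.array([-1.2, 1.0])))

    The scaling factor tau rescales the inverse-Hessian approximation before the BFGS update, which is the usual motivation for self-scaling QN methods: it helps keep the approximation well conditioned across iterations.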