• DocumentCode
    2866092
  • Title
    Neural network architectures and learning
  • Author
    Wilamowski, Bogdan M.
  • Author_Institution
    Auburn Univ., AL, USA
  • Volume
    1
  • fYear
    2003
  • fDate
    10-12 Dec. 2003
  • Abstract
    Various learning methods for neural networks, both supervised and unsupervised, are presented and illustrated with examples. A general learning rule, expressed as a function of the incoming signals, is discussed. Other learning rules, such as Hebbian learning, perceptron learning, LMS (least mean square) learning, delta learning, WTA (winner-take-all) learning, and PCA (principal component analysis), are presented as derivations of the general learning rule. Architecture-specific learning algorithms are described for cascade correlation networks, Sarajedini and Hecht-Nielsen networks, functional link networks, polynomial networks, counterpropagation networks, and RBF (radial basis function) networks. Dedicated learning algorithms for on-chip neural network training are also evaluated. The tutorial focuses on practical methods such as Quickprop, RPROP, Back Percolation, and Delta-bar-Delta. The main causes of convergence difficulties, such as local minima and flat-spot problems, are analyzed. More advanced gradient-based methods, including pseudoinversion learning, conjugate gradient, Newton, and LM (Levenberg-Marquardt) algorithms, are illustrated with examples.
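    Not part of the record itself: as a hedged illustration of one of the rules the abstract lists, the following is a minimal NumPy sketch of LMS (least mean square, Widrow-Hoff) learning for a single linear neuron. The function name, learning rate, and data are illustrative assumptions, not taken from the paper.

    ```python
    import numpy as np

    def lms_train(X, d, lr=0.05, epochs=200):
        """Train a single linear neuron with the LMS (Widrow-Hoff) rule.

        X : (n_samples, n_features) input patterns
        d : (n_samples,) desired outputs
        Each sample updates the weights by stochastic gradient descent
        on the instantaneous squared error: w += lr * (d_i - w.x_i) * x_i
        """
        w = np.zeros(X.shape[1])
        for _ in range(epochs):
            for x_i, d_i in zip(X, d):
                err = d_i - w @ x_i    # instantaneous output error
                w += lr * err * x_i    # LMS weight update
        return w

    # Recover the linear mapping d = 2*x1 - x2 from noiseless samples
    # (illustrative data, not from the paper).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(50, 2))
    d = 2 * X[:, 0] - X[:, 1]
    w = lms_train(X, d)
    ```

    With noiseless linear data the rule converges to the generating weights; with noisy data it converges in the mean to the least-squares solution, which is why the abstract groups it with delta learning.
    
    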
  • Keywords
    Hebbian learning; conjugate gradient methods; least mean squares methods; neural net architecture; perceptrons; principal component analysis; radial basis function networks; unsupervised learning; Hecht-Nielsen networks; Levenberg-Marquardt algorithm; Sarajedini networks; cascade correlation networks; chip neural network training; counterpropagation networks; dedicated learning algorithms; delta learning; functional link networks; least mean square learning; perceptron learning; polynomial networks; pseudo inversion learning; unsupervised learning methods; winner take all learning; Computer networks; Hebbian theory; Input variables; Multi-stage noise shaping; Multidimensional systems; Neural networks; Neurons; Shape; Signal processing; Spirals
  • fLanguage
    English
  • Publisher
    ieee
  • Conference_Titel
    Industrial Technology, 2003 IEEE International Conference on
  • Print_ISBN
    0-7803-7852-0
  • Type
    conf
  • DOI
    10.1109/ICIT.2003.1290197
  • Filename
    1290197