  • DocumentCode
    288330
  • Title
    A learning algorithm for multi-layer perceptrons with hard-limiting threshold units
  • Author
    Goodman, Rodney M.; Zeng, Zheng
  • Author_Institution
    Dept. of Electr. Eng., California Inst. of Technol., Pasadena, CA, USA
  • Volume
    1
  • fYear
    1994
  • fDate
    27 Jun-2 Jul 1994
  • Firstpage
    193
  • Abstract
    We propose a novel learning algorithm for training networks of multilayer linear-threshold, or hard-limiting, units. The learning scheme is based on standard backpropagation but uses “pseudo-gradient” descent: the gradient of a sigmoid function serves as a heuristic hint in place of the gradient of the hard-limiting function, which is zero almost everywhere. We provide a justification that, for networks with one hidden layer, the pseudo-gradient always points in the correct downhill direction on the error surface. The advantages of such networks are that the internal representations in their hidden layers are clearly interpretable, so well-defined classification rules can be easily extracted; that the calculations required for classification after training are very simple; and that they are easily implemented in hardware. Comparative experimental results on several benchmark problems, using both conventional backpropagation networks and our learning scheme for multilayer perceptrons, are presented and analyzed.
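    As a rough illustration of the pseudo-gradient idea described in the abstract, the following sketch (not the paper's exact algorithm; the network shapes, helper names, and learning rate are illustrative assumptions) runs the forward pass with hard-limiting units and substitutes the sigmoid's derivative during the backward pass:

        # Hedged sketch of pseudo-gradient descent for a one-hidden-layer
        # network with hard-limiting threshold units. All details here are
        # illustrative assumptions, not taken from the paper.
        import numpy as np

        def step(x):
            # Hard-limiting threshold unit used in the forward pass.
            return (x > 0).astype(float)

        def sigmoid(x):
            return 1.0 / (1.0 + np.exp(-x))

        def train_step(W1, W2, x, t, lr=0.1):
            # Forward pass: both layers use hard-limiting units.
            a1 = W1 @ x
            h = step(a1)
            a2 = W2 @ h
            y = step(a2)
            # Backward pass: the step function's derivative is zero almost
            # everywhere, so the sigmoid's derivative at the same
            # pre-activation stands in as a heuristic pseudo-gradient.
            d2 = (y - t) * sigmoid(a2) * (1.0 - sigmoid(a2))
            d1 = (W2.T @ d2) * sigmoid(a1) * (1.0 - sigmoid(a1))
            W2 -= lr * np.outer(d2, h)
            W1 -= lr * np.outer(d1, x)
            return W1, W2

    The one-hidden-layer case is the setting in which the paper argues the pseudo-gradient always points downhill on the error surface.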
  • Keywords
    backpropagation; formal logic; multilayer perceptrons; hard-limiting threshold units; hidden layer; learning algorithm; pseudo-gradient descent method; sigmoid function; Ear; Encoding; Equations; Hardware; Joining processes; Learning systems; Logic
  • fLanguage
    English
  • Publisher
    IEEE
  • Conference_Title
    1994 IEEE International Conference on Neural Networks (IEEE World Congress on Computational Intelligence)
  • Conference_Location
    Orlando, FL
  • Print_ISBN
    0-7803-1901-X
  • Type
    conf
  • DOI
    10.1109/ICNN.1994.374161
  • Filename
    374161