• DocumentCode
    2260447
  • Title
    On derivation of MLP backpropagation from the Kelley-Bryson optimal-control gradient formula and its application
  • Author
    Mizutani, Eiji; Dreyfus, Stuart E.; Nishio, Kenichi
  • Author_Institution
    Dept. of Ind. Eng. & Oper. Res., California Univ., Berkeley, CA, USA
  • Volume
    2
  • fYear
    2000
  • fDate
    2000
  • Firstpage
    167
  • Abstract
    The well-known backpropagation (BP) derivative computation process for multilayer perceptron (MLP) learning can be viewed as a simplified version of the Kelley-Bryson gradient formula from classical discrete-time optimal control theory. We detail the derivation in the spirit of dynamic programming, showing how these optimal-control ideas can serve to implement more elaborate learning in which teacher signals can be presented to nodes in any hidden layer, as well as at the terminal output layer. We illustrate such an elaborate training scheme on a small-scale industrial problem as a concrete example, in which some hidden nodes are taught to produce specified target values. In this context, part of the hidden layer is no longer “hidden”.
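    As a rough guide to the correspondence the abstract describes (the notation below is assumed, not taken from the paper): the Kelley-Bryson costate recursion for a stagewise system reduces to the familiar BP delta rule when the stages are MLP layers, and a teacher signal at a hidden layer enters as a nonzero per-stage cost.

```latex
% Sketch only; the symbols (f_k, L_k, \lambda_k, \delta^{(l)}) are our notation.
% Stagewise system: x_{k+1} = f_k(x_k, u_k),
% total cost:       J = \Phi(x_N) + \sum_k L_k(x_k, u_k).
\begin{align*}
  \lambda_N &= \frac{\partial \Phi}{\partial x_N}, &
  \lambda_k &= \Big(\frac{\partial f_k}{\partial x_k}\Big)^{\!\top}\lambda_{k+1}
               + \frac{\partial L_k}{\partial x_k}, &
  \frac{\partial J}{\partial u_k}
            &= \Big(\frac{\partial f_k}{\partial u_k}\Big)^{\!\top}\lambda_{k+1}.
\end{align*}
% With layer map a^{(l+1)} = \phi(W^{(l)} a^{(l)}), states x_k = a^{(l)},
% controls u_k = W^{(l)}, and L_k = 0 except where teacher signals appear,
% the costate becomes the BP delta:
\begin{align*}
  \delta^{(L)} &= \nabla_{a^{(L)}} E \odot \phi'\big(z^{(L)}\big), &
  \delta^{(l)} &= \big(W^{(l)}\big)^{\!\top}\delta^{(l+1)} \odot \phi'\big(z^{(l)}\big),
\end{align*}
% and a taught hidden layer l adds its own direct error term
% \nabla_{a^{(l)}} L_l \odot \phi'(z^{(l)}) to \delta^{(l)}.
```

    A minimal runnable sketch of the “teacher signals at hidden nodes” scheme, under assumed choices (tanh units, squared error, one taught hidden layer); the layer sizes and targets are placeholders, not the paper's industrial example:

```python
# Minimal sketch, not the paper's implementation: BP on a small MLP where one
# hidden layer also receives a teacher signal, so its delta accumulates both
# the backpropagated term and a direct error term (a nonzero "stage cost").
import numpy as np

rng = np.random.default_rng(0)
sizes = [4, 8, 8, 2]                     # input, hidden 1, hidden 2, output
W = [rng.standard_normal((m, n)) * 0.1
     for n, m in zip(sizes[:-1], sizes[1:])]

x = rng.standard_normal(sizes[0])
y_out = rng.standard_normal(sizes[-1])   # target at the output layer
y_h2 = rng.standard_normal(sizes[2])     # teacher signal for hidden layer 2
TAUGHT = 2                               # index (in `a`) of the taught layer

# Forward pass: a[l+1] = tanh(W[l] @ a[l]).
a = [x]
for Wl in W:
    a.append(np.tanh(Wl @ a[-1]))

def dtanh(act):                          # tanh' expressed via the activation
    return 1.0 - act * act

# Backward pass: the costate/delta recursion, with the taught layer's
# direct squared-error term folded in before propagating further down.
delta = [None] * len(W)
delta[-1] = (a[-1] - y_out) * dtanh(a[-1])
for l in range(len(W) - 2, -1, -1):
    delta[l] = (W[l + 1].T @ delta[l + 1]) * dtanh(a[l + 1])
    if l + 1 == TAUGHT:                  # taught layer is "no longer hidden"
        delta[l] += (a[TAUGHT] - y_h2) * dtanh(a[TAUGHT])

grads = [np.outer(d, ai) for d, ai in zip(delta, a[:-1])]   # dE/dW[l]
```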
  • Keywords
    Backpropagation; Discrete time systems; Dynamic programming; Gradient methods; Multilayer perceptrons; Optimal control; BP derivative computation process; Kelley-Bryson optimal-control gradient formula; MLP backpropagation; discrete-time optimal control theory; multilayer perceptron learning; Cost function; Industrial training; Laboratories; Neurons; Nonhomogeneous media; Optimized production technology; Poles and towers; Training data
  • fLanguage
    English
  • Publisher
    ieee
  • Conference_Titel
    Proceedings of the IEEE-INNS-ENNS International Joint Conference on Neural Networks (IJCNN 2000)
  • Conference_Location
    Como, Italy
  • ISSN
    1098-7576
  • Print_ISBN
    0-7695-0619-4
  • Type
    conf
  • DOI
    10.1109/IJCNN.2000.857892
  • Filename
    857892