  • DocumentCode
    2733467
  • Title
    An algorithm for in-the-loop training based on activation function derivative approximation
  • Author
    Yang, Jinming ; Ahmadi, M. ; Jullien, G.A. ; Miller, W.C.
  • Author_Institution
    Dept. of Electr. Eng., Windsor Univ., Ont., Canada
  • fYear
    1998
  • fDate
    9-12 Aug 1998
  • Firstpage
    556
  • Lastpage
    559
  • Abstract
    In this paper, we propose an algorithm for the in-the-loop training of a VLSI implementation of a neural network with analog neurons and programmable digital weights. The difficulty of evaluating the derivative of nonideal activation functions has been the main obstacle to effectively training a VLSI neural network chip with the standard backpropagation (BP) algorithm. In this paper, approximated derivatives are used in the BP algorithm, together with an adaptive learning rate. An analysis from the viewpoint of optimization shows that the proposed algorithm is advantageous, and experimental results indicate that it is superior to weight-perturbation-based algorithms.
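    The idea in the abstract can be illustrated with a minimal sketch: a backpropagation loop in which the true activation derivative is unavailable and a coarse approximation is used instead, combined with a simple adaptive learning rate. The quantized-derivative scheme and the multiplicative learning-rate rule below are illustrative assumptions, not the paper's actual formulas, and `tanh` merely stands in for the chip's measured analog activation.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def activation(x):
        # Stand-in for the chip's nonideal analog activation (here: tanh).
        # In true in-the-loop training this would be measured from hardware.
        return np.tanh(x)

    def approx_derivative(x, levels=4):
        # Coarse, quantized approximation of the true derivative 1 - tanh(x)^2,
        # mimicking the case where only a rough derivative estimate is available.
        d = 1.0 - np.tanh(x) ** 2
        return np.round(d * levels) / levels

    # Tiny 2-4-1 network trained on XOR as a toy in-the-loop task.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0.0], [1.0], [1.0], [0.0]])

    W1 = rng.normal(scale=0.5, size=(2, 4)); b1 = np.zeros(4)
    W2 = rng.normal(scale=0.5, size=(4, 1)); b2 = np.zeros(1)

    lr = 0.5
    prev_err = np.inf
    errors = []
    for epoch in range(2000):
        # Forward pass.
        h_in = X @ W1 + b1
        h = activation(h_in)
        o_in = h @ W2 + b2
        o = activation(o_in)

        err = float(np.mean((o - y) ** 2))
        errors.append(err)

        # Backward pass using the *approximated* derivatives.
        delta_o = (o - y) * approx_derivative(o_in)
        delta_h = (delta_o @ W2.T) * approx_derivative(h_in)

        W2 -= lr * h.T @ delta_o; b2 -= lr * delta_o.sum(0)
        W1 -= lr * X.T @ delta_h; b1 -= lr * delta_h.sum(0)

        # Assumed adaptive learning-rate rule: grow on improvement, shrink otherwise.
        lr = min(lr * 1.02, 1.0) if err < prev_err else max(lr * 0.5, 1e-3)
        prev_err = err

    print(f"initial error {errors[0]:.4f} -> final error {errors[-1]:.4f}")
    ```

    Even with the derivative quantized to a few levels, the error typically decreases, which is the qualitative point of using approximated derivatives in place of the exact (and, on hardware, unmeasurable) ones.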
  • Keywords
    VLSI; backpropagation; mixed analogue-digital integrated circuits; neural chips; transfer functions; VLSI implementation; activation function derivative approximation; adaptive learning rate; analog neurons; in-the-loop training; neural network; programmable digital weights; Algorithm design and analysis; Approximation algorithms; Circuit synthesis; Convergence; Error correction; Neural networks; Neurons; Perturbation methods; Robustness; Very large scale integration
  • fLanguage
    English
  • Publisher
    IEEE
  • Conference_Titel
    Proceedings of the 1998 Midwest Symposium on Circuits and Systems
  • Conference_Location
    Notre Dame, IN
  • Print_ISBN
    0-8186-8914-5
  • Type
    conf
  • DOI
    10.1109/MWSCAS.1998.759553
  • Filename
    759553