  • DocumentCode
    1123052
  • Title
    Weight shifting techniques for self-recovery neural networks
  • Author
    Khunasaraphan, C.; Vanapipat, K.; Lursinsap, C.
  • Author_Institution
    Center for Adv. Comput. Studies, Southwestern Louisiana Univ., Lafayette, LA, USA
  • Volume
    5
  • Issue
    4
  • fYear
    1994
  • fDate
    1 July 1994
  • Firstpage
    651
  • Lastpage
    658
  • Abstract
    In this paper, a self-recovery technique for feedforward neural networks, called weight shifting, and its analytical models are proposed. The technique is applied to recover a network when faulty links and/or neurons occur during operation. If some input links of a specific neuron are detected as faulty, their weights are shifted to healthy links of the same neuron. If a faulty neuron is encountered, it can be treated as a special case of faulty links by considering all of that neuron's output links to be faulty. The aim of this technique is to recover the network in a short time without any retraining or hardware repair. We also propose a hardware architecture for implementing this technique. (A simplified illustrative sketch of the shifting idea is given after this record.)
  • Keywords
    built-in self test; feedforward neural nets; neural chips; faulty links; feedforward neural networks; self-recovery neural networks; weight shifting techniques; Analytical models; Computer networks; Fault detection; Fault tolerance; Feedforward neural networks; Helium; Neural network hardware; Neural networks; Neurons; Very large scale integration
  • fLanguage
    English
  • Journal_Title
    IEEE Transactions on Neural Networks
  • Publisher
    IEEE
  • ISSN
    1045-9227
  • Type
    jour
  • DOI
    10.1109/72.298234
  • Filename
    298234
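
The abstract describes weight shifting only at a high level, so a minimal sketch is given below. It is a hypothetical, simplified illustration for a single neuron, not the authors' exact shifting rule or the proposed hardware architecture: weights on input links detected as faulty are here simply spread evenly over the neuron's remaining healthy links, so that no weight contribution is silently dropped. All names (shift_weights, faulty_mask, etc.) are placeholders introduced for this sketch.

    import numpy as np

    def shift_weights(weights, faulty_mask):
        """Illustrative (simplified) weight shifting for one neuron.

        weights     -- 1-D array of the neuron's input-link weights
        faulty_mask -- boolean array, True where an input link is faulty

        The even redistribution used here is an assumption made for
        illustration, not necessarily the rule derived in the paper.
        """
        w = weights.astype(float)
        healthy = ~faulty_mask
        n_healthy = healthy.sum()
        if n_healthy == 0:
            # No healthy input links remain: treat the neuron itself as
            # faulty, which the abstract handles by marking all of its
            # output links as faulty in the next layer.
            return np.zeros_like(w)
        lost = w[faulty_mask].sum()      # total weight sitting on faulty links
        w[faulty_mask] = 0.0             # faulty links no longer contribute
        w[healthy] += lost / n_healthy   # shift the lost weight to healthy links
        return w

    # Example: a neuron with 4 input links, link 2 detected as faulty
    w = np.array([0.5, -0.2, 0.8, 0.1])
    faulty = np.array([False, False, True, False])
    print(shift_weights(w, faulty))      # approx [0.767, 0.067, 0.0, 0.367]

As the abstract notes, a faulty neuron is treated as the special case in which all of its output links are faulty, so the same per-neuron shifting can then be applied in the following layer without retraining or hardware repair.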