• DocumentCode
    1817055
  • Title
    Fault tolerance training improves generalization and robustness

  • Author
    Clay, Reed D.; Séquin, Carlo H.

  • Author_Institution
    Div. of Comput. Sci., California Univ., Berkeley, CA, USA
  • Volume
    1
  • fYear
    1992
  • fDate
    7-11 Jun 1992
  • Firstpage
    769
  • Abstract
    A recurrent theme in the neural network literature is that noise is good. Other researchers have presented experimental evidence of improvements due to adding noise to the input data, presenting data in random order rather than cycling through it, truncating bits of the weights, using ad hoc modifications of the error signal, stochastic updating, and other techniques. Another source of noise, one that also forces the network to develop a more robust internal representation, is proposed: during training, one randomly introduces the types of failures that might be expected to occur during operation. It is shown how this leads to significant improvements in the network's ability to avoid overfitting, generalize to new data, and cope with internal failures.
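    The abstract describes training under randomly injected failures of the kind expected at run time. The following is a minimal sketch of that idea, not the authors' implementation: it assumes stuck-at-zero hidden-unit failures as the injected fault type, an arbitrary failure probability, and a hypothetical toy classification task, all chosen for illustration only.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical toy data: XOR-sign classification (stand-in for the paper's tasks).
    X = rng.uniform(-1, 1, size=(200, 2))
    y = (np.sign(X[:, 0] * X[:, 1]) + 1) / 2   # 0/1 labels

    n_hidden = 16
    W1 = rng.normal(0, 0.5, size=(2, n_hidden)); b1 = np.zeros(n_hidden)
    W2 = rng.normal(0, 0.5, size=(n_hidden, 1)); b2 = np.zeros(1)

    def forward(x, fail_mask=None):
        h = np.tanh(x @ W1 + b1)
        if fail_mask is not None:
            h = h * fail_mask                  # simulate "stuck-at-zero" hidden-unit failures
        out = 1 / (1 + np.exp(-(h @ W2 + b2)))
        return h, out

    lr, fail_prob = 0.5, 0.1                   # fail_prob is an assumed rate, not from the paper
    for epoch in range(500):
        # Each pass, randomly knock out a fraction of hidden units (training-time fault injection).
        fail_mask = (rng.uniform(size=n_hidden) > fail_prob).astype(float)
        h, out = forward(X, fail_mask)
        err = out - y[:, None]                           # cross-entropy gradient w.r.t. pre-sigmoid output
        grad_W2 = h.T @ err / len(X); grad_b2 = err.mean(axis=0)
        dh = (err @ W2.T) * fail_mask * (1 - h ** 2)     # failed units receive no gradient
        grad_W1 = X.T @ dh / len(X); grad_b1 = dh.mean(axis=0)
        W2 -= lr * grad_W2; b2 -= lr * grad_b2
        W1 -= lr * grad_W1; b1 -= lr * grad_b1

    # Compare behaviour with and without a simulated failure at test time.
    _, clean_out = forward(X)
    _, faulty_out = forward(X, (rng.uniform(size=n_hidden) > fail_prob).astype(float))
    print("clean accuracy :", ((clean_out[:, 0] > 0.5) == y).mean())
    print("faulty accuracy:", ((faulty_out[:, 0] > 0.5) == y).mean())
    ```

    The intent, as in the abstract, is that a network forced to tolerate such failures during training cannot rely on any single hidden unit, which encourages a more distributed, robust internal representation and reduces overfitting.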
  • Keywords
    fault tolerant computing; learning (artificial intelligence); neural nets; error signal; generalization; neural network; overfitting problem; robustness; stochastic updating; Character recognition; Computer networks; Computer science; Fault tolerance; Neural networks; Noise robustness; Recurrent neural networks; Shape; Stochastic resonance; Testing;
  • fLanguage
    English
  • Publisher
    ieee
  • Conference_Titel
    International Joint Conference on Neural Networks (IJCNN), 1992
  • Conference_Location
    Baltimore, MD
  • Print_ISBN
    0-7803-0559-0
  • Type
    conf

  • DOI
    10.1109/IJCNN.1992.287094
  • Filename
    287094