DocumentCode
1991140
Title
Training techniques to obtain fault-tolerant neural networks
Author
Ching-Tai Chiu ; Mehrotra, K. ; Mohan, C.K. ; Ranka, S.
Author_Institution
Sch. of Comput. & Inf. Sci., Syracuse Univ., NY, USA
fYear
1994
fDate
15-17 June 1994
Firstpage
360
Lastpage
369
Abstract
This paper addresses methods of improving the fault tolerance of feedforward neural nets. The first method is to coerce weights to have low magnitudes during the backpropagation training process, since fault tolerance is degraded by the use of high-magnitude weights; at the same time, additional hidden nodes are added dynamically to the network to ensure that the desired performance can be obtained. The second method is to add artificial faults to various components (nodes and links) of a network during training. The third method is to repeatedly remove nodes that do not significantly affect the network output, and then add new nodes that share the load of the more critical nodes in the network. Experimental results show that these methods obtain better robustness than plain backpropagation training, and compare favorably with other approaches.
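Illustrative sketch (not taken from the paper): the first two techniques described in the abstract — penalizing high-magnitude weights during backpropagation and injecting artificial node faults during training — can be combined in a few lines of NumPy. All names, hyperparameters, and the stuck-at-0 fault model below are assumptions for illustration, not the paper's actual formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, y, n_hidden=8, lr=0.5, weight_penalty=1e-3,
          fault_rate=0.1, epochs=5000):
    """Backprop with an L2 weight-magnitude penalty and random
    hidden-node faults injected each pass (hyperparameters illustrative)."""
    n_in = X.shape[1]
    W1 = rng.normal(scale=0.5, size=(n_in, n_hidden))
    W2 = rng.normal(scale=0.5, size=(n_hidden, 1))
    for _ in range(epochs):
        # Artificial faults: randomly silence hidden units this pass
        # (stuck-at-0 model), so no single unit becomes indispensable.
        mask = (rng.random(n_hidden) >= fault_rate).astype(float)
        h = sigmoid(X @ W1) * mask
        out = sigmoid(h @ W2)
        # Gradients for squared error; the penalty term pulls every
        # weight toward low magnitude, which aids fault tolerance.
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h) * mask
        W2 -= lr * (h.T @ d_out + weight_penalty * W2)
        W1 -= lr * (X.T @ d_h + weight_penalty * W1)
    return W1, W2

# Usage: learn XOR, then check degradation under each single node fault.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)
W1, W2 = train(X, y)
for faulty in range(W1.shape[1]):
    mask = np.ones(W1.shape[1]); mask[faulty] = 0.0
    pred = sigmoid(sigmoid(X @ W1) * mask @ W2)
    print(f"hidden unit {faulty} stuck at 0 -> outputs {pred.ravel().round(2)}")
```

The random mask plays the role of a transient node fault, and the magnitude penalty keeps the computation spread across many moderate weights, so silencing any one hidden unit should perturb the output only mildly; the paper's dynamic node addition and critical-node load-sharing steps are not sketched here.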
Keywords
backpropagation; fault tolerant computing; feedforward neural nets; learning (artificial intelligence); artificial faults; backpropagation training; fault-tolerant neural networks; feedforward neural nets; high magnitude weights; performance; training techniques; weights; Artificial neural networks; Backpropagation algorithms; Computer networks; Degradation; Fault tolerance; Feedforward neural networks; Information science; Neural network hardware; Neural networks; Robustness;
fLanguage
English
Publisher
IEEE
Conference_Titel
FTCS-24: Twenty-Fourth International Symposium on Fault-Tolerant Computing, 1994. Digest of Papers
Conference_Location
Austin, TX, USA
Print_ISBN
0-8186-5520-8
Type
conf
DOI
10.1109/FTCS.1994.315624
Filename
315624