DocumentCode :
1797932
Title :
MofN rule extraction from neural networks trained with augmented discretized input
Author :
Setiono, Rudy ; Azcarraga, Arnulfo ; Hayashi, Yasuhiro
Author_Institution :
Sch. of Comput., Nat. Univ. of Singapore, Singapore, Singapore
fYear :
2014
fDate :
6-11 July 2014
Firstpage :
1079
Lastpage :
1086
Abstract :
The accuracy of neural networks can be improved when they are trained with discretized continuous attributes as additional inputs. Such input augmentation makes it easier for the network weights to form more accurate decision boundaries when the data samples of different classes are contained in distinct hyper-rectangular subregions of the original input space. In this paper we first present how a neural network can be trained with augmented discretized inputs. The additional inputs are obtained by simply dividing the original interval of each continuous attribute into subintervals of equal length. A thermometer encoding scheme is used to represent these discretized inputs. The network is then pruned to remove most of the discretized inputs as well as the original continuous attributes, as long as the network still meets a preset minimum accuracy requirement. We then discuss how MofN rules can be extracted from the pruned network by analyzing the activations of its hidden units and the weights of the connections that remain after pruning. For data sets whose classes are defined by relatively complex boundaries, surprisingly simple MofN rules with very good accuracy rates are obtained.
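Below is a minimal sketch of the kind of input augmentation the abstract describes: the range of each continuous attribute is split into subintervals of equal length, each value is thermometer-encoded against the resulting cut points, and the binary code is appended to the original input. The function name, the use of NumPy, and the choice of four subintervals are illustrative assumptions, not details taken from the paper.

import numpy as np

def thermometer_encode(x, lo, hi, n_bins):
    # Cut points that split the original range [lo, hi] into n_bins
    # subintervals of equal length (illustrative helper, not from the paper).
    cuts = lo + (hi - lo) * np.arange(1, n_bins) / n_bins
    # Thermometer code: bit i is 1 whenever x has passed the i-th cut point,
    # e.g. with 4 subintervals a value in the third one becomes [1, 1, 0].
    return (np.asarray(x)[..., None] >= cuts).astype(int)

# Augment a toy continuous attribute in [0, 1] with its thermometer code;
# the network would then be trained on both representations and pruned.
X = np.array([[0.12], [0.47], [0.90]])
codes = thermometer_encode(X[:, 0], 0.0, 1.0, 4)   # shape (3, 3)
X_aug = np.hstack([X, codes])                      # original + discretized inputs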
Keywords :
data mining; encoding; learning (artificial intelligence); network theory (graphs); neural nets; thermometers; MofN rule extraction; augmented discretized input; continuous attributes; data samples; decision boundaries; discretized continuous attributes; hyper-rectangular subregions; input augmentation; network connections; network pruning; neural network training; thermometer encoding scheme; Accuracy; Data mining; Educational institutions; Input variables; Neural networks; Training; Training data
fLanguage :
English
Publisher :
ieee
Conference_Titel :
2014 International Joint Conference on Neural Networks (IJCNN)
Conference_Location :
Beijing
Print_ISBN :
978-1-4799-6627-1
Type :
conf
DOI :
10.1109/IJCNN.2014.6889691
Filename :
6889691