• DocumentCode
    2707673
  • Title

    Improving rule extraction from neural networks by modifying hidden layer representations

  • Author

    Huynh, Thuan Q.; Reggia, James A.

  • Author_Institution
    Dept. of Comput. Sci., Univ. of Maryland, College Park, MD, USA
  • fYear
    2009
  • fDate
    14-19 June 2009
  • Firstpage
    1316
  • Lastpage
    1321
  • Abstract
    This paper describes a new method for extracting symbolic rules from multilayer feedforward neural networks. Our approach encourages backpropagation to learn a sparser representation at the hidden layer and uses the improved representation to extract fewer, easier-to-understand rules. A new error term defined over the hidden layer is added to the standard sum-of-squared error so that the total squared distance between hidden activation vectors is increased. We show that this method helps extract fewer rules without decreasing classification accuracy on four publicly available data sets.
  • Keywords
    backpropagation; multilayer perceptrons; pattern classification; vectors; data set classification; hidden activation vector; hidden layer representation; multilayer feedforward neural network; sparser representation learning; sum-of-squared error; symbolic rule extraction; Backpropagation algorithms; Computer science; Data mining; Encoding; Feedforward neural networks; Humans; Matrix decomposition; Multi-layer neural network; Neural networks; Supervised learning;
  • fLanguage
    English
  • Publisher
    IEEE
  • Conference_Titel
    2009 International Joint Conference on Neural Networks (IJCNN 2009)
  • Conference_Location
    Atlanta, GA
  • ISSN
    1098-7576
  • Print_ISBN
    978-1-4244-3548-7
  • Electronic_ISBN
    1098-7576
  • Type
    conf
  • DOI
    10.1109/IJCNN.2009.5178685
  • Filename
    5178685
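
The modification the abstract describes — augmenting the standard sum-of-squared error with a hidden-layer term that rewards larger total squared distance between hidden activation vectors — can be sketched as follows. This is a minimal NumPy illustration of that kind of objective, not the authors' code: the network shape, the sign convention, the weight `lam`, and the function names (`pairwise_sq_distance`, `modified_loss`) are all illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X, W1, W2):
    """Forward pass of a one-hidden-layer feedforward network.
    Returns hidden activation vectors H (one row per example) and outputs Y."""
    H = sigmoid(X @ W1)
    Y = sigmoid(H @ W2)
    return H, Y

def pairwise_sq_distance(H):
    """Total squared distance sum_{i<j} ||h_i - h_j||^2 between the rows of H,
    computed via the Gram matrix rather than an explicit double loop."""
    n = H.shape[0]
    sq_norms = np.sum(H * H, axis=1)
    return float(n * np.sum(sq_norms) - np.sum(H @ H.T))

def modified_loss(X, T, W1, W2, lam):
    """Standard sum-of-squared output error minus a weighted separation term,
    so minimizing the loss also pushes hidden activation vectors apart."""
    H, Y = forward(X, W1, W2)
    sse = np.sum((Y - T) ** 2)
    return sse - lam * pairwise_sq_distance(H)
```

During training one would backpropagate through this combined objective; `lam` trades off classification error against hidden-layer separation, and with `lam = 0` the objective reduces to the ordinary sum-of-squared error.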