• DocumentCode
    1681761
  • Title
    Extended theory refinement in knowledge-based neural networks
  • Author
    Garcez, Artur S. d'Avila
  • Author_Institution
    Dept. of Comput., Imperial Coll. of Sci., Technol. & Med., London, UK
  • Volume
    3
  • fYear
    2002
  • fDate
    2002
  • Firstpage
    2905
  • Lastpage
    2910
  • Abstract
    This paper shows that single-hidden-layer networks with semi-linear activation functions compute the answer set semantics of extended logic programs. As a result, incomplete (nonmonotonic) theories, presented as extended logic programs, i.e., programs possibly containing both classical and default negation, may be refined through inductive learning in knowledge-based neural networks.
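    The translation the abstract refers to can be illustrated with a toy sketch. The sketch below is a hypothetical, CILP-style construction (not the paper's exact algorithm): each program rule becomes one hidden neuron whose threshold fires only when every body literal holds, output neurons OR together the rules for each head, and iterating the forward pass converges to the program's answer set. All names, the weight value `W`, and the example program `a <- b, not c; b.` are illustrative assumptions.

    ```python
    import math

    # Hypothetical CILP-style sketch: rules of an extended logic program
    # mapped onto a single hidden layer with semi-linear activation.
    W = 5.0  # weight magnitude, large enough to saturate the sigmoid

    def sigmoid(x):
        # bipolar semi-linear activation in (-1, 1); -1 ~ false, +1 ~ true
        return 2.0 / (1.0 + math.exp(-x)) - 1.0

    # Toy program: a <- b, not c ;  b <- (a fact).
    # Each body literal is (atom, positive?); "not c" is default negation.
    rules = [("a", [("b", True), ("c", False)]),
             ("b", [])]

    atoms = sorted({h for h, _ in rules} |
                   {l for _, body in rules for l, _ in body})

    def step(state):
        """One forward pass: input layer -> hidden (one neuron per rule) -> output."""
        out = {x: -1.0 for x in atoms}
        for head, body in rules:
            # positive literals get weight +W, default-negated ones -W
            net = sum(W * state[l] if pos else -W * state[l] for l, pos in body)
            # threshold chosen so the neuron fires only if all k literals hold
            thresh = W * (len(body) - 1)
            h = sigmoid(net - thresh)
            out[head] = max(out[head], h)  # output neuron ORs the rules for head
        return out

    state = {x: -1.0 for x in atoms}
    for _ in range(3):  # iterate the network to a fixpoint
        state = step(state)
    print({x: round(v) for x, v in state.items()})  # a, b true; c false
    ```

    Running the pass a few times derives `b` from the fact, then `a` from `b` and the absence of `c`, mirroring how the network's fixpoint computes the answer set of the toy program.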
  • Keywords
    feedforward neural nets; knowledge based systems; learning (artificial intelligence); logic programming; transfer functions; default negation; extended logic programming; feedforward neural networks; hybrid architectures; inductive learning; knowledge-based neural networks; semilinear activation function; single hidden layer networks; translation algorithm; Artificial neural networks; Computer networks; Concurrent computing; Educational institutions; Hybrid power systems; Intelligent networks; Logic programming; Machine learning; Neural networks; Refining
  • fLanguage
    English
  • Publisher
    IEEE
  • Conference_Title
    Proceedings of the 2002 International Joint Conference on Neural Networks (IJCNN '02)
  • Conference_Location
    Honolulu, HI
  • ISSN
    1098-7576
  • Print_ISBN
    0-7803-7278-6
  • Type
    conf
  • DOI
    10.1109/IJCNN.2002.1007610
  • Filename
    1007610