• DocumentCode
    2363383
  • Title
    Prior knowledge and the creation of “virtual” examples for RBF networks
  • Author
    Girosi, Federico; Chan, Nicholas Tung
  • Author_Institution
    Artificial Intelligence Lab., MIT, Cambridge, MA, USA
  • fYear
    1995
  • fDate
    31 Aug-2 Sep 1995
  • Firstpage
    201
  • Lastpage
    210
  • Abstract
    This paper considers the problem of how to incorporate prior knowledge into supervised learning techniques. The authors set the problem in the framework of regularization theory and consider the case in which the approximated function is known to have radial symmetry. The problem can be solved in two alternative ways: 1) use the invariance as a constraint in the regularization-theory framework to derive a rotation-invariant version of radial basis functions; or 2) use the radial symmetry to create new, “virtual” examples from a given data set. The authors show that these two apparently different methods of learning from “hints” (Abu-Mostafa, 1993) lead to exactly the same analytical solution. (An illustrative sketch of the virtual-example construction follows this record.)
  • Keywords
    feedforward neural nets; function approximation; learning (artificial intelligence); minimisation; RBF networks; invariance; learning from hints; prior knowledge; radial basis functions; radial symmetry; regularization theory; supervised learning techniques; virtual examples; Algorithm design and analysis; Artificial intelligence; Biology computing; Computer networks; Constraint theory; Function approximation; Laboratories; Performance analysis; Radial basis function networks; Supervised learning;
  • fLanguage
    English
  • Publisher
    IEEE
  • Conference_Title
    Neural Networks for Signal Processing V: Proceedings of the 1995 IEEE Workshop
  • Conference_Location
    Cambridge, MA
  • Print_ISBN
    0-7803-2739-X
  • Type
    conf
  • DOI
    10.1109/NNSP.1995.514894
  • Filename
    514894
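
  • Note
    The abstract's second approach builds “virtual” examples from the radial-symmetry prior: if f(Rx) = f(x) for every rotation R, each observed pair (x, y) also holds at any rotated input (Rx, y). The sketch below is a minimal Python illustration of that augmentation step only; it is an editor's sketch, not code from the paper. The helper names (random_rotation_2d, virtual_examples, gaussian_rbf), the toy data, and the Gaussian-RBF ridge fit are assumptions made for illustration, and the paper's analytical equivalence with the invariance-constrained regularization solution is not reproduced here.

        # Editor's illustrative sketch (not from the paper): augmenting a training
        # set with "virtual" examples generated from a radial-symmetry prior,
        # then fitting an ordinary Gaussian RBF model on the augmented data.
        import numpy as np

        rng = np.random.default_rng(0)

        def random_rotation_2d(rng):
            # Sample a uniformly random 2-D rotation matrix R (R @ R.T = I).
            theta = rng.uniform(0.0, 2.0 * np.pi)
            c, s = np.cos(theta), np.sin(theta)
            return np.array([[c, -s], [s, c]])

        def virtual_examples(X, y, n_rotations, rng):
            # If f(Rx) = f(x) for every rotation R, each pair (x, y) can be
            # replicated at rotated inputs (Rx, y) without new measurements.
            X_aug, y_aug = [X], [y]
            for _ in range(n_rotations):
                R = random_rotation_2d(rng)
                X_aug.append(X @ R.T)   # rotate every input point
                y_aug.append(y)         # targets are unchanged by the symmetry
            return np.vstack(X_aug), np.concatenate(y_aug)

        def gaussian_rbf(A, B, sigma=1.0):
            # Matrix of pairwise Gaussian RBF values between rows of A and B.
            d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
            return np.exp(-d2 / (2.0 * sigma ** 2))

        # Toy data: a radially symmetric target observed at 20 random points.
        X = rng.normal(size=(20, 2))
        y = np.exp(-np.linalg.norm(X, axis=1) ** 2)

        X_big, y_big = virtual_examples(X, y, n_rotations=5, rng=rng)

        # Regularized least-squares fit of RBF coefficients on the augmented set.
        K = gaussian_rbf(X_big, X_big)
        coeffs = np.linalg.solve(K + 1e-6 * np.eye(len(X_big)), y_big)

        # Predict at a new point; rotated copies of it should give similar values.
        x_new = np.array([[0.8, 0.6]])
        y_hat = gaussian_rbf(x_new, X_big) @ coeffs

    Because the model is trained on rotated copies of each observation, its predictions become approximately rotation invariant, which is the data-side counterpart of the invariance constraint discussed in the abstract.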