• DocumentCode
    1749237
  • Title
    Input decay: simple and effective soft variable selection
  • Author
    Chapados, Nicolas; Bengio, Yoshua
  • Author_Institution
    Dept. of Computer Science and Operations Research, Université de Montréal, Québec, Canada
  • Volume
    2
  • fYear
    2001
  • fDate
    2001
  • Firstpage
    1233
  • Abstract
    To deal with the overfitting that occurs in supervised learning when there are not enough examples relative to the number of input variables, the traditional approaches are weight decay and greedy variable selection. An alternative that has recently started to attract attention is to keep all the variables but put more emphasis on the “most useful” ones. We introduce a new regularization method called input decay that exerts more relative penalty on the parameters associated with the inputs that contribute less to the learned function. This method, like weight decay and variable selection, still requires performing a form of model selection. Successful comparative experiments with this new method were performed on both a simulated regression task and a real-world financial prediction task. (An illustrative sketch of an input-decay-style penalty appears after this record.)
  • Keywords
    financial data processing; learning (artificial intelligence); learning systems; financial prediction; input decay; model selection; overfitting; relative penalty; soft variable selection; supervised learning; Computational modeling; Computer networks; Computer science; Input variables; Linear regression; Machine learning algorithms; Neural networks; Operations research; Predictive models; Supervised learning
  • fLanguage
    English
  • Publisher
    ieee
  • Conference_Title
    Proceedings of the International Joint Conference on Neural Networks (IJCNN '01)
  • Conference_Location
    Washington, DC, USA
  • ISSN
    1098-7576
  • Print_ISBN
    0-7803-7044-9
  • Type
    conf
  • DOI
    10.1109/IJCNN.2001.939537
  • Filename
    939537
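  • Illustrative_Sketch
    The abstract describes input decay only qualitatively and does not give the
    functional form of the penalty. Below is a minimal, hedged sketch of one
    plausible realization, not the paper's exact formulation: a saturating,
    weight-elimination-style penalty applied to the group of first-layer weights
    fanning out from each input, so that weakly used inputs feel a stronger
    relative pull toward zero. The model class, the helper input_decay_penalty,
    and the hyperparameters lam and c are illustrative assumptions; as the
    abstract notes, such hyperparameters would themselves be chosen through
    model selection.

    # Sketch only: the exact input-decay penalty of Chapados & Bengio (2001)
    # may differ; this uses a saturating per-input group penalty (assumption).
    import torch
    import torch.nn as nn

    class MLP(nn.Module):
        def __init__(self, n_inputs: int, n_hidden: int = 16):
            super().__init__()
            self.hidden = nn.Linear(n_inputs, n_hidden)
            self.out = nn.Linear(n_hidden, 1)

        def forward(self, x):
            return self.out(torch.tanh(self.hidden(x)))

    def input_decay_penalty(model: MLP, lam: float = 1e-2, c: float = 1.0):
        # For input i with fan-out weights w_i (column i of the hidden layer's
        # weight matrix), add lam * ||w_i||^2 / (c + ||w_i||^2). The gradient
        # of this term shrinks as ||w_i|| grows, so inputs that contribute
        # little are pulled toward zero relatively harder (soft variable
        # selection), while strongly used inputs are penalized relatively less.
        sq_norms = (model.hidden.weight ** 2).sum(dim=0)  # one value per input
        return lam * (sq_norms / (c + sq_norms)).sum()

    # Synthetic check: only the first 3 of 10 inputs drive the target, so the
    # penalty should shrink the weight columns of the 7 irrelevant inputs.
    torch.manual_seed(0)
    X = torch.randn(256, 10)
    y = (X[:, :3] @ torch.tensor([1.5, -2.0, 0.5])).unsqueeze(1)
    y = y + 0.1 * torch.randn(256, 1)

    model = MLP(n_inputs=10)
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for step in range(500):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(X), y) + input_decay_penalty(model)
        loss.backward()
        opt.step()

    # Per-input weight norms; indices 3..9 should come out small.
    print((model.hidden.weight ** 2).sum(dim=0).sqrt())

    A grid search over lam (and possibly c) on a validation set would play the
    role of the model selection step mentioned in the abstract.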