• DocumentCode
    2496591
  • Title
    Comparison of MLP cost functions to dodge mislabeled training data
  • Author
    Nieminen, Paavo; Kärkkäinen, Tommi
  • Author_Institution
    Dept. of Math. Inf. Technol., Univ. of Jyväskylä, Jyväskylä, Finland
  • fYear
    2010
  • fDate
    18-23 July 2010
  • Firstpage
    1
  • Lastpage
    7
  • Abstract
    Multilayer perceptrons (MLP) are often trained by minimizing the mean of squared errors (MSE), which is a sum of squared Euclidean norms of error vectors. Less common is to minimize the sum of Euclidean norms without squaring them. The latter approach, the mean of non-squared errors (ME), has roots in robust statistics. We carried out computational experiments to see whether it is notably better to train an MLP classifier by minimizing ME instead of MSE in the special case where the training data contains class noise, i.e., some mislabeling. Based on our experiments, we conclude that for small datasets containing class noise, ME can indeed be a preferable choice, whereas for larger datasets it may not help.
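    The two cost functions contrasted in the abstract can be sketched in a few lines. This is a minimal illustration, not code from the paper; the function names and the toy error vectors are invented here to show why ME dampens the influence of a single large error (such as one caused by a mislabeled sample) relative to MSE.

    ```python
    import math

    def mse(errors):
        # Mean of squared errors: average of squared Euclidean norms ||e_i||^2.
        return sum(sum(c * c for c in e) for e in errors) / len(errors)

    def me(errors):
        # Mean of non-squared errors (ME): average of Euclidean norms ||e_i||.
        return sum(math.sqrt(sum(c * c for c in e)) for e in errors) / len(errors)

    # Ten small residuals plus one large residual, e.g. from a mislabeled sample.
    errors = [[0.1, 0.1]] * 10 + [[3.0, 4.0]]

    # The outlier contributes 25.0 to the MSE sum but only 5.0 (its norm) to
    # the ME sum, so under ME it dominates the cost far less.
    ```

    Because each sample's contribution to ME grows only linearly with its error norm, a mislabeled point pulls on the cost (and hence on the gradient during training) much less than under MSE, where the contribution grows quadratically.
    
    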
  • Keywords
    mean square error methods; multilayer perceptrons; mean squared errors; mislabeled training data; squared Euclidean norms; Iris
  • fLanguage
    English
  • Publisher
    IEEE
  • Conference_Titel
    The 2010 International Joint Conference on Neural Networks (IJCNN)
  • Conference_Location
    Barcelona
  • ISSN
    1098-7576
  • Print_ISBN
    978-1-4244-6916-1
  • Type
    conf
  • DOI
    10.1109/IJCNN.2010.5596865
  • Filename
    5596865