DocumentCode :
2486446
Title :
On the efficient minimization of convex surrogates in supervised learning
Author :
Nock, Richard; Nielsen, Frank
Author_Institution :
CEREGMIA, U. Antilles-Guyane, Schoelcher
fYear :
2008
fDate :
8-11 Dec. 2008
Firstpage :
1
Lastpage :
4
Abstract :
Bartlett et al. (2006) recently proved that a ground condition for convex surrogates, classification calibration, ties the minimization of the surrogate risk to that of the classification risk, and left the algorithmic questions about minimizing these surrogates as important open problems. Our paper gives an answer for a wide subset of these surrogates that we call "balanced surrogates", a set with popular members (logistic loss, squared loss) that contains all surrogates meeting three important requirements for classification. We propose an algorithm that fits linear separators by minimizing any such surrogate, with guaranteed convergence bounds under a so-called "weak learning assumption", a generalization of the one that grounds celebrated boosting algorithms. Experiments with ten flavors of the algorithm on more than 50 readily available domains demonstrate the performance of the new surrogates.
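Code_Sketch :
A minimal, hypothetical Python sketch of the kind of surrogate minimization the abstract describes: plain gradient descent on the logistic loss (one of the "balanced surrogates" named above) to fit a linear separator. This is an illustration under our own assumptions, not the paper's boosting-style algorithm; the function names, step size, and iteration count here are ours.

import numpy as np

def logistic_surrogate(w, X, y):
    # Logistic loss, a classification-calibrated convex surrogate:
    # mean over the sample of log(1 + exp(-y_i * <w, x_i>)), y_i in {-1, +1}.
    margins = y * (X @ w)
    return np.mean(np.log1p(np.exp(-margins)))

def fit_linear_separator(X, y, lr=0.1, n_iter=500):
    # Plain gradient descent on the surrogate; a stand-in for the
    # boosting-style updates with convergence guarantees analyzed in the paper.
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        margins = y * (X @ w)
        # Gradient of log(1 + exp(-m)) w.r.t. w is -y * x * sigmoid(-m).
        grad = -(X * (y / (1.0 + np.exp(margins)))[:, None]).mean(axis=0)
        w -= lr * grad
    return w

# Example: a linearly separable toy sample with labels in {-1, +1}.
X = np.array([[1.0, 2.0], [2.0, 1.0], [-1.0, -2.0], [-2.0, -1.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
w = fit_linear_separator(X, y)
print(logistic_surrogate(w, X, y))  # surrogate risk shrinks toward 0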
Keywords :
learning (artificial intelligence); minimisation; algorithmic questions; boosting algorithms; classification calibration; convex surrogates; supervised learning; surrogate minimization; weak learning assumption; Additives; Boosting; Calibration; Convergence; Displays; Logistics; Minimization methods; Particle separators; Risk management; Supervised learning;
fLanguage :
English
Publisher :
ieee
Conference_Titel :
Pattern Recognition, 2008. ICPR 2008. 19th International Conference on
Conference_Location :
Tampa, FL
ISSN :
1051-4651
Print_ISBN :
978-1-4244-2174-9
Electronic_ISBN :
1051-4651
Type :
conf
DOI :
10.1109/ICPR.2008.4761667
Filename :
4761667