Title :
Comparison of non-parametric methods for assessing classifier performance in terms of ROC parameters
Author :
Yousef, Waleed A. ; Wagner, Robert F. ; Loew, Murray H.
Author_Institution :
Dept. of Electr. & Comput. Eng., George Washington Univ., DC, USA
Abstract :
The most common metric to assess a classifier's performance is the classification error rate, or the probability of misclassification (PMC). Receiver operating characteristic (ROC) analysis is a more general way to measure performance. Metrics that summarize the ROC curve include the two normal-deviate-axes parameters, a and b, and the area under the curve (AUC). The parameters a and b are the intercept and slope, respectively, of the ROC curve when it is plotted on normal-deviate axes. The AUC is the classifier's true-positive fraction (TPF) averaged over the false-positive fractions (FPF) obtained by sweeping the decision threshold. In the present work, we used Monte-Carlo simulations to compare different bootstrap-based estimators of the AUC, e.g., the leave-one-out, .632, and .632+ bootstraps. The results show comparable performance of the different estimators in terms of root-mean-square (RMS) error, with the .632+ bootstrap being the least biased.
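For context, a minimal math sketch of the standard binormal-model relations the abstract alludes to (stated here as background, not quoted from the paper): on normal-deviate axes the binormal ROC curve is the straight line with intercept a and slope b, and the AUC then has a closed form in terms of the standard normal CDF Phi,

    % Binormal ROC on normal-deviate axes and its AUC (\Phi = standard normal CDF)
    \mathrm{TPF} = \Phi\left( a + b\,\Phi^{-1}(\mathrm{FPF}) \right),
    \qquad
    \mathrm{AUC} = \Phi\left( \frac{a}{\sqrt{1 + b^{2}}} \right)

so estimating (a, b) and estimating the AUC directly are two routes to the same summary of classifier performance.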
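The bootstrap estimators named in the abstract can be sketched as follows. This is an illustrative Monte-Carlo sketch under stated assumptions (a toy one-dimensional Gaussian two-class problem and a trivial plug-in classifier), not the authors' simulation code; the .632+ weighting follows Efron and Tibshirani's relative-overfitting formula applied to 1 - AUC with a no-information value of 0.5.

    import numpy as np

    rng = np.random.default_rng(0)

    def auc(neg, pos):
        # Nonparametric (Mann-Whitney) AUC: P(pos score > neg score); ties count 1/2
        d = pos[:, None] - neg[None, :]
        return float(np.mean(d > 0) + 0.5 * np.mean(d == 0))

    def train(x, y):
        # Toy "classifier" (assumption for illustration): orient the 1-D feature
        # so the positive-class training mean receives the larger score
        sign = 1.0 if x[y == 1].mean() >= x[y == 0].mean() else -1.0
        return lambda t: sign * t

    def auc_estimators(x, y, n_boot=200):
        f = train(x, y)
        auc_app = auc(f(x[y == 0]), f(x[y == 1]))    # apparent (resubstitution) AUC
        n, oob_aucs = len(x), []
        for _ in range(n_boot):
            idx = rng.integers(0, n, n)              # bootstrap training sample
            oob = np.setdiff1d(np.arange(n), idx)    # out-of-bag test cases
            if len(np.unique(y[oob])) < 2:
                continue                             # AUC needs both classes present
            fb, xo, yo = train(x[idx], y[idx]), x[oob], y[oob]
            oob_aucs.append(auc(fb(xo[yo == 0]), fb(xo[yo == 1])))
        auc_loo = float(np.mean(oob_aucs))           # leave-one-out bootstrap AUC
        auc_632 = 0.368 * auc_app + 0.632 * auc_loo  # .632 estimator
        # .632+: data-driven weight from the relative overfitting rate,
        # computed on the error-like scale e = 1 - AUC (no-information e = 0.5)
        e_app, e_loo, gamma = 1.0 - auc_app, 1.0 - auc_loo, 0.5
        R = (e_loo - e_app) / (gamma - e_app) if gamma != e_app else 0.0
        R = min(max(R, 0.0), 1.0)
        w = 0.632 / (1.0 - 0.368 * R)
        auc_632p = 1.0 - ((1.0 - w) * e_app + w * e_loo)
        return auc_app, auc_loo, auc_632, auc_632p

    # One Monte-Carlo trial: 20 cases per class, unit-variance Gaussians 1 apart
    x = np.concatenate([rng.normal(0.0, 1.0, 20), rng.normal(1.0, 1.0, 20)])
    y = np.concatenate([np.zeros(20, int), np.ones(20, int)])
    print(auc_estimators(x, y))

Repeating such trials over many simulated training sets, and comparing each estimate against a large independent test set, is how the RMS and bias comparison described above would be carried out.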
Keywords :
Monte Carlo methods; nonparametric statistics; pattern classification; sensitivity analysis; statistical analysis; .632 bootstrap; .632+ bootstrap; Monte-Carlo simulations; bootstrap-based estimators; classification error rate; classifier performance assessment; leave-one-out bootstrap; misclassification probability; nonparametric methods; normal-deviate-axes parameters; receiver operating characteristic analysis; Biomedical imaging; Cost function; Error analysis; Laboratories; Performance analysis; Probability distribution; Statistical learning; Testing; Training data;
Conference_Title :
Applied Imagery Pattern Recognition Workshop, 2004. Proceedings. 33rd
Print_ISBN :
0-7695-2250-5
DOI :
10.1109/AIPR.2004.18