DocumentCode :
2008612
Title :
Comparison of Evaluation Metrics in Classification Applications with Imbalanced Datasets
Author :
Fatourechi, Mehrdad ; Ward, Rabab K. ; Mason, Steven G. ; Huggins, Jane ; Schlogl, A. ; Birch, Gary E.
Author_Institution :
Dept. of Electr. & Comput. Eng., Univ. of British Columbia, Vancouver, BC, Canada
fYear :
2008
fDate :
11-13 Dec. 2008
Firstpage :
777
Lastpage :
782
Abstract :
A new framework is proposed for comparing evaluation metrics in classification applications with imbalanced datasets (i.e., where the prior probability of one class vastly exceeds that of the others). The framework identifies the most suitable evaluation metric among a set of candidate metrics, both for model selection and for testing a classifier's performance. We apply this framework to compare two metrics: overall accuracy and the Kappa coefficient. Simulation results show that the Kappa coefficient is the more suitable of the two.
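The intuition behind the abstract's conclusion can be illustrated with a minimal sketch (not the paper's framework): on a hypothetical dataset with a 99:1 class ratio, a trivial classifier that always predicts the majority class attains high overall accuracy, while Cohen's Kappa, which corrects observed agreement for chance agreement, exposes the lack of real discriminative skill. The confusion matrix below is an assumed example, not data from the paper.

```python
import numpy as np

def accuracy(cm):
    """Overall accuracy: fraction of correctly classified samples."""
    return np.trace(cm) / cm.sum()

def cohens_kappa(cm):
    """Cohen's kappa = (p_o - p_e) / (1 - p_e), where p_o is observed
    agreement and p_e is chance agreement from the row/column marginals."""
    n = cm.sum()
    p_o = np.trace(cm) / n
    p_e = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical imbalanced test set: 990 majority vs 10 minority samples.
# Rows = true class, columns = predicted class; the classifier always
# predicts the majority class.
cm = np.array([[990, 0],
               [ 10, 0]])

print(accuracy(cm))      # 0.99 -> looks excellent despite no real skill
print(cohens_kappa(cm))  # 0.0  -> no agreement beyond chance
```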
Keywords :
pattern classification; Kappa coefficient; classification application; classifier testing; evaluation metrics; imbalanced datasets; model selection; Application software; Biomedical computing; Biomedical engineering; Brain computer interfaces; Computer interfaces; Cost function; Diseases; Machine learning; Physics computing; Testing; evaluation metrics; performance evaluation;
fLanguage :
English
Publisher :
ieee
Conference_Titel :
Machine Learning and Applications, 2008. ICMLA '08. Seventh International Conference on
Conference_Location :
San Diego, CA
Print_ISBN :
978-0-7695-3495-4
Type :
conf
DOI :
10.1109/ICMLA.2008.34
Filename :
4725065