DocumentCode :
3079770
Title :
Aggregating performance metrics for classifier evaluation
Author :
Seliya, Naeem ; Khoshgoftaar, Taghi M. ; Van Hulse, Jason
Author_Institution :
Comput. & Inf. Sci., Univ. of Michigan - Dearborn, Dearborn, MI, USA
fYear :
2009
fDate :
10-12 Aug. 2009
Firstpage :
35
Lastpage :
40
Abstract :
Several performance metrics have been proposed for evaluating a classification model, e.g., accuracy, error rate, precision, and recall. While it is known that evaluating a classifier on only one performance metric is not advisable, the use of multiple performance metrics poses unique comparative challenges for the analyst. Since different performance metrics provide different perspectives on the classifier performance space, it is common for a learner to be relatively better on one performance metric and worse on another. We present a novel approach to aggregating several individual performance metrics into one metric, called the relative performance metric (RPM). A large case study consisting of 35 real-world classification datasets, 12 classification algorithms, and 10 commonly used performance metrics illustrates the practical appeal of RPM. The empirical results clearly demonstrate the benefits of using RPM when classifier evaluation requires the consideration of a large number of individual performance metrics.
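The abstract does not specify how RPM is computed, so the sketch below only illustrates one generic way to aggregate several performance metrics into a single relative score: rank each classifier on every metric and average the ranks. The function name relative_rank_aggregate, the classifier labels, and the assumption that higher metric values are better are all hypothetical and are not taken from the paper.

```python
from typing import Dict

def relative_rank_aggregate(scores: Dict[str, Dict[str, float]]) -> Dict[str, float]:
    """Aggregate several performance metrics into one relative score per classifier.

    scores maps metric name -> {classifier name -> metric value}; higher values
    are assumed to indicate better performance for every metric.
    Returns classifier name -> mean rank across metrics (lower is better).
    """
    classifiers = next(iter(scores.values())).keys()
    rank_sums = {clf: 0.0 for clf in classifiers}
    for metric_values in scores.values():
        # Rank classifiers on this metric: the best value receives rank 1.
        ordered = sorted(metric_values, key=metric_values.get, reverse=True)
        for rank, clf in enumerate(ordered, start=1):
            rank_sums[clf] += rank
    return {clf: total / len(scores) for clf, total in rank_sums.items()}

if __name__ == "__main__":
    # Hypothetical results for three classifiers on three metrics.
    example = {
        "accuracy":  {"nb": 0.81, "c4.5": 0.84, "knn": 0.79},
        "auc":       {"nb": 0.88, "c4.5": 0.83, "knn": 0.80},
        "f_measure": {"nb": 0.77, "c4.5": 0.80, "knn": 0.74},
    }
    print(relative_rank_aggregate(example))
```

In this toy example the aggregated score lets the three classifiers be compared with a single number even though no one of them dominates on every metric, which is the comparative difficulty the paper addresses.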
Keywords :
classification; classification algorithms; classification datasets; classifier evaluation; relative performance metric; Application software; Classification algorithms; Computer science; Error analysis; Image analysis; Information science; Measurement; Medical diagnosis; Performance analysis; Satellites;
fLanguage :
English
Publisher :
IEEE
Conference_Titel :
Information Reuse & Integration, 2009. IRI '09. IEEE International Conference on
Conference_Location :
Las Vegas, NV
Print_ISBN :
978-1-4244-4114-3
Electronic_ISBN :
978-1-4244-4116-7
Type :
conf
DOI :
10.1109/IRI.2009.5211611
Filename :
5211611