DocumentCode :
2709075
Title :
Start Globally, Optimize Locally, Predict Globally: Improving Performance on Imbalanced Data
Author :
Cieslak, David A. ; Chawla, Nitesh V.
Author_Institution :
Dept. of Comput. Sci. & Eng., Univ. of Notre Dame, Notre Dame, IN
fYear :
2008
fDate :
15-19 Dec. 2008
Firstpage :
143
Lastpage :
152
Abstract :
Class imbalance is a ubiquitous problem in supervised learning and has gained wide-scale attention in the literature. Perhaps the most prevalent solution is to apply sampling to the training data in order to improve classifier performance. The typical approach applies a uniform level of sampling globally. However, we believe that data are typically multi-modal, which suggests that sampling should be treated locally rather than globally. The purpose of this paper is to propose a framework that first identifies meaningful regions of the data and then finds an optimal sampling level within each. This paper demonstrates that a global classifier trained on locally sampled data produces superior rank-orderings on a wide range of real-world and artificial datasets as compared to contemporary global sampling methods.
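The local-sampling idea from the abstract is illustrated below with a minimal sketch, not the authors' exact algorithm: it stands in k-means clustering for the region-identification step and, within each mixed region, simply balances the classes with SMOTE rather than searching for an optimal per-region sampling level, then trains a single global classifier on the pooled data and scores its rank-ordering with AUC. The helper name locally_sampled_training_set and all parameter choices are illustrative; the sketch assumes scikit-learn and imbalanced-learn are installed.

# Illustrative sketch only: k-means regions + per-region SMOTE + one global classifier.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from imblearn.over_sampling import SMOTE


def locally_sampled_training_set(X, y, n_regions=5, k_neighbors=5, seed=0):
    """Resample each region of the training data separately and pool the results."""
    regions = KMeans(n_clusters=n_regions, n_init=10, random_state=seed).fit_predict(X)
    X_parts, y_parts = [], []
    for r in np.unique(regions):
        X_r, y_r = X[regions == r], y[regions == r]
        classes, counts = np.unique(y_r, return_counts=True)
        # Skip SMOTE when a region is single-class or too small for neighbor search;
        # a fuller implementation would instead search for an optimal level per region.
        if len(classes) < 2 or counts.min() <= k_neighbors:
            X_parts.append(X_r)
            y_parts.append(y_r)
            continue
        X_res, y_res = SMOTE(k_neighbors=k_neighbors, random_state=seed).fit_resample(X_r, y_r)
        X_parts.append(X_res)
        y_parts.append(y_res)
    return np.vstack(X_parts), np.concatenate(y_parts)


if __name__ == "__main__":
    # Synthetic imbalanced data (about 5% minority class).
    X, y = make_classification(n_samples=4000, n_features=10,
                               weights=[0.95, 0.05], random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
    X_rs, y_rs = locally_sampled_training_set(X_tr, y_tr)
    clf = RandomForestClassifier(random_state=0).fit(X_rs, y_rs)
    auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
    print(f"AUC with local (per-region) SMOTE: {auc:.3f}")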
Keywords :
data mining; learning (artificial intelligence); pattern classification; class imbalance; supervised learning; training data; sampling methods; SMOTE; local sampling
fLanguage :
English
Publisher :
ieee
Conference_Title :
2008 Eighth IEEE International Conference on Data Mining (ICDM '08)
Conference_Location :
Pisa
ISSN :
1550-4786
Print_ISBN :
978-0-7695-3502-9
Type :
conf
DOI :
10.1109/ICDM.2008.87
Filename :
4781109