DocumentCode :
3299087
Title :
Feature selection via set cover
Author :
Dash, M.
Author_Institution :
Dept. of Inf. Syst. & Comput. Sci., Nat. Univ. of Singapore, Singapore
fYear :
1997
fDate :
4 Nov. 1997
Firstpage :
165
Lastpage :
171
Abstract :
In pattern classification, features are used to define classes. Feature selection is a preprocessing step that searches for an “optimal” subset of features. Class separability is normally used as the basic feature selection criterion. Instead of maximizing class separability, as in the literature, this work adopts a criterion aimed at maintaining the discriminating power of the data in describing its classes. In other words, the problem is formalized as finding the smallest set of features that is “consistent” in describing the classes. We describe a multivariate measure of feature consistency. The new feature selection algorithm is based on Johnson's (1974) algorithm for set covering. Johnson's analysis implies that this algorithm runs in polynomial time and outputs a consistent feature set whose size is within a logarithmic factor of the best possible. Our experiments show that its performance in practice is much better than this, and that it outperforms earlier methods using a similar amount of time.
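The set-cover reduction the abstract describes can be sketched as follows. This is an illustrative reconstruction of the general idea, not the paper's exact algorithm: each pair of samples with different class labels is an element of the set-cover universe, each feature "covers" the pairs it distinguishes, and Johnson's greedy rule repeatedly picks the feature covering the most still-uncovered pairs until the selected subset is consistent (no two samples agree on all selected features yet differ in class). The function name and data layout are assumptions for the sketch.

```python
from itertools import combinations

def greedy_consistent_features(X, y):
    """Illustrative greedy set-cover sketch of consistency-based feature
    selection (not the paper's exact implementation).

    X: list of samples, each a list of discrete feature values.
    y: list of class labels, one per sample.
    Returns indices of a feature subset that is consistent, i.e. no two
    samples with different labels agree on all selected features.
    """
    n_features = len(X[0])
    # Universe of the set-cover instance: index pairs with different labels.
    uncovered = {(i, j) for i, j in combinations(range(len(X)), 2)
                 if y[i] != y[j]}
    selected = []
    while uncovered:
        # Johnson's greedy rule: pick the feature distinguishing the most
        # still-uncovered cross-class pairs.
        best = max(range(n_features),
                   key=lambda f: sum(1 for i, j in uncovered
                                     if X[i][f] != X[j][f]))
        newly = {(i, j) for i, j in uncovered if X[i][best] != X[j][best]}
        if not newly:
            # The data itself is inconsistent: some cross-class pair agrees
            # on every feature, so no subset can separate it.
            break
        selected.append(best)
        uncovered -= newly
    return selected
```

On a toy dataset such as `X = [[0, 0, 1], [0, 1, 1], [1, 0, 0], [1, 1, 0]]` with `y = [0, 0, 1, 1]`, a single feature already separates the classes, so the greedy loop stops after one pick, matching the abstract's point that the selected set is small (within a logarithmic factor of optimal, by Johnson's analysis).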
Keywords :
computational complexity; feature extraction; pattern classification; search problems; set theory; algorithm performance; class separability; discriminating power; feature consistency; feature selection; multivariate measure; optimal feature subset searching; pattern classification; preprocessing; set covering; Binary search trees; Computer science; Data mining; Entropy; Error analysis; Error probability; Information systems; Pattern classification; Polynomials; Statistical analysis;
fLanguage :
English
Publisher :
ieee
Conference_Titel :
Knowledge and Data Engineering Exchange Workshop, 1997. Proceedings
Conference_Location :
Newport Beach, CA
Print_ISBN :
0-8186-8230-2
Type :
conf
DOI :
10.1109/KDEX.1997.629862
Filename :
629862