Title of article :
Large Margin Subspace Learning for feature selection
Author/Authors :
Liu, Bo; Fang, Bin; Liu, Xinwang; Chen, Jie; Huang, Zhenghong; He, Xiping
Issue Information :
Journal with serial issue number, year 2013
Abstract :
Recent research has shown the benefits of the large margin framework for feature selection. In this paper, we propose a novel feature selection algorithm, termed Large Margin Subspace Learning (LMSL), which seeks a projection matrix that maximizes the margin of a given sample, defined as the distance between its nearest miss (the nearest neighbor with a different label) and its nearest hit (the nearest neighbor with the same label). Instead of computing the nearest neighbors of the given sample directly, we treat each sample with a different (same) label as a potential nearest miss (hit), with the probability estimated by kernel density estimation. In this way, the nearest miss (hit) is computed as an expectation over all different-class (same-class) samples. To perform feature selection, an ℓ2,1-norm penalty is imposed on the projection matrix to enforce row sparsity. An efficient algorithm is then proposed to solve the resulting optimization problem. Comprehensive experiments compare the proposed algorithm with five state-of-the-art algorithms: RFS, SPFS, mRMR, TR and LLFS. It achieves better performance than the first four; compared with LLFS, it attains competitive performance at a significantly lower computational cost.
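A minimal sketch of the objective described in the abstract, in assumed notation (W is the projection matrix; NM(x_i) and NH(x_i) are the expected nearest miss and nearest hit of sample x_i; lambda is a hypothetical regularization weight; the paper's exact formulation may differ):

\rho_i(W) = \big\| W^\top \bigl(x_i - \mathrm{NM}(x_i)\bigr) \big\|_2 - \big\| W^\top \bigl(x_i - \mathrm{NH}(x_i)\bigr) \big\|_2

\max_{W} \; \sum_{i=1}^{n} \rho_i(W) - \lambda \, \| W \|_{2,1}, \qquad \| W \|_{2,1} = \sum_{j=1}^{d} \Big( \sum_{k} W_{jk}^{2} \Big)^{1/2}

The ℓ2,1-norm sums the Euclidean norms of the rows of W, so rows driven to zero correspond to discarded features.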
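The probabilistic nearest-neighbor step can be illustrated with a short sketch (assumptions not taken from the paper: a Gaussian kernel for the density estimate, a bandwidth parameter sigma, and the helper name soft_neighbors):

import numpy as np

def soft_neighbors(X, y, i, sigma=1.0):
    # Expected nearest hit and nearest miss of sample i: every
    # same-class (different-class) sample is a potential hit (miss),
    # weighted by a Gaussian kernel density estimate.
    xi = X[i]
    same = (y == y[i])
    same[i] = False            # a sample is not its own neighbor
    diff = (y != y[i])

    def expectation(mask):
        Z = X[mask]
        d2 = np.sum((Z - xi) ** 2, axis=1)
        w = np.exp(-d2 / (2.0 * sigma ** 2))
        w /= w.sum()           # kernel weights as probabilities
        return w @ Z           # expectation over candidate neighbors

    return expectation(same), expectation(diff)   # (hit, miss)

The per-sample margin is then the projected distance to the expected miss minus the projected distance to the expected hit.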
Keywords :
Subspace learning , feature selection , ℓ2,1-norm regularization , large margin maximization
Journal title :
PATTERN RECOGNITION