DocumentCode :
1797295
Title :
Recursive soft margin subspace learning
Author :
Ye, Q.L.; Ye, Nan; Zhao, C.X.
Author_Institution :
Coll. of Inf. Sci. & Technol., Nanjing Forestry Univ., Nanjing, China
fYear :
2014
fDate :
6-11 July 2014
Firstpage :
3511
Lastpage :
3518
Abstract :
In this paper, we propose a recursive soft margin (RSM) subspace learning framework for dimension reduction of high-dimensional data, which yields strong recognition ability. RSM is motivated by the soft margin criterion of support vector machines (SVMs), which allows some training samples to be misclassified at a certain cost in order to achieve better recognition performance. Instead of maximizing the sum of squares of the Euclidean interclass (called intracluster in unsupervised learning) pairwise distances over all similar points, as in previous work, RSM seeks to maximize every pairwise interclass distance between two similar points, with each such distance taken in absolute value. We then introduce a symmetrical Hinge loss function into the RSM framework. This allows some pairwise interclass distances to violate the maximization constraint, so that satisfactory classification performance can be obtained at the cost of some training performance. To find multiple projection vectors, a recursive procedure is designed. Our framework is illustrated with Graph Embedding (GE). Any dimension reduction method expressible within GE can thus be generalized by the proposed framework, boosting its recognition power by reformulating the original problem.
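The abstract combines three ingredients: an objective over absolute pairwise projected distances, a symmetrical Hinge loss that tolerates margin violations, and a recursive procedure for extracting multiple projection vectors. Below is a minimal Python sketch of these ideas only, not the authors' formulation: the pair set, margin, penalty weight C, the finite-difference optimizer, and the deflation step are all illustrative assumptions.

import numpy as np

def hinge_margin_objective(w, X, pairs, margin=1.0, C=1.0):
    # Loss for one direction w: hinge penalty on pairs whose absolute
    # projected distance |w^T (x_i - x_j)| falls below the margin,
    # minus the total absolute projected distance (which should be large).
    w = w / (np.linalg.norm(w) + 1e-12)
    d = np.abs(X[pairs[:, 0]] @ w - X[pairs[:, 1]] @ w)
    return C * np.maximum(0.0, margin - d).sum() - d.sum()

def fit_direction(X, pairs, n_iter=200, lr=1e-2, seed=0):
    # Crude minimization of the loss above by finite-difference gradient
    # descent; suitable only for low-dimensional illustration.
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(X.shape[1])
    eps = 1e-4
    for _ in range(n_iter):
        g = np.zeros_like(w)
        for k in range(w.size):
            e = np.zeros_like(w)
            e[k] = eps
            g[k] = (hinge_margin_objective(w + e, X, pairs)
                    - hinge_margin_objective(w - e, X, pairs)) / (2 * eps)
        w -= lr * g
        w /= np.linalg.norm(w) + 1e-12
    return w

def recursive_subspace(X, pairs, n_components=2):
    # Recursive procedure (assumed here to be deflation): after learning a
    # direction, project it out of the data so the next direction is found
    # in the remaining subspace.
    Xd, W = X.copy(), []
    for _ in range(n_components):
        w = fit_direction(Xd, pairs)
        W.append(w)
        Xd = Xd - np.outer(Xd @ w, w)
    return np.column_stack(W)   # columns are the learned projection vectors

# Example usage on random data; "pairs" lists index pairs whose projected
# separation should stay above the margin (e.g., edges of a penalty graph).
X = np.random.default_rng(1).standard_normal((40, 5))
pairs = np.array([[i, j] for i in range(20) for j in range(20, 40)][:100])
W = recursive_subspace(X, pairs, n_components=2)
print(W.shape)   # (5, 2)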
Keywords :
graph theory; learning (artificial intelligence); pattern classification; support vector machines; Euclidean interclass pairwise distance; GE; RSM subspace learning; SVM; classification performance; dimension reduction; graph embedding; high-dimensional data; maximization constraint; recursive soft margin subspace learning; support vector machines; symmetrical Hinge loss function; training performance; unsupervised learning; Algorithm design and analysis; Databases; Manifolds; Optimization; Principal component analysis; Training; Vectors; QR decomposition; linear discriminant analysis; orthogonal linear discriminant analysis; orthogonal projection vectors;
fLanguage :
English
Publisher :
IEEE
Conference_Titel :
2014 International Joint Conference on Neural Networks (IJCNN)
Conference_Location :
Beijing
Print_ISBN :
978-1-4799-6627-1
Type :
conf
DOI :
10.1109/IJCNN.2014.6889389
Filename :
6889389