DocumentCode :
1294190
Title :
Scalable Large-Margin Mahalanobis Distance Metric Learning
Author :
Shen, Chunhua ; Kim, Junae ; Wang, Lei
Author_Institution :
NICTA, Canberra Res. Lab., Canberra, ACT, Australia
Volume :
21
Issue :
9
fYear :
2010
Firstpage :
1524
Lastpage :
1530
Abstract :
For many machine learning algorithms, such as k-nearest neighbor (k-NN) classifiers and k-means clustering, success often depends heavily on the metric used to calculate distances between data points. An effective way to define such a metric is to learn it from a set of labeled training samples. In this work, we propose a fast and scalable algorithm for learning a Mahalanobis distance metric. The Mahalanobis metric can be viewed as the Euclidean distance metric on linearly transformed input data. By employing the principle of margin maximization to achieve better generalization performance, this algorithm formulates metric learning as a convex optimization problem whose unknown variable is a positive semidefinite (p.s.d.) matrix. Based on the key theorem that a p.s.d. trace-one matrix can always be represented as a convex combination of rank-one matrices, our algorithm accommodates any differentiable loss function and solves the resulting optimization problem with a specialized gradient descent procedure. Throughout the optimization, the proposed algorithm maintains the positive semidefiniteness of the matrix variable, which is essential for a Mahalanobis metric. Compared with conventional methods such as standard interior-point algorithms or the special solver used in large margin nearest neighbor, our algorithm is much more efficient and scales better. Experiments on benchmark data sets suggest that, compared with state-of-the-art metric learning algorithms, our algorithm achieves comparable classification accuracy with reduced computational complexity.
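The two facts the abstract relies on can be checked numerically: a Mahalanobis distance under a p.s.d. matrix M = LᵀL equals the Euclidean distance after the linear map L, and a trace-one p.s.d. matrix decomposes as a convex combination of rank-one matrices (its eigendecomposition gives one such combination). A minimal sketch, with randomly generated data rather than the paper's learned metric:

```python
import numpy as np

rng = np.random.default_rng(0)

# A p.s.d. matrix M = L^T L defines a Mahalanobis metric.
L = rng.standard_normal((3, 3))
M = L.T @ L

x, y = rng.standard_normal(3), rng.standard_normal(3)

# Mahalanobis distance under M ...
d_mahal = np.sqrt((x - y) @ M @ (x - y))
# ... equals the Euclidean distance after the linear map L.
d_euclid = np.linalg.norm(L @ x - L @ y)
assert np.isclose(d_mahal, d_euclid)

# The decomposition the algorithm exploits: a trace-one p.s.d.
# matrix is a convex combination of rank-one matrices u_i u_i^T;
# the eigendecomposition provides one such combination.
M1 = M / np.trace(M)            # normalize to trace one
w, U = np.linalg.eigh(M1)       # eigenvalues w >= 0 with sum(w) == 1
recon = sum(w_i * np.outer(u, u) for w_i, u in zip(w, U.T))
assert np.allclose(recon, M1)
```

The paper's solver searches over such convex combinations directly, which is how it preserves positive semidefiniteness without projection steps; the sketch above only verifies the underlying identities.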
Keywords :
computational complexity; convex programming; gradient methods; learning (artificial intelligence); matrix algebra; pattern classification; pattern clustering; Euclidean distance metric; Mahalanobis distance metric learning; convex optimization problem; differentiable loss function; generalization performance; gradient descent procedure; k-means clustering; k-nearest neighbor classifiers; machine learning algorithms; margin maximization; positive semidefinite trace-one matrix; rank-one matrix; clustering algorithms; nearest neighbor searches; object recognition; scalability; distance metric learning; Mahalanobis distance; large-margin nearest neighbor; semidefinite optimization; Algorithms; Artificial Intelligence; Computational Biology; Female; Humans; Neural Networks (Computer)
fLanguage :
English
Journal_Title :
IEEE Transactions on Neural Networks
Publisher :
IEEE
ISSN :
1045-9227
Type :
jour
DOI :
10.1109/TNN.2010.2052630
Filename :
5546978