• Title of article

    Commute time guided transformation for feature extraction

  • Author/Authors

    Deng, Yue and Dai, Qionghai and Wang, Ruiping and Zhang, Zengke

  • Issue Information
    Journal with serial numbering, year 2012
  • Pages
    11
  • From page
    473
  • To page
    483
  • Abstract
    This paper presents a random-walk-based feature extraction method, called commute time guided transformation (CTG), within the graph embedding framework. The paper contributes to the field in two aspects. First, it introduces the use of a robust probabilistic metric, the commute time (CT), to extract visual features for face recognition in a manifold way. Second, it designs the CTG optimization to find linear orthogonal projections that implicitly preserve the commute time of high dimensional data in a low dimensional subspace. Compared with previous CT embedding algorithms, the proposed CTG is graph-independent. Existing CT embedding methods are graph-dependent and can only embed the data on the training graph in the subspace. In contrast, the CTG paradigm can project out-of-sample data into the same embedding space as the training graph. Moreover, CTG projections are robust to the graph topology, achieving good recognition performance regardless of the initial graph structure. Owing to these properties, when applied to face recognition, the proposed CTG method outperforms other state-of-the-art algorithms on benchmark datasets. In particular, it is efficient and effective at recognizing faces corrupted by noise.
  • Keywords
    Commute time, Random walk, Manifold learning, Face recognition, Feature extraction
  • Journal title
    Computer Vision and Image Understanding
  • Serial Year
    2012
  • Record number

    1696629
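
The abstract describes embedding data so that Euclidean distances in a low dimensional subspace reflect commute times on a graph, together with linear orthogonal projections that extend the embedding to out-of-sample points. The sketch below only illustrates the standard commute-time embedding obtained from the graph Laplacian pseudoinverse spectrum; the fit_linear_projection helper is a hypothetical least-squares stand-in for the paper's CTG optimization (whose exact objective is given in the paper), and all function and variable names here are ours, not the authors'.

    import numpy as np
    from scipy.linalg import eigh, svd

    def commute_time_embedding(W_adj):
        """Embed graph nodes so squared Euclidean distance equals commute time.

        CT(i, j) = vol(G) * (L+_ii + L+_jj - 2 L+_ij), where L+ is the
        pseudoinverse of the graph Laplacian and vol(G) is the sum of degrees.
        """
        d = W_adj.sum(axis=1)
        vol = d.sum()
        L = np.diag(d) - W_adj
        # Eigendecomposition of L; drop the (near-)zero eigenvalue of the
        # connected graph to form the pseudoinverse spectrum.
        vals, vecs = eigh(L)
        nz = vals > 1e-10
        Z = np.sqrt(vol) * vecs[:, nz] / np.sqrt(vals[nz])  # n x (n-1) coordinates
        return Z

    def fit_linear_projection(X, Z, k):
        """Hypothetical sketch: learn an orthonormal projection (d x k) whose
        outputs approximate the first k commute-time coordinates in a
        least-squares sense. This stands in for the CTG optimization and is
        not the authors' algorithm.
        """
        T = np.linalg.lstsq(X, Z[:, :k], rcond=None)[0]  # d x k regression
        U, _, Vt = svd(T, full_matrices=False)
        return U @ Vt                                     # nearest orthonormal frame

    # Toy usage on a small, fully connected affinity graph over random vectors.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(30, 10))                         # 30 samples, 10-D features
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W_adj = np.exp(-D2 / D2.mean())
    np.fill_diagonal(W_adj, 0.0)
    Z = commute_time_embedding(W_adj)
    P = fit_linear_projection(X, Z, k=3)
    X_new = rng.normal(size=(5, 10))                      # out-of-sample points
    print((X_new @ P).shape)                              # (5, 3) projected features

Because the learned projection acts on raw feature vectors rather than on graph nodes, it can be applied to points outside the training graph, which is the graph-independence property the abstract emphasizes.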