• DocumentCode
    3672093
  • Title
    Deep transfer metric learning
  • Author
    Junlin Hu; Jiwen Lu; Yap-Peng Tan
  • Author_Institution
    School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore
  • fYear
    2015
  • fDate
    6/1/2015
  • Firstpage
    325
  • Lastpage
    333
  • Abstract
    Conventional metric learning methods usually assume that the training and test samples are captured in similar scenarios, so that their distributions are the same. This assumption does not hold in many real visual recognition applications, especially when samples are captured across different datasets. In this paper, we propose a new deep transfer metric learning (DTML) method to learn a set of hierarchical nonlinear transformations for cross-domain visual recognition by transferring discriminative knowledge from the labeled source domain to the unlabeled target domain. Specifically, DTML learns a deep metric network by maximizing inter-class variations, minimizing intra-class variations, and minimizing the distribution divergence between the source and target domains at the top layer of the network. To better exploit the discriminative information from the source domain, we further develop a deeply supervised transfer metric learning (DSTML) method that adds an objective to DTML so that the outputs of both the hidden layers and the top layer are optimized jointly. Experimental results on cross-dataset face verification and person re-identification validate the effectiveness of the proposed methods.
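    The abstract describes an objective that combines intra-class compactness, inter-class separability, and a source/target distribution-divergence penalty evaluated at the top layer of a deep network. The NumPy sketch below illustrates that kind of objective on precomputed top-layer features; the function names, the k-nearest-neighbour construction of the scatter terms, the linear-kernel MMD, and the weights alpha and beta are assumptions chosen for illustration, not the paper's exact formulation or hyperparameters.

```python
import numpy as np

def pairwise_sq_dists(A, B):
    """Squared Euclidean distances between every row of A and every row of B."""
    return ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)

def mmd_sq(source_feats, target_feats):
    """Squared Maximum Mean Discrepancy with a linear kernel:
    the squared distance between the two sample means."""
    diff = source_feats.mean(axis=0) - target_feats.mean(axis=0)
    return float(diff @ diff)

def intra_inter_scatter(feats, labels, k=3):
    """Average squared distance to the k nearest same-class neighbours
    (intra-class compactness) and to the k nearest different-class
    neighbours (inter-class separability) in the given feature space."""
    d = pairwise_sq_dists(feats, feats)
    intra, inter = [], []
    for i in range(len(labels)):
        same = [j for j in range(len(labels)) if j != i and labels[j] == labels[i]]
        diff = [j for j in range(len(labels)) if labels[j] != labels[i]]
        intra.extend(sorted(d[i, same])[:k])
        inter.extend(sorted(d[i, diff])[:k])
    return float(np.mean(intra)), float(np.mean(inter))

def transfer_metric_objective(src_feats, src_labels, tgt_feats, alpha=1.0, beta=1.0):
    """Objective in the spirit of the abstract: pull same-class source pairs
    together, push different-class source pairs apart, and align the source
    and target feature distributions at the top layer."""
    s_c, s_b = intra_inter_scatter(src_feats, src_labels)
    return s_c - alpha * s_b + beta * mmd_sq(src_feats, tgt_feats)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy top-layer features: 20 labeled source samples, 15 unlabeled target samples.
    src = rng.normal(size=(20, 8))
    src_y = np.repeat([0, 1], 10)
    tgt = rng.normal(loc=0.5, size=(15, 8))
    print(transfer_metric_objective(src, src_y, tgt))
```

    In the method the abstract describes, such terms would be backpropagated through the network's hierarchical nonlinear layers during training; the sketch only evaluates them on fixed features to keep the example self-contained.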
  • Keywords
    "Measurement","Training","Face","Learning systems","Visualization","Machine learning","Face recognition"
  • Publisher
    IEEE
  • Conference_Title
    2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
  • Electronic_ISSN
    1063-6919
  • Type
    conf
  • DOI
    10.1109/CVPR.2015.7298629
  • Filename
    7298629