• DocumentCode
    13638
  • Title
    Learning Consistent Feature Representation for Cross-Modal Multimedia Retrieval
  • Author
    Cuicui Kang; Shiming Xiang; Shengcai Liao; Changsheng Xu; Chunhong Pan
  • Author_Institution
    National Laboratory of Pattern Recognition, Institute of Automation, Beijing, China
  • Volume
    17
  • Issue
    3
  • fYear
    2015
  • fDate
    March 2015
  • Firstpage
    370
  • Lastpage
    381
  • Abstract
    Cross-modal feature matching has gained much attention in recent years and has many practical applications, such as text-to-image retrieval. The most difficult problem in cross-modal matching is how to eliminate the heterogeneity between modalities. Existing methods (e.g., CCA and PLS) learn a common latent subspace in which the heterogeneity between two modalities is minimized so that cross-matching becomes possible. However, most of these methods require fully paired samples and have difficulty dealing with unpaired data. In addition, exploiting class label information has been found to be a good way to reduce the semantic gap between low-level image features and high-level document descriptions. Considering this, we propose a novel and effective supervised algorithm that can also handle unpaired data. In the proposed formulation, the basis matrices of the different modalities are learned jointly from the training samples. Moreover, a local group-based prior is introduced into the formulation to make better use of popular block-based features (e.g., HOG and GIST). Extensive experiments are conducted on four public databases: Pascal VOC2007, LabelMe, Wikipedia, and NUS-WIDE. We also evaluate the proposed algorithm with unpaired data. Compared with existing state-of-the-art algorithms, the results show that the proposed algorithm is more robust and achieves the best performance, outperforming the second-best algorithm by about 5% on both the Pascal VOC2007 and NUS-WIDE databases. (An illustrative code sketch of the shared latent-subspace idea appears after this record.)
  • Keywords
    feature extraction; image matching; image representation; image retrieval; learning (artificial intelligence); LabelMe database; NUS-WIDE database; Pascal VOC2007 database; Wikipedia database; block based features; class label information; cross-modal feature matching; cross-modal multimedia retrieval; feature representation learning; high-level document description; latent subspace learning; local group-based priori; low-level image features; modality heterogeneity; supervised learning algorithm; text-to-image retrieval; Algorithm design and analysis; Correlation; Face recognition; Multimedia communication; Semantics; Training; Vectors; Cross-modal matching; documents and images; multimedia; retrieval;
  • fLanguage
    English
  • Journal_Title
    IEEE Transactions on Multimedia
  • Publisher
    IEEE
  • ISSN
    1520-9210
  • Type
    jour
  • DOI
    10.1109/TMM.2015.2390499
  • Filename
    7006724
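
The abstract above describes jointly learning modality-specific basis matrices that map images and text into a common latent subspace for cross-modal retrieval. The sketch below is a minimal, hedged illustration of that general idea only, not the paper's actual formulation: it omits the class-label supervision, the local group-based prior, and the unpaired-data handling the abstract mentions, and the function name, dimensions, regularization weight, and synthetic data are all assumptions made for illustration.

```python
# Illustrative sketch (NOT the paper's exact method): image features X and text
# features Y are factorized with modality-specific bases U, V and shared codes Z,
#   min ||X - U Z||^2 + ||Y - V Z||^2 + lam * (||U||^2 + ||V||^2 + ||Z||^2),
# solved by alternating ridge-regularized least squares.
import numpy as np

def joint_basis_learning(X, Y, k=10, lam=0.1, n_iters=50, seed=0):
    """Learn bases U (d1 x k), V (d2 x k) and shared codes Z (k x n)."""
    rng = np.random.default_rng(seed)
    d1, n = X.shape
    d2, _ = Y.shape
    U = rng.standard_normal((d1, k))
    V = rng.standard_normal((d2, k))
    Z = rng.standard_normal((k, n))
    I_k = np.eye(k)
    for _ in range(n_iters):
        # Update the shared codes using both modalities (ridge solution).
        A = U.T @ U + V.T @ V + lam * I_k
        Z = np.linalg.solve(A, U.T @ X + V.T @ Y)
        # Update each modality's basis given the shared codes.
        G = Z @ Z.T + lam * I_k
        U = np.linalg.solve(G, Z @ X.T).T
        V = np.linalg.solve(G, Z @ Y.T).T
    return U, V, Z

if __name__ == "__main__":
    # Synthetic paired data: 200 samples, 64-dim "image" and 32-dim "text" features.
    rng = np.random.default_rng(1)
    Z_true = rng.standard_normal((10, 200))
    X = rng.standard_normal((64, 10)) @ Z_true + 0.01 * rng.standard_normal((64, 200))
    Y = rng.standard_normal((32, 10)) @ Z_true + 0.01 * rng.standard_normal((32, 200))
    U, V, Z = joint_basis_learning(X, Y, k=10)
    # Cross-modal retrieval: encode each modality into the shared space with its
    # own basis, then rank images for a text query by cosine similarity.
    Zx = np.linalg.solve(U.T @ U + 0.1 * np.eye(10), U.T @ X)  # image codes
    Zy = np.linalg.solve(V.T @ V + 0.1 * np.eye(10), V.T @ Y)  # text codes
    q = Zy[:, 0]
    sims = (Zx.T @ q) / (np.linalg.norm(Zx, axis=0) * np.linalg.norm(q) + 1e-12)
    print("top-5 image matches for text query 0:", np.argsort(-sims)[:5])
```

In this toy setup the top-ranked image for text query 0 should be image 0, since the two modalities share the same underlying code; the paper's formulation additionally uses class labels and a group-structured prior on block-based features, which this sketch does not attempt to reproduce.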