• DocumentCode
    3707967
  • Title
    Cross-modality pose-invariant facial expression
  • Author
    Jordan Hashemi; Qiang Qiu; Guillermo Sapiro
  • Author_Institution
    Department of Electrical and Computer Engineering, Duke University, USA
  • fYear
    2015
  • Firstpage
    4007
  • Lastpage
    4011
  • Abstract
    In this work, we present a dictionary-learning-based framework for robust, cross-modality, and pose-invariant facial expression recognition. The proposed framework first learns a dictionary that i) contains 3D shape and morphological information as well as 2D texture and geometric information, ii) enforces coherence across the 2D and 3D modalities and across different poses, and iii) is robust in the sense that a learned dictionary can be applied across multiple facial expression datasets. We demonstrate that, by enforcing domain-specific block structures on the dictionary, we can transform a given test expression sample across different domains for tasks such as pose alignment. We validate our approach on the task of pose-invariant facial expression recognition on the standard BU-3DFE and Multi-PIE datasets, achieving state-of-the-art performance.
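    The abstract describes a single dictionary whose domain-specific blocks couple 2D (texture/geometric) and 3D (shape/morphological) representations of the same faces, so that a test sample coded against one block can be reconstructed through another, e.g., for pose alignment. The sketch below only illustrates that coupled, block-structured idea under simplifying assumptions (paired training samples sharing sparse codes, a greedy OMP coder, a plain least-squares dictionary update); it is not the authors' algorithm, and all names and sizes (X2d, X3d, n_atoms, k) are hypothetical.

    import numpy as np

    def omp(D, x, k):
        """Greedy orthogonal matching pursuit: k-sparse code of x over dictionary D."""
        residual, idx = x.copy(), []
        for _ in range(k):
            idx.append(int(np.argmax(np.abs(D.T @ residual))))
            coef, *_ = np.linalg.lstsq(D[:, idx], x, rcond=None)
            residual = x - D[:, idx] @ coef
        code = np.zeros(D.shape[1])
        code[idx] = coef
        return code

    def learn_coupled_dictionary(X2d, X3d, n_atoms=64, k=5, n_iter=20, seed=0):
        """Learn a dictionary with a 2D block and a 3D block that share sparse codes.

        X2d: (d2, N) 2D feature vectors, X3d: (d3, N) 3D feature vectors,
        column i of each matrix coming from the same face sample.
        """
        rng = np.random.default_rng(seed)
        X = np.vstack([X2d, X3d])                      # stack paired samples column-wise
        D = rng.standard_normal((X.shape[0], n_atoms))
        D /= np.linalg.norm(D, axis=0)
        for _ in range(n_iter):
            # Sparse coding step: one shared code per paired sample.
            A = np.column_stack([omp(D, X[:, i], k) for i in range(X.shape[1])])
            # Dictionary update step: least-squares fit, then renormalize atoms.
            D = X @ np.linalg.pinv(A)
            D /= np.linalg.norm(D, axis=0) + 1e-12
        d2 = X2d.shape[0]
        return D[:d2], D[d2:]                          # 2D block, 3D block

    def transform_2d_to_3d(D2d, D3d, x2d, k=5):
        """Code a 2D test sample with the 2D block, reconstruct it with the 3D block."""
        return D3d @ omp(D2d, x2d, k)

    In the paper's setting the blocks would additionally span multiple poses, and the cross-domain coherence is enforced as part of the learning objective rather than by the simple feature stacking used in this sketch.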
  • Keywords
    "Dictionaries","Three-dimensional displays","Face","Feature extraction","Shape","Robustness"
  • Publisher
    IEEE
  • Conference_Title
    2015 IEEE International Conference on Image Processing (ICIP)
  • Type
    conf
  • DOI
    10.1109/ICIP.2015.7351558
  • Filename
    7351558