Title :
Video-based face recognition via joint sparse representation
Author :
Yi-Chen Chen ; Patel, Vishal M. ; Shekhar, Shashi ; Chellappa, Rama ; Phillips, Jonathon
Author_Institution :
Dept. of Electr. & Comput. Eng., Univ. of Maryland, College Park, MD, USA
Abstract :
In video-based face recognition, a key challenge lies in exploiting the extra information available in a video, e.g., face, body, and motion identity cues. In addition, different video sequences of the same subject may contain variations in resolution, illumination, pose, and facial expression. These variations compound the difficulty of designing an effective video-based face-recognition algorithm. We propose a novel multivariate sparse representation method for video-to-video face recognition. Our method simultaneously takes into account correlations as well as coupling information among the video frames, jointly representing all the video data by a sparse linear combination of training data. In addition, we modify our model so that it is robust in the presence of noise and occlusion. Furthermore, we kernelize the algorithm to handle the non-linearities present in video data. Numerous experiments using unconstrained video sequences show that our method is effective and performs significantly better than many state-of-the-art video-based face recognition algorithms in the literature.
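The core idea of the abstract, coding all frames of a query video jointly so that they share a common sparse support over the training dictionary, can be sketched with a row-sparse (l2,1-regularized) solver. This is a minimal illustration, not the authors' algorithm: the dictionary, subject layout, and the use of scikit-learn's `MultiTaskLasso` as the joint sparse coder are all assumptions made for the example.

```python
import numpy as np
from sklearn.linear_model import MultiTaskLasso

rng = np.random.default_rng(0)

# Hypothetical training dictionary: 50-dim features,
# 5 subjects with 8 training frames (atoms) each.
n_dim, n_subjects, n_per = 50, 5, 8
D = rng.standard_normal((n_dim, n_subjects * n_per))
D /= np.linalg.norm(D, axis=0)            # unit-norm atoms

# Query video: 6 noisy frames of subject 2, stacked as columns of Y.
true_subject = 2
block = D[:, true_subject * n_per:(true_subject + 1) * n_per]
Y = block @ rng.random((n_per, 6))
Y += 0.01 * rng.standard_normal(Y.shape)

# Joint sparse coding: the l2,1 penalty makes the coefficient matrix
# row-sparse, so every frame selects the same dictionary atoms.
model = MultiTaskLasso(alpha=0.01, max_iter=5000).fit(D, Y)
C = model.coef_.T                          # shape (n_atoms, n_frames)

# Classify by minimum class-wise reconstruction residual.
residuals = [
    np.linalg.norm(Y - D[:, s * n_per:(s + 1) * n_per]
                   @ C[s * n_per:(s + 1) * n_per])
    for s in range(n_subjects)
]
predicted = int(np.argmin(residuals))
print(predicted)
```

The residual test mirrors the standard sparse-representation classification rule: the subject whose training atoms best reconstruct the query frames (using only that subject's rows of the joint coefficient matrix) is declared the match.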
Keywords :
face recognition; image representation; image resolution; image sequences; pose estimation; video signal processing; facial expression; joint sparse representation; multivariate sparse representation method; nonlinearity handling; unconstrained video sequence; video data representation; video-to-video face recognition; Dictionaries; Face; Face recognition; Joints; Legged locomotion; Lighting; Video sequences
Conference_Titel :
2013 10th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG)
Conference_Location :
Shanghai
Print_ISBN :
978-1-4673-5545-2
Electronic_ISBN :
978-1-4673-5544-5
DOI :
10.1109/FG.2013.6553787