Title :
Identifying individuals in video by combining 'generative' and discriminative head models
Author :
Everingham, Mark ; Zisserman, Andrew
Author_Institution :
Dept. of Engineering Science, University of Oxford
Abstract :
The objective of this work is the automatic detection and identification of individuals in unconstrained consumer video, given a minimal number of labelled faces as training data. Whilst much work has been done on (mainly frontal) face detection and recognition, current methods are not sufficiently robust to deal with the wide variations in pose and appearance found in such video, including variations in scale, illumination, expression, partial occlusion, and motion blur. We describe two areas of innovation: the first is to capture the 3-D appearance of the entire head, rather than just the face region, so that visual features such as the hairline can be exploited. The second is to combine discriminative and 'generative' approaches for detection and recognition. Images rendered using the head model are used to train a discriminative tree-structured classifier, giving efficient detection and pose estimation over a very wide pose range with three degrees of freedom. Subsequent verification of identity is obtained using the head model in a 'generative' framework. We demonstrate excellent performance in detecting and identifying three characters and their poses in a TV situation comedy.
Keywords :
face recognition; feature extraction; object detection; pattern classification; solid modelling; 3D appearance; TV situation comedy; automatic detection; automatic identification; degrees of freedom; discriminative head model; discriminative tree-structured classifier; face detection; generative head model; labelled faces; unconstrained consumer video; visual feature; Classification tree analysis; Face detection; Face recognition; Head; Lighting; Rendering (computer graphics); Robustness; TV; Technological innovation; Training data
Conference_Title :
Tenth IEEE International Conference on Computer Vision (ICCV 2005)
Conference_Location :
Beijing
Print_ISBN :
0-7695-2334-X
DOI :
10.1109/ICCV.2005.116