DocumentCode :
412845
Title :
3D model based expression tracking in intrinsic expression space
Author :
Wang, Qiang ; Ai, Haizhou ; Xu, Guangyou
Author_Institution :
Dept. of Comput. Sci. & Technol., Tsinghua Univ., Beijing, China
fYear :
2004
fDate :
17-19 May 2004
Firstpage :
487
Lastpage :
492
Abstract :
A method of learning an intrinsic facial expression space for expression tracking is proposed. First, a partial 3D face model is constructed from a trinocular image and the expression space is parameterized using MPEG-4 FAPs. An algorithm for learning the intrinsic expression space from the parameterized FAP space is then derived; the resulting intrinsic space is reduced to as few as 5 dimensions. We show that this learned expression space is superior to the space obtained by PCA. A dynamical model is then derived and trained on the intrinsic expression space, and the learned tracker is implemented in a particle-filter-style tracking framework. Experiments on both synthetic and real videos show that the tracker performs stably over long sequences, and the results are encouraging.
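The pipeline sketched in the abstract (a dynamical model over a low-dimensional intrinsic expression space, evaluated frame by frame in a particle-filter-style framework) can be illustrated with a minimal Python sketch. This is not the authors' implementation: the 5-dimensional state, the dynamics matrix A, the process-noise scale, and the image_likelihood placeholder are all assumptions made only to show the structure of such a tracker.

import numpy as np

# Minimal particle-filter sketch over a 5-D intrinsic expression space.
# A, process_noise and image_likelihood are illustrative placeholders,
# not the learned models described in the paper.

DIM = 5            # assumed dimensionality of the intrinsic expression space
N_PARTICLES = 200

rng = np.random.default_rng(0)
A = np.eye(DIM)                 # hypothetical learned dynamics matrix
process_noise = 0.05            # hypothetical process-noise scale

def image_likelihood(state, frame):
    """Placeholder: score how well an expression state explains the frame.
    A real tracker would map the state back to FAPs, deform the partial
    3D face model, and compare the result against the observed image."""
    return np.exp(-0.5 * np.sum(state ** 2))   # dummy score

def track_frame(particles, weights, frame):
    # 1. Propagate particles through the dynamical model plus noise.
    particles = particles @ A.T + rng.normal(0.0, process_noise, particles.shape)
    # 2. Re-weight each particle by its image likelihood.
    weights = np.array([image_likelihood(p, frame) for p in particles])
    weights /= weights.sum()
    # 3. Resample to avoid weight degeneracy.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    particles = particles[idx]
    weights = np.full(len(particles), 1.0 / len(particles))
    # 4. Report the mean state as the current expression estimate.
    return particles, weights, particles.mean(axis=0)

particles = rng.normal(0.0, 0.1, (N_PARTICLES, DIM))
weights = np.full(N_PARTICLES, 1.0 / N_PARTICLES)
for frame in range(10):         # stand-in for a sequence of video frames
    particles, weights, estimate = track_frame(particles, weights, frame)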
Keywords :
emotion recognition; face recognition; tracking; 3D model based expression tracking; MPEG-4 FAP; intrinsic facial expression space; partial 3D face model; particle-filter-style tracking framework; trinocular image; Deformable models; Facial animation parameters; Image analysis; Image motion analysis; Particle tracking; Solid modeling; Space technology; State-space methods; Target tracking; Videos;
fLanguage :
English
Publisher :
IEEE
Conference_Title :
Proceedings of the Sixth IEEE International Conference on Automatic Face and Gesture Recognition, 2004
Print_ISBN :
0-7695-2122-3
Type :
conf
DOI :
10.1109/AFGR.2004.1301580
Filename :
1301580