Title :
Face animation based on observed 3D speech dynamics
Author :
Kalberer, Gregor A. ; Van Gool, Luc
Author_Institution :
Comput. Vision Group, Eidgenössische Tech. Hochschule, Zürich, Switzerland
Abstract :
Realistic face animation is especially hard because we are all experts in the perception and interpretation of face dynamics. One approach is to simulate facial anatomy. Alternatively, animation can be based on first observing the visible 3D dynamics, extracting the basic modes, and then putting these together according to the required performance. This is the strategy followed in this paper, which focuses on speech. The approach follows a kind of bootstrap procedure. First, 3D shape statistics are learned from a talking face carrying a relatively small number of markers. A 3D reconstruction is produced at temporal intervals of 1/25 s. A topological mask of the lower half of the face is fitted to the motion, and principal component analysis (PCA) of the mask shapes reduces the dimension of the mask shape space. The result is twofold. On the one hand, the face can be animated; in our case, it can be made to speak new sentences. On the other hand, face dynamics can be tracked in 3D without markers for performance capture.
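As an informal illustration of the PCA shape-space step described in the abstract, the sketch below (Python) builds a reduced basis of deformation modes from per-frame mask shapes and resynthesises a mask from mode coefficients. The file name, array layout, and number of retained modes are assumptions for the example, not details taken from the paper.

```python
import numpy as np

# Hypothetical data: one fitted lower-face mask per 1/25-s frame,
# flattened to a row of (x, y, z) vertex coordinates.
masks = np.load("mask_shapes.npy")         # shape: (n_frames, 3 * n_vertices) -- assumed file
mean_shape = masks.mean(axis=0)

# PCA via SVD of the mean-centred shape matrix.
U, S, Vt = np.linalg.svd(masks - mean_shape, full_matrices=False)

# Keep the leading modes that capture most of the shape variance.
k = 10                                     # number of basic modes (illustrative choice)
modes = Vt[:k]                             # (k, 3 * n_vertices) basis of deformation modes
coeffs = (masks - mean_shape) @ modes.T    # per-frame coordinates in the reduced shape space

def synthesize(c):
    """Reconstruct a 3D mask from k mode coefficients."""
    return mean_shape + c @ modes

# A new mask (e.g. for animating a new sentence) is obtained by
# recombining the modes with chosen coefficients.
frame = synthesize(coeffs[0])              # approximately reconstructs the first frame
```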
Keywords :
biology computing; biomechanics; computer animation; dynamics; principal component analysis; speech; 3D reconstruction; 3D shape statistics learning; 3D speech dynamics; basic mode extraction; bootstrap procedure; face animation; face dynamics tracking; facial anatomy; markers; mask shape-space dimension; new sentences; performance capture; principal component analysis; talking face; temporal intervals; topological mask; Anatomy; Computational modeling; Computer vision; Face detection; Facial animation; Humans; Mouth; Principal component analysis; Shape; Speech;
Conference_Title :
Proceedings of the Fourteenth Conference on Computer Animation (Computer Animation 2001)
Conference_Location :
Seoul
Print_ISBN :
0-7803-7237-9
DOI :
10.1109/CA.2001.982373