Title :
An Articulatory Approach to Video-Realistic Mouth Animation
Author :
Xie, Lei; Liu, Zhi-Qiang
Author_Institution :
School of Creative Media, City University of Hong Kong
Abstract :
We propose an articulatory approach capable of converting speaker-independent continuous speech into video-realistic mouth animation. We directly model the motions of articulators, such as the lips, tongue, and teeth, using a dynamic Bayesian network (DBN)-structured articulatory model (AM). We also present an EM-based conversion algorithm that converts audio to animation parameters by maximizing the likelihood of these parameters given the input audio and the AMs. We further extend the AMs by introducing speech context information, resulting in context-dependent articulatory models (CD-AMs). Objective evaluations on the JEWEL testing set show that the animation parameters estimated by the proposed AMs and CD-AMs follow the real parameters more accurately than those estimated by phoneme-based models (PMs) and their context-dependent counterparts (CD-PMs). Subjective evaluations on an AV subjective testing set, which contains various AV content collected from the Internet, also demonstrate that the AMs and CD-AMs generate more natural and realistic mouth animations, with the CD-AMs achieving the best performance.
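The abstract describes a maximum-likelihood conversion from audio to animation parameters. The sketch below is an illustration only, not the paper's DBN-structured AMs or its EM-based algorithm: it assumes each articulatory state carries a joint Gaussian over concatenated audio and animation features, picks the state with the highest audio likelihood per frame, and outputs the conditional mean of the animation features given the audio. All dimensions, state counts, and model parameters here are hypothetical.

```python
# Toy maximum-likelihood audio-to-animation conversion (illustrative only).
# Assumption: K states, each with a joint Gaussian over [audio | animation].
import numpy as np

rng = np.random.default_rng(0)

K, audio_dim, anim_dim = 3, 2, 2          # hypothetical sizes
D = audio_dim + anim_dim
means = rng.normal(size=(K, D))           # per-state joint means
A = rng.normal(size=(K, D, D))
covs = A @ A.transpose(0, 2, 1) + 0.1 * np.eye(D)  # SPD joint covariances


def convert(audio_frames):
    """Map each audio frame to animation parameters.

    Per frame: choose the state maximizing the audio marginal likelihood,
    then return that state's conditional mean of animation given audio.
    """
    anim = []
    for x in audio_frames:
        # Audio marginal log-likelihood under each state's joint Gaussian.
        lls = []
        for k in range(K):
            mu_a = means[k, :audio_dim]
            S_aa = covs[k, :audio_dim, :audio_dim]
            diff = x - mu_a
            lls.append(-0.5 * (diff @ np.linalg.solve(S_aa, diff)
                               + np.linalg.slogdet(S_aa)[1]))
        k = int(np.argmax(lls))
        # Conditional mean E[animation | audio] for the chosen state.
        mu_a, mu_v = means[k, :audio_dim], means[k, audio_dim:]
        S_aa = covs[k, :audio_dim, :audio_dim]
        S_va = covs[k, audio_dim:, :audio_dim]
        anim.append(mu_v + S_va @ np.linalg.solve(S_aa, x - mu_a))
    return np.array(anim)


print(convert(rng.normal(size=(5, audio_dim))))  # (5, anim_dim) trajectory
```

The per-frame argmax stands in for the DBN inference over state sequences; the paper's approach additionally models articulator dynamics and refines the estimate with EM, which this sketch omits.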
Keywords :
belief networks; computer animation; expectation-maximisation algorithm; speech processing; articulatory model; context-dependent articulatory models; dynamic Bayesian network; phoneme-based models; speaker-independent continuous speech; speech context information; video-realistic mouth animation; Animation; Bayesian methods; Context modeling; Lips; Mouth; Parameter estimation; Speech; Teeth; Testing; Tongue
Conference_Titel :
2006 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2006) Proceedings
Conference_Location :
Toulouse, France
Print_ISBN :
1-4244-0469-X
DOI :
10.1109/ICASSP.2006.1660090