DocumentCode :
1859606
Title :
Context dependent viseme models for voice driven animation
Author :
Lei, Xie ; Dongmei, Jiang ; Ravyse, Ilse ; Verhelst, Werner ; Sahli, Hichem ; Slavova, Velina ; Rongchun, Zhao
Author_Institution :
Comput. Sci. & Eng., Northwestern Polytech. Univ., Xi'an, China
Volume :
2
fYear :
2003
fDate :
2-5 July 2003
Firstpage :
649
Abstract :
This paper addresses the problem of animating a talking figure, such as an avatar, using speech input only. The system that was developed is based on hidden Markov models for the acoustic observation vectors of the speech sounds that correspond to each of 16 visually distinct mouth shapes (visemes). The acoustic variability with context was taken into account by building acoustic viseme models that are dependent on the left and right viseme contexts. Our experimental results show that it is indeed possible to obtain visually relevant speech segmentation data directly from the purely acoustic speech signal.
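The context-dependent viseme models described above are analogous to triphone modelling in speech recognition: each viseme gets a distinct acoustic model per left/right neighbour pair. A minimal sketch of how such context-dependent unit labels could be generated from a viseme sequence follows; the "l-c+r" naming convention and the "sil" boundary label are illustrative assumptions, not details taken from the paper.

```python
# Sketch: expand a viseme sequence into context-dependent unit labels,
# one per viseme, encoding its left and right neighbours.
# The "l-c+r" notation and "sil" padding are assumptions for illustration.

def context_dependent_units(visemes, boundary="sil"):
    """Map each viseme to a unit naming its left and right contexts."""
    padded = [boundary] + list(visemes) + [boundary]
    units = []
    for i in range(1, len(padded) - 1):
        left, centre, right = padded[i - 1], padded[i], padded[i + 1]
        units.append(f"{left}-{centre}+{right}")
    return units

print(context_dependent_units(["V3", "V7", "V1"]))
# → ['sil-V3+V7', 'V3-V7+V1', 'V7-V1+sil']
```

With 16 visemes, the number of possible context-dependent units grows to at most 16 × 16 × 16, which is why such systems typically need parameter sharing or sufficient training data per context.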
Keywords :
computer animation; hidden Markov models; speech processing; speech recognition; HMM; acoustic observation vector; acoustic speech signal; acoustic variability; audiovisual speech processing; avatar; context dependent viseme model; hidden Markov model; left viseme context; right viseme context; speech input; speech recognition; speech segmentation data; speech sound; talking figure animating problem; visually distinct mouth shape; voice driven animation; Animation; Automatic speech recognition; Avatars; Context modeling; Hidden Markov models; Mouth; Robustness; Shape; Speech processing; Speech recognition;
fLanguage :
English
Publisher :
IEEE
Conference_Titel :
4th EURASIP Conference focused on Video/Image Processing and Multimedia Communications, 2003
Print_ISBN :
953-184-054-7
Type :
conf
DOI :
10.1109/VIPMC.2003.1220537
Filename :
1220537