DocumentCode :
3400708
Title :
Using HMMs in audio-to-visual conversion
Author :
Rao, R. ; Mersereau, R. ; Chen, T.
Author_Institution :
Center for Signal & Image Process., Georgia Inst. of Technol., Atlanta, GA, USA
fYear :
1997
fDate :
23-25 Jun 1997
Firstpage :
19
Lastpage :
24
Abstract :
One emerging application that exploits the correlation between audio and video is speech-driven facial animation. The goal of speech-driven facial animation is to synthesize realistic video sequences from acoustic speech. Much of the previous research has implemented this audio-to-visual conversion strategy with existing techniques such as vector quantization and neural networks. We examine how this conversion process can be accomplished with hidden Markov models.
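The abstract describes converting acoustic speech into visual (facial) parameters with a hidden Markov model. Below is a minimal, hypothetical sketch of one such audio-to-visual pipeline, assuming hmmlearn's GaussianHMM, synthetic per-frame audio and visual feature vectors, and a simple per-state visual lookup; the paper's actual model structure and training procedure are not reproduced here and may differ.

```python
# Illustrative HMM-based audio-to-visual conversion sketch (NOT the paper's exact method).
# Assumes: hmmlearn is installed; features are synthetic stand-ins for real
# acoustic (e.g., cepstral) and visual (e.g., mouth-shape) parameters.
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(0)

# Synthetic training data: one audio feature vector and one visual
# parameter vector per video frame.
n_frames, n_audio_dims, n_visual_dims = 500, 12, 4
audio_feats = rng.normal(size=(n_frames, n_audio_dims))
visual_feats = rng.normal(size=(n_frames, n_visual_dims))

# 1. Train an HMM on the audio feature sequence.
n_states = 8
hmm = GaussianHMM(n_components=n_states, covariance_type="diag",
                  n_iter=50, random_state=0)
hmm.fit(audio_feats)

# 2. Decode the training audio into a hidden-state sequence and associate
#    each state with the mean visual parameters of the frames it covers.
states = hmm.predict(audio_feats)
state_to_visual = np.zeros((n_states, n_visual_dims))
for s in range(n_states):
    mask = states == s
    if mask.any():
        state_to_visual[s] = visual_feats[mask].mean(axis=0)

# 3. At synthesis time, decode new audio and look up a visual-parameter
#    trajectory frame by frame; this trajectory would drive the face model.
new_audio = rng.normal(size=(100, n_audio_dims))
new_states = hmm.predict(new_audio)
predicted_visual = state_to_visual[new_states]  # shape: (100, n_visual_dims)
print(predicted_visual.shape)
```

In practice the decoded trajectory would typically be smoothed before animating a face model; the per-state mean lookup used here is only one of several possible audio-to-visual mappings.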
Keywords :
audio-visual systems; computer animation; hidden Markov models; speech recognition; video signal processing; HMMs; acoustic speech; audio to visual conversion strategy; conversion process; hidden Markov models; neural networks; realistic video sequences; speech driven facial animation; vector quantization; Facial animation; Hidden Markov models; Image converters; Mouth; Multilayer perceptrons; Signal processing; Speech recognition; Speech synthesis; Streaming media; Video sequences;
fLanguage :
English
Publisher :
IEEE
Conference_Titel :
IEEE First Workshop on Multimedia Signal Processing, 1997
Conference_Location :
Princeton, NJ
Print_ISBN :
0-7803-3780-8
Type :
conf
DOI :
10.1109/MMSP.1997.602607
Filename :
602607