DocumentCode :
932817
Title :
Learning dynamic audio-visual mapping with input-output Hidden Markov models
Author :
Li, Yan ; Shum, Heung-Yeung
Volume :
8
Issue :
3
fYear :
2006
fDate :
6/1/2006
Firstpage :
542
Lastpage :
549
Abstract :
In this paper, we formulate the problem of synthesizing facial animation from an input audio sequence as a dynamic audio-visual mapping. We propose that audio-visual mapping should be modeled with an input-output hidden Markov model, or IOHMM. An IOHMM is an HMM for which the output and transition probabilities are conditional on the input sequence. We train IOHMMs using the expectation-maximization (EM) algorithm with a novel architecture that uses neural networks to explicitly model the relationship between the transition probabilities and the input. Given an input sequence, the output sequence is synthesized by maximum likelihood estimation. Experimental results demonstrate that IOHMMs can generate natural, good-quality facial animation sequences from the input audio.
Keywords :
audio-visual systems; computer animation; expectation-maximisation algorithm; face recognition; hidden Markov models; multimedia systems; neural nets; speech processing; audio sequence; dynamic audio-visual mapping; expectation-maximization algorithm; facial animation sequence; input-output hidden Markov model; maximum likelihood estimation; neural network; Facial animation; Hidden Markov models; Maximum likelihood estimation; Network synthesis; Neural networks; Signal mapping; Signal processing; Signal synthesis; Speech recognition; Video sharing; Animation; HMM; IOHMM; audio-visual mapping; learning
fLanguage :
English
Journal_Title :
IEEE Transactions on Multimedia
Publisher :
IEEE
ISSN :
1520-9210
Type :
jour
DOI :
10.1109/TMM.2006.870732
Filename :
1632039
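The abstract's core idea — an HMM whose transition and emission probabilities are conditioned on an input sequence — can be sketched with a toy forward-algorithm likelihood computation. This is a minimal illustration, not the paper's implementation: it substitutes simple linear-softmax parameterizations for the paper's neural networks, and all variable names, shapes, and the discrete-output assumption are hypothetical.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over the last axis."""
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def iohmm_loglik(inputs, outputs, pi, W_trans, W_emit):
    """Forward-algorithm log-likelihood for a toy IOHMM (illustrative only).

    inputs  : (T, d) input features u_t (e.g. per-frame audio features)
    outputs : (T,)   discrete observations y_t in {0..K-1}
    pi      : (S,)   initial state distribution
    W_trans : (S, d, S) weights; softmax(u_t @ W_trans[s]) plays the role of
              P(s_t = . | s_{t-1} = s, u_t)  -- input-conditioned transitions
    W_emit  : (S, d, K) weights; softmax(u_t @ W_emit[s]) plays the role of
              P(y_t = . | s_t = s, u_t)      -- input-conditioned emissions
    """
    # Initial step: emission probabilities conditioned on the first input.
    b = softmax(inputs[0] @ W_emit)[:, outputs[0]]      # (S,)
    alpha = pi * b
    c = alpha.sum()                                     # rescale to avoid underflow
    loglik = np.log(c)
    alpha /= c
    for t in range(1, len(outputs)):
        A = softmax(inputs[t] @ W_trans)                # (S, S) transitions given u_t
        b = softmax(inputs[t] @ W_emit)[:, outputs[t]]  # (S,) emissions given u_t
        alpha = (alpha @ A) * b                         # forward recursion
        c = alpha.sum()
        loglik += np.log(c)
        alpha /= c
    return loglik

# Tiny random example (all sizes made up for illustration).
rng = np.random.default_rng(0)
T, d, S, K = 20, 4, 3, 5
inputs = rng.normal(size=(T, d))
outputs = rng.integers(0, K, size=T)
pi = np.full(S, 1.0 / S)
W_trans = rng.normal(size=(S, d, S))
W_emit = rng.normal(size=(S, d, K))
ll = iohmm_loglik(inputs, outputs, pi, W_trans, W_emit)
print(ll)
```

In the paper the conditional distributions are produced by trained neural networks and the parameters are fit with EM; here the weights are random and only the input-conditioned forward pass is shown, since that is what distinguishes an IOHMM from a standard HMM.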