Title :
Speaker independent continuous voice to facial animation on mobile platforms
Author :
Feldhoffer, Gergely
Author_Institution :
Pazmany Peter Catholic Univ., Budapest
Abstract :
In this paper, a speaker-independent training method for continuous voice-to-facial-animation systems is presented. An audiovisual database containing multiple voices but only one speaker's video information was created using dynamic time warping, which aligns the single speaker's video information to the other speakers' voices. The quality of the fit is measured with subjective and objective tests, and the suitability of implementations on mobile devices is discussed.
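The dynamic time warping step mentioned in the abstract can be illustrated with a minimal sketch. The code below is not the paper's implementation; it assumes generic 1-D feature sequences (the paper's actual acoustic features are not specified here) and shows how DTW produces a warping path that could map one speaker's frames onto a reference speaker's timeline.

```python
import numpy as np

def dtw_path(a, b):
    """Classic dynamic time warping between two 1-D feature sequences.

    Returns the accumulated alignment cost and the optimal warping path
    as (index_in_a, index_in_b) pairs. Such a path could be used to
    align a new speaker's audio features to the reference speaker's
    video frames (illustrative only).
    """
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])           # local distance
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    # Backtrack the optimal path from the end to the start
    path = []
    i, j = n, m
    while (i, j) != (0, 0):
        path.append((i - 1, j - 1))
        steps = [(i - 1, j - 1), (i - 1, j), (i, j - 1)]
        i, j = min(steps, key=lambda p: cost[p])
    return cost[n, m], path[::-1]
```

For example, `dtw_path([1, 2, 3], [1, 2, 2, 3])` aligns the repeated `2` in the second sequence to a single frame in the first, yielding zero total cost; an analogous path lets one speaker's video frames be stretched or compressed to follow another speaker's speech timing.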
Keywords :
audio databases; computer animation; neural nets; speaker recognition; video coding; visual databases; MPEG-4; audiovisual database; continuous voice; dynamic time warping; facial animation systems; mobile platforms; neural network; speaker independent training method; video information; Audio databases; Data mining; Deafness; Facial animation; Feature extraction; Principal component analysis; Speech; Testing; Video compression; Video recording; DTW; MPEG-4; facial animation; neural network
Conference_Titel :
ELMAR, 2007
Conference_Location :
Zadar
Print_ISBN :
978-953-7044-05-3
DOI :
10.1109/ELMAR.2007.4418820