DocumentCode :
700101
Title :
How can acoustic-to-articulatory maps be constrained?
Author :
Laprie, Yves ; Maragos, Petros ; Schoentgen, Jean
Author_Institution :
LORIA, Nancy, France
fYear :
2008
fDate :
25-29 Aug. 2008
Firstpage :
1
Lastpage :
5
Abstract :
The objective of the presentation is to examine issues in constraining acoustic-to-articulatory maps by means of facial data and other a priori knowledge about speech production. The constraints considered are the insertion of data on lip opening, spread, and protrusion, together with other facial data and constraints on vocal tract length. The a priori knowledge taken into account concerns the deformation, and the speed of deformation, of the vocal tract, as well as phonetic rules regarding vowel-typical tract shapes. The inverse maps tested are formant-to-area and formant-to-parametric sagittal profile maps, as well as audio/visual-to-electromagnetic coil trajectory maps. The results obtained by mapping audio-only data are compared with those obtained when audio is combined with the other data, and discussed.
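To make the idea of constrained inversion concrete, the following minimal sketch (not the authors' implementation) frames one frame of formant-to-articulatory inversion as regularized optimization: predicted formants must match measured ones while penalty terms encode a measured lip opening and a limit on the speed of articulatory deformation. The forward model formants_from_areas, the parameterization, and all weights are illustrative assumptions.

import numpy as np
from scipy.optimize import minimize

def formants_from_areas(params):
    """Hypothetical forward model mapping articulatory parameters to (F1, F2, F3) in Hz.
    A real system would use an acoustic tube model of the vocal tract; a fixed linear
    surrogate keeps this sketch runnable."""
    A = np.array([[600., -40.,  30.,  10.],
                  [200., 350., -60.,  20.],
                  [100.,  80., 400., -30.]])
    return np.array([500., 1500., 2500.]) + A @ params

def inversion_cost(params, f_meas, lip_meas, prev_params, w_lip=1.0, w_speed=0.1):
    """Weighted cost: formant mismatch + lip-opening constraint + deformation-speed penalty."""
    f_err = formants_from_areas(params) - f_meas
    lip_err = params[0] - lip_meas          # assume params[0] encodes lip opening
    speed = params - prev_params            # frame-to-frame articulatory change
    return (np.sum((f_err / f_meas) ** 2)
            + w_lip * lip_err ** 2
            + w_speed * np.sum(speed ** 2))

# One inversion step for a single frame; box bounds stand in for anatomical limits
# such as admissible vocal tract length and cross-sectional areas.
f_measured = np.array([550., 1600., 2600.])   # measured formants (Hz)
lip_opening = 0.4                             # measured lip opening (arbitrary units)
previous = np.zeros(4)                        # articulatory state of the previous frame
bounds = [(-1.0, 1.0)] * 4

result = minimize(inversion_cost, x0=previous,
                  args=(f_measured, lip_opening, previous),
                  bounds=bounds, method="L-BFGS-B")
print("estimated articulatory parameters:", result.x)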
Keywords :
acoustic signal processing; audio-visual systems; speech processing; acoustic-to-articulatory maps; audio-visual-to-electromagnetic coil trajectory map; formant-to-area sagittal profile map; formant-to-parametric sagittal profile map; inverse maps; phonetic rules; speech production; vocal tract deformation; vowel-typical tract shape; Acoustics; Data models; Face; Hidden Markov models; Jacobian matrices; Shape; Speech;
fLanguage :
English
Publisher :
ieee
Conference_Titel :
2008 16th European Signal Processing Conference
Conference_Location :
Lausanne
ISSN :
2219-5491
Type :
conf
Filename :
7080633