DocumentCode :
2174793
Title :
A subject-independent acoustic-to-articulatory inversion
Author :
Ghosh, Prasanta Kumar ; Narayanan, Shrikanth S.
Author_Institution :
Dept. of Electr. Eng., Univ. of Southern California, Los Angeles, CA, USA
fYear :
2011
fDate :
22-27 May 2011
Firstpage :
4624
Lastpage :
4627
Abstract :
Acoustic-to-articulatory inversion is usually performed in a subject-dependent manner, i.e., the inversion procedure may not work well when parallel acoustic and articulatory training data are not available for the subjects in the test set. In this paper, we propose a subject-independent acoustic-to-articulatory inversion procedure; the proposed scheme requires acoustic-articulatory training data from only one subject and uses a generic acoustic model to perform acoustic-to-articulatory inversion for any arbitrary test subject. Experimental results on the MOCHA database show that the subject-independent inversion procedure can achieve an inversion accuracy close to that of the subject-dependent procedure, especially for the lip aperture, tongue tip, and tongue body articulatory trajectories. We also investigate various articulatory features to analyze the effectiveness of the proposed inversion procedure.
Keywords :
speech processing; MOCHA database; articulatory training data; lip aperture; parallel acoustic; subject-independent acoustic-to-articulatory inversion; tongue body articulatory trajectories; tongue tip trajectories; Accuracy; Acoustics; Speech; TV; Tongue; Training; Trajectory; acoustic-to-articulatory inversion; electromagnetic articulography; generalized smoothness criterion; tract variables;
fLanguage :
English
Publisher :
ieee
Conference_Titel :
Acoustics, Speech and Signal Processing (ICASSP), 2011 IEEE International Conference on
Conference_Location :
Prague
ISSN :
1520-6149
Print_ISBN :
978-1-4577-0538-0
Electronic_ISBN :
1520-6149
Type :
conf
DOI :
10.1109/ICASSP.2011.5947385
Filename :
5947385