Title :
A deep recurrent approach for acoustic-to-articulatory inversion
Author :
Peng Liu ; Quanjie Yu ; Zhiyong Wu ; Shiyin Kang ; Helen Meng ; Lianhong Cai
Author_Institution :
Shenzhen Key Lab. of Inf. Sci. & Technol., Tsinghua Univ., Shenzhen, China
Abstract :
To solve the acoustic-to-articulatory inversion problem, this paper proposes a deep bidirectional long short-term memory recurrent neural network and a deep recurrent mixture density network. The articulatory parameters of the current frame may be correlated with acoustic features many frames before or after it, so a traditional pre-designed fixed-length context window may be either insufficient or redundant to cover such correlations. The advantage of a recurrent neural network is that it can learn the proper context information on its own, without requiring an externally specified context window. Experimental results indicate that the recurrent models produce more accurate predictions for acoustic-to-articulatory inversion than a deep neural network with a fixed-length context window. Furthermore, the articulatory trajectories predicted by the recurrent network are smooth. An average root mean square error of 0.816 mm on the MNGU0 test set is achieved without any post-filtering, which is state-of-the-art inversion accuracy.
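The core idea of the abstract is that a bidirectional recurrent layer lets each output frame draw on both past and future acoustic context, rather than a hand-chosen fixed window. A minimal numpy sketch of a bidirectional LSTM forward pass illustrates this; all dimensions and random weights here are toy illustrations, not the paper's actual architecture or parameters.

```python
import numpy as np

def lstm_forward(x, W, U, b):
    """Run one LSTM layer over a sequence.

    x: (T, d_in) input frames; W: (4h, d_in), U: (4h, h), b: (4h,).
    Stacked gate order: input, forget, output, candidate.
    Returns hidden states of shape (T, h).
    """
    h_dim = U.shape[1]
    h = np.zeros(h_dim)
    c = np.zeros(h_dim)
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    outputs = []
    for x_t in x:
        z = W @ x_t + U @ h + b
        i = sigmoid(z[:h_dim])            # input gate
        f = sigmoid(z[h_dim:2 * h_dim])   # forget gate
        o = sigmoid(z[2 * h_dim:3 * h_dim])  # output gate
        g = np.tanh(z[3 * h_dim:])        # candidate cell state
        c = f * c + i * g
        h = o * np.tanh(c)
        outputs.append(h)
    return np.stack(outputs)

def bidirectional_lstm(x, params_fw, params_bw):
    """Concatenate forward and backward states so each frame sees
    context from both earlier and later acoustic frames."""
    h_fw = lstm_forward(x, *params_fw)
    h_bw = lstm_forward(x[::-1], *params_bw)[::-1]  # run reversed, then realign
    return np.concatenate([h_fw, h_bw], axis=1)

# Toy sizes: 8-dim acoustic frames, 4 hidden units per direction, 10 frames.
rng = np.random.default_rng(0)
d_in, h_dim, T = 8, 4, 10
make = lambda: (rng.normal(scale=0.1, size=(4 * h_dim, d_in)),
                rng.normal(scale=0.1, size=(4 * h_dim, h_dim)),
                np.zeros(4 * h_dim))
x = rng.normal(size=(T, d_in))
H = bidirectional_lstm(x, make(), make())
print(H.shape)  # (10, 8): T frames, 2 * h_dim features per frame
```

In the paper's setting, a deep stack of such bidirectional layers would feed a regression output (or a mixture density output layer) predicting the articulatory parameters for each frame.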
Keywords :
recurrent neural nets; speech synthesis; MNGU0 test set; acoustic-to-articulatory inversion problem; deep bidirectional long short-term memory recurrent neural network; deep recurrent mixture density network; pre-designed fixed-length context window; root mean square error; Acoustics; Context; Correlation; Hidden Markov models; Recurrent neural networks; Speech; Trajectory; layer-wise pre-training; long short-term memory (LSTM); mixture density network (MDN); recurrent neural network (RNN);
Conference_Titel :
Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on
Conference_Location :
South Brisbane, QLD, Australia
DOI :
10.1109/ICASSP.2015.7178812