DocumentCode :
3126125
Title :
A method to extract articulatory parameters from the speech signal using neural networks
Author :
Branco, Antonio ; Tomé, Ana ; Teixeira, Antonio ; Vaz, Francisco
Author_Institution :
Dept. de Electron. e Telecoms, Aveiro Univ., Portugal
Volume :
2
fYear :
1997
fDate :
2-4 Jul 1997
Firstpage :
583
Abstract :
We present a method that uses artificial neural networks for acoustic-to-articulatory mapping. An assembly of Kohonen (1982) neural nets is used: in the first stage, a network maps cepstral values; each of its neurons contains a second-stage subnet that maps the articulatory space. The method allows both acoustic-to-articulatory mapping, ensuring smoothly varying vocal tract shapes, and the study of the nonuniqueness problem.
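The abstract describes a two-stage assembly of Kohonen maps. Below is a minimal sketch of that idea, not the authors' implementation: a first-stage self-organising map is trained on cepstral vectors, and each first-stage neuron owns a small second-stage map trained on the articulatory parameters of the frames it wins. The data dimensions, map sizes, learning schedule, and all variable names are assumptions for illustration.

```python
# Hypothetical sketch of a two-stage Kohonen (SOM) assembly for
# acoustic-to-articulatory mapping; sizes and schedules are assumed.
import numpy as np


class SOM:
    """Plain rectangular Kohonen map with a Gaussian neighbourhood."""

    def __init__(self, rows, cols, dim, seed=0):
        rng = np.random.default_rng(seed)
        self.rows, self.cols = rows, cols
        self.weights = rng.normal(size=(rows, cols, dim))
        # Grid coordinates used by the neighbourhood function.
        self.grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                         indexing="ij"), axis=-1)

    def winner(self, x):
        d = np.linalg.norm(self.weights - x, axis=-1)
        return np.unravel_index(np.argmin(d), d.shape)

    def train(self, data, epochs=20, lr0=0.5, sigma0=None):
        sigma0 = sigma0 or max(self.rows, self.cols) / 2.0
        n_steps = epochs * len(data)
        step = 0
        for _ in range(epochs):
            for x in data:
                t = step / max(n_steps - 1, 1)
                lr = lr0 * (1.0 - t)               # decaying learning rate
                sigma = sigma0 * (1.0 - t) + 0.5   # shrinking neighbourhood
                win = np.array(self.winner(x))
                dist2 = np.sum((self.grid - win) ** 2, axis=-1)
                h = np.exp(-dist2 / (2.0 * sigma ** 2))[..., None]
                self.weights += lr * h * (x - self.weights)
                step += 1


# Hypothetical paired training frames (random stand-ins for real data).
rng = np.random.default_rng(1)
cepstra = rng.normal(size=(500, 12))        # e.g. 12 LPC-derived cepstra
articulation = rng.normal(size=(500, 7))    # e.g. 7 vocal-tract parameters

# Stage 1: acoustic map over cepstral vectors.
acoustic_map = SOM(6, 6, dim=12)
acoustic_map.train(cepstra)

# Stage 2: one small articulatory sub-map per first-stage neuron, trained on
# the articulatory vectors of the frames that neuron wins.  Keeping several
# codebook entries per neuron is what exposes the nonuniqueness problem: one
# acoustic cell can hold more than one plausible vocal-tract shape.
subnets = {}
for r in range(acoustic_map.rows):
    for c in range(acoustic_map.cols):
        subnets[(r, c)] = SOM(2, 2, dim=7, seed=r * 10 + c)

wins = [acoustic_map.winner(x) for x in cepstra]
for cell, subnet in subnets.items():
    frames = articulation[[i for i, w in enumerate(wins) if w == cell]]
    if len(frames):
        subnet.train(frames, epochs=10)

# Acoustic-to-articulatory lookup for a new frame: find the acoustic winner,
# then read the candidate vocal-tract shapes from its sub-map.
test_frame = cepstra[0]
cell = acoustic_map.winner(test_frame)
candidates = subnets[cell].weights.reshape(-1, 7)
print("acoustic cell:", cell, "candidate articulatory vectors:", candidates.shape)
```

In such a scheme, choosing among the candidate shapes frame by frame (for instance by preferring the shape closest to the previous frame's) is one way to obtain the smoothly varying vocal tract trajectories the abstract mentions.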
Keywords :
acoustic signal processing; cepstral analysis; feature extraction; learning (artificial intelligence); linear predictive coding; self-organising feature maps; speech coding; speech synthesis; Kohonen neural nets; LPC derived cepstral parameters; acoustic to articulatory mapping; articulatory parameters extraction; articulatory space; artificial neural networks; cepstral values; neural networks training; nonuniqueness problem; smooth varying vocal tract shapes; speech signal; speech synthesis; Assembly; Cepstral analysis; Linear predictive coding; Lips; Neural networks; Neurons; Shape; Signal mapping; Speech; Tongue;
fLanguage :
English
Publisher :
ieee
Conference_Titel :
Proceedings of the 1997 13th International Conference on Digital Signal Processing (DSP 97)
Conference_Location :
Santorini
Print_ISBN :
0-7803-4137-6
Type :
conf
DOI :
10.1109/ICDSP.1997.628417
Filename :
628417