Title :
An articulatory-formant speech synthesizer via a neural network
Author :
Sorokin, V.N.; Miller, A.G.
Author_Institution :
Institute for Information Transmission Problems, Moscow, Russia
Abstract :
An articulatory-formant synthesizer oriented toward implementation as a neural network is proposed. The speech output is the sum of the formant filters' responses, excited by an appropriate source. A model of articulatory dynamics is adopted for the synthesizer. A fast eigenfrequency-calculation procedure is presented that reduces the computational cost of synthesis to about three million operations per second of speech. The synthesizer was simulated on a PC in FORTRAN. The synthesis model appears well suited to a neural network implementation, and a procedure for training the neural network is proposed.
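For illustration, a minimal sketch of the parallel formant architecture the abstract describes: speech output formed as the sum of formant filter responses to a common excitation source. The second-order resonator form, the impulse-train source, and all numeric values (sampling rate, formant frequencies, bandwidths) are assumptions for the example, not the authors' FORTRAN implementation or their neural-network mapping.

```python
# Minimal sketch of a parallel formant synthesizer: the output is the sum of
# second-order resonator (formant) filter responses to a shared excitation
# source. Illustrative only; parameters and the impulse-train source are
# assumed, not taken from the paper.
import numpy as np
from scipy.signal import lfilter

FS = 10_000  # sampling rate, Hz (assumed)

def resonator_coeffs(freq_hz, bw_hz, fs=FS):
    """Second-order digital resonator centred at freq_hz with bandwidth bw_hz."""
    r = np.exp(-np.pi * bw_hz / fs)
    theta = 2.0 * np.pi * freq_hz / fs
    a = [1.0, -2.0 * r * np.cos(theta), r * r]
    b = [1.0 - r]  # rough gain normalisation
    return b, a

def glottal_source(f0_hz, duration_s, fs=FS):
    """Impulse train at the fundamental frequency (stand-in for a glottal model)."""
    n = int(duration_s * fs)
    src = np.zeros(n)
    period = int(fs / f0_hz)
    src[::period] = 1.0
    return src

def synthesize(formants, f0_hz=120.0, duration_s=0.5):
    """Sum the responses of one resonator per formant to the common source."""
    src = glottal_source(f0_hz, duration_s)
    out = np.zeros_like(src)
    for freq, bw, amp in formants:
        b, a = resonator_coeffs(freq, bw)
        out += amp * lfilter(b, a, src)
    return out / np.max(np.abs(out))

# Example: a rough /a/-like vowel from three formants (frequency, bandwidth, amplitude).
vowel = synthesize([(730, 90, 1.0), (1090, 110, 0.6), (2440, 160, 0.3)])
```

In the paper's scheme, the formant frequencies would come from the articulatory model via the fast eigenfrequency calculation; here they are simply fixed constants to keep the sketch self-contained.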
Keywords :
digital simulation; eigenvalues and eigenfunctions; microcomputer applications; neural nets; speech synthesis; FORTRAN; articulatory dynamics; articulatory-formant speech synthesizer; computational cost; eigenfrequency calculation; excitation source; filter response; microcomputer simulation; neural network; training; Acoustics; Computer networks; Equations; Network synthesis; Neural networks; Parallel processing; Shape; Speech coding; Speech synthesis; Synthesizers;
Conference_Titel :
Neuroinformatics and Neurocomputers, 1992, RNNS/IEEE Symposium on
Conference_Location :
Rostov-on-Don
Print_ISBN :
0-7803-0809-3
DOI :
10.1109/RNNS.1992.268614