Title :
Integrated automatic expression prediction and speech synthesis from text
Author :
Chen, Langzhou ; Gales, Mark J.F. ; Braunschweiler, Norbert ; Akamine, Masami ; Knill, Kate
Author_Institution :
Cambridge Res. Lab., Toshiba Res. Eur. Ltd., Cambridge, UK
Abstract :
Getting a text-to-speech (TTS) synthesis system to read lively, animated stories the way a human does is very difficult. To generate expressive speech, the system can be divided into two parts: predicting expressive information from the text, and synthesising the speech with a particular expression. Traditionally these two components have been studied separately. This paper proposes an integrated approach in which the expressive synthesis space and the training data are shared across the two expressive components. This approach has several advantages, including a simplified expression labelling process, support for a continuous expressive synthesis space, and joint training of the expression predictor and the speech synthesiser to maximise the likelihood of the TTS system given the training data. Synthesis experiments indicated that the proposed approach generated far more expressive speech than both a neutral TTS system and one in which the expression was selected at random. The experimental results also showed the advantage of a continuous expressive synthesis space over a discrete one.
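To make the integrated training idea concrete, the following is a minimal numerical sketch, not the authors' implementation. It assumes a linear predictor mapping text features to a point in a continuous expression space (the paper's keywords suggest a neural network predictor), a CAT-style synthesiser whose Gaussian mean is a weighted combination of cluster means, and a shared unit-variance Gaussian likelihood so that both components are trained jointly on the same data. All dimensions, variable names (W, M, lam) and the toy data are illustrative assumptions; the sketch is in Python with NumPy.

# A minimal sketch of joint training of an expression predictor and a
# CAT-style synthesiser over a shared continuous expression space.
# This is NOT the paper's implementation: the linear predictor, the
# unit-variance Gaussian output model, and all dimensions and data below
# are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

TEXT_DIM = 20      # text feature vector per utterance (assumed)
EXPR_DIM = 4       # continuous expression space = number of CAT clusters (assumed)
ACOUSTIC_DIM = 12  # acoustic feature vector, a stand-in for HMM state means (assumed)

# Toy training data: text features X and matching acoustic targets O.
N = 200
X = rng.normal(size=(N, TEXT_DIM))
true_W = 0.3 * rng.normal(size=(EXPR_DIM, TEXT_DIM))
true_M = rng.normal(size=(ACOUSTIC_DIM, EXPR_DIM))
O = X @ true_W.T @ true_M.T + 0.1 * rng.normal(size=(N, ACOUSTIC_DIM))

# Jointly trained parameters:
#   W - expression predictor: text features -> continuous expression weights
#   M - CAT cluster means: each column is one cluster's mean vector
W = 0.1 * rng.normal(size=(EXPR_DIM, TEXT_DIM))
M = 0.1 * rng.normal(size=(ACOUSTIC_DIM, EXPR_DIM))

lr = 0.01
for step in range(501):
    lam = X @ W.T        # predicted expression weights (one point per utterance)
    pred = lam @ M.T     # synthesiser mean: weighted combination of cluster means
    err = pred - O
    # Negative log-likelihood of a unit-variance Gaussian, up to a constant:
    # the single objective shared by predictor and synthesiser.
    nll = 0.5 * np.mean(np.sum(err ** 2, axis=1))
    if step % 100 == 0:
        print(f"step {step:3d}  NLL {nll:.3f}")
    # Gradients of the shared objective with respect to both components.
    grad_M = err.T @ lam / N
    grad_W = (err @ M).T @ X / N
    M -= lr * grad_M
    W -= lr * grad_W

# At synthesis time, a new sentence's text features map to a point in the
# continuous expression space, which then defines the acoustic model;
# no discrete expression label needs to be chosen.
new_text = rng.normal(size=TEXT_DIM)
print("predicted expression weights:", new_text @ W.T)

In this toy setup the continuous weight vector plays the role of the abstract's continuous expressive synthesis space: the predictor and the synthesiser are updated against the same likelihood objective, which is the joint training idea the abstract describes.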
Keywords :
speech synthesis; automatic expression prediction; expression labelling process; expression predictor; expressive speech; expressive synthesis; speech synthesiser; text to speech synthesis system; Hidden Markov models; Pragmatics; Speech; Speech synthesis; Training; Training data; Vectors; audiobook; cluster adaptive training; expressive speech synthesis; hidden Markov model; neural network
Conference_Title :
2013 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
Conference_Location :
Vancouver, BC
DOI :
10.1109/ICASSP.2013.6639218