DocumentCode
730694
Title
Deep neural networks employing Multi-Task Learning and stacked bottleneck features for speech synthesis
Author
Wu, Zhizheng; Valentini-Botinhao, Cassia; Watts, Oliver; King, Simon
Author_Institution
Centre for Speech Technology Research, University of Edinburgh, Edinburgh, UK
fYear
2015
fDate
19-24 April 2015
Firstpage
4460
Lastpage
4464
Abstract
Deep neural networks (DNNs) use a cascade of hidden representations to learn complex mappings from input to output features. In particular, they can learn the mapping from text-based linguistic features to speech acoustic features, and so perform text-to-speech synthesis. Recent results suggest that DNNs can produce more natural synthetic speech than conventional HMM-based statistical parametric systems. In this paper, we show that the hidden representation used within a DNN can be improved through the use of Multi-Task Learning, and that stacking multiple frames of hidden layer activations (stacked bottleneck features) also leads to improvements. Experimental results confirm the effectiveness of the proposed methods, and listening tests show that stacked bottleneck features in particular offer a significant improvement over both a baseline DNN and a benchmark HMM system.
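To make the abstract's two techniques concrete, the following is a minimal, hypothetical PyTorch sketch (not the authors' implementation; the layer sizes, tanh activations, secondary-task head, and context width are all illustrative assumptions). It shows a DNN whose shared trunk ends in a narrow bottleneck layer, two jointly trained output heads realising Multi-Task Learning, and a helper that concatenates bottleneck activations from neighbouring frames to form stacked bottleneck features for a second-stage network.

import torch
import torch.nn as nn

class MultiTaskBottleneckDNN(nn.Module):
    def __init__(self, n_linguistic=600, n_acoustic=187,
                 n_secondary=60, n_hidden=1024, n_bottleneck=64):
        super().__init__()
        # Shared trunk: linguistic features -> narrow bottleneck layer.
        self.trunk = nn.Sequential(
            nn.Linear(n_linguistic, n_hidden), nn.Tanh(),
            nn.Linear(n_hidden, n_hidden), nn.Tanh(),
            nn.Linear(n_hidden, n_bottleneck), nn.Tanh(),  # bottleneck
        )
        # Two heads trained jointly: gradients from both tasks shape
        # the shared hidden representation (Multi-Task Learning).
        self.acoustic_head = nn.Linear(n_bottleneck, n_acoustic)
        self.secondary_head = nn.Linear(n_bottleneck, n_secondary)

    def forward(self, x):
        b = self.trunk(x)  # bottleneck activations, one row per frame
        return self.acoustic_head(b), self.secondary_head(b), b

def stack_bottleneck_frames(b, context=4):
    # b: (T, n_bottleneck) bottleneck activations for one utterance.
    # Returns (T, (2 * context + 1) * n_bottleneck) stacked features,
    # padding utterance edges by repeating the first/last frame.
    T = b.shape[0]
    padded = torch.cat([b[:1].expand(context, -1), b,
                        b[-1:].expand(context, -1)], dim=0)
    return torch.cat([padded[i:i + T] for i in range(2 * context + 1)], dim=1)

# Example: 100 frames of 600-dimensional linguistic features.
model = MultiTaskBottleneckDNN()
x = torch.randn(100, 600)
y_acoustic, y_secondary, bottleneck = model(x)
stacked = stack_bottleneck_frames(bottleneck.detach())  # (100, 9 * 64)

A second-stage DNN would then map the stacked features (optionally concatenated with the original linguistic features) to the final acoustic features; the joint training objective would be a weighted sum of per-task losses, e.g. MSE(acoustic) + lambda * MSE(secondary).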
Keywords
learning (artificial intelligence); neural nets; speech synthesis; complex mapping; deep neural networks; hidden representation; multitask learning; speech acoustic feature; stacked bottleneck features; text-based linguistic feature; text-to-speech synthesis; Acoustics; Context; Hidden Markov models; Neural networks; Pragmatics; Speech; Speech synthesis; acoustic model; bottleneck feature; deep neural network; multi-task learning
fLanguage
English
Publisher
ieee
Conference_Titel
2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
Conference_Location
South Brisbane, QLD, Australia
Type
conf
DOI
10.1109/ICASSP.2015.7178814
Filename
7178814
Link To Document