Title :
Voice conversion using duration-embedded bi-HMMs for expressive speech synthesis
Author :
Wu, Chung-Hsien ; Hsia, Chi-Chun ; Liu, Te-Hsien ; Wang, Jhing-Fa
Author_Institution :
Department of Computer Science and Information Engineering, National Cheng Kung University, Tainan
fDate :
7/1/2006
Abstract :
This paper presents an expressive voice conversion model, the duration-embedded bi-HMM (DeBi-HMM), used as a post-processing stage of a text-to-speech (TTS) system for expressive speech synthesis. The name reflects the duration-embedded structure of the two HMMs that model the source and target speech signals, respectively. Joint estimation of the source and target HMMs is exploited for spectrum conversion from neutral to expressive speech, and a gamma distribution is embedded as the duration model for each state in both HMMs. Expressive style-dependent decision trees achieve the prosodic conversion, and the STRAIGHT algorithm is adopted for analysis and synthesis. A set of small speech databases, one per expressive style, is designed and collected to train the DeBi-HMM voice conversion models. Several experiments with statistical hypothesis testing are conducted to evaluate the quality of the synthetic speech as perceived by human subjects. Compared with previous voice conversion methods, the proposed method shows encouraging potential for expressive speech synthesis.
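Example :
For illustration only, the following Python sketch shows the general idea of attaching a gamma distribution as a state-duration model, as the abstract describes for each DeBi-HMM state. It is not the authors' implementation; the durations, function names, and parameter choices are hypothetical assumptions, and only the use of a gamma density over state occupancy (in frames) reflects the paper.

# Minimal sketch of a gamma state-duration model (illustrative, not the
# paper's code). Assumes per-state occupancy durations, in frames, have
# been collected from training alignments.
import numpy as np
from scipy.stats import gamma

def fit_state_duration_model(durations):
    """Fit a gamma distribution (shape, scale) to a state's observed
    occupancy durations, with the location parameter fixed at 0."""
    shape, _loc, scale = gamma.fit(durations, floc=0)
    return shape, scale

def duration_log_likelihood(d, shape, scale):
    """Log-probability of occupying the state for d frames."""
    return gamma.logpdf(d, a=shape, scale=scale)

# Hypothetical per-state durations (frames) from forced alignment.
durations = np.array([8, 10, 12, 9, 11, 14, 10, 13])
shape, scale = fit_state_duration_model(durations)
print(f"shape={shape:.2f}, scale={scale:.2f}, "
      f"logp(11 frames)={duration_log_likelihood(11, shape, scale):.3f}")

A duration score of this form can be added to a state sequence's log-likelihood during decoding, which is what an embedded duration model buys over the implicit geometric durations of a plain HMM.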
Keywords :
decision trees; gamma distribution; hidden Markov models; speech synthesis; statistical testing; duration-embedded bi-HMM; expressive speech synthesis; gamma distribution; prosodic conversion; spectrum conversion; speech databases; speech signals; statistical hypothesis testing; STRAIGHT algorithm; style-dependent decision trees; text-to-speech post processing; voice conversion; Algorithm design and analysis; Computer science; Decision trees; Hidden Markov models; Humans; Signal synthesis; Spatial databases; Speech analysis; Speech synthesis; Testing; bi-HMM voice conversion; embedded duration model; expressive speech synthesis; prosody conversion
Journal_Title :
IEEE Transactions on Audio, Speech, and Language Processing
DOI :
10.1109/TASL.2006.876112