DocumentCode
3038063
Title
Speech synthesis from surface electromyogram signal
Author
Lam, Yuet-Ming; Mak, Man-Wai; Leong, Philip Heng-Wai
Author_Institution
Dept. of Comput. Sci. & Eng., Chinese Univ. of Hong Kong
fYear
2005
fDate
21 Dec. 2005
Firstpage
749
Lastpage
754
Abstract
This paper presents a methodology that uses surface electromyogram (SEMG) signals recorded from the cheek and chin to synthesize speech. Simultaneously recorded speech and SEMG signals are blocked into frames and transformed into features. Linear predictive coding (LPC) coefficients and short-time Fourier transform coefficients are chosen as the speech and SEMG features, respectively. A neural network is applied to convert SEMG features into speech features on a frame-by-frame basis, and the converted speech features are used to reconstruct the original speech. Feature selection, the conversion methodology, and experimental results are discussed. The results show that phoneme-based feature extraction and frame-based feature conversion could be applied to SEMG-based continuous speech synthesis.
Keywords
Fourier transforms; electromyography; linear predictive coding; speech coding; speech synthesis; frame-based feature conversion; neural networks; phoneme-based feature extraction; short-time Fourier transform coefficients; surface electromyogram signal; feature extraction; network synthesis; signal synthesis; speech recognition; working environment noise
fLanguage
English
Publisher
ieee
Conference_Title
Proceedings of the Fifth IEEE International Symposium on Signal Processing and Information Technology, 2005
Conference_Location
Athens, Greece
Print_ISBN
0-7803-9313-9
Type
conf
DOI
10.1109/ISSPIT.2005.1577192
Filename
1577192