Title :
Using Spatial Audio Cues from Speech Excitation for Meeting Speech Segmentation
Author :
Cheng, Eva ; Burnett, Ian ; Ritz, Christian
Author_Institution :
Sch. of Electr., Comput. & Telecommun. Eng., Univ. of Wollongong, NSW
Abstract :
Multiparty meetings generally involve stationary participants; participant location information can thus be used to segment the recorded meeting speech into each speaker's 'turn' for meeting 'browsing'. To represent speaker location information from speech, previous research showed that the most reliable time delay estimates are extracted from the Hilbert envelope of the linear prediction residual signal. The authors' past work proposed the use of spatial audio cues to represent speaker location information. This paper proposes extracting spatial audio cues from the Hilbert envelope of the speech residual to indicate changing speaker location for meeting speech segmentation. Experiments conducted on recordings made in a real acoustic environment show that spatial cues estimated from the Hilbert envelope are more consistent across frequency subbands and distinguish more clearly between spatially distributed speakers than spatial cues estimated from the recorded speech or the residual signal.
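Code_Sketch :
A minimal sketch (Python, not from the paper) of the analysis chain the abstract describes: compute the linear prediction (LP) residual of each microphone channel, take its Hilbert envelope, and estimate the inter-channel time delay by cross-correlating the envelopes. The frame length, LP order, sample rate, and the plain cross-correlation delay estimator are illustrative assumptions; the paper derives spatial audio cues rather than a single cross-correlation delay.

import numpy as np
from scipy.signal import hilbert, lfilter
from scipy.linalg import solve_toeplitz

def lp_residual(frame, order=12):
    """LP residual (excitation estimate) of one speech frame, autocorrelation method."""
    r = np.correlate(frame, frame, mode='full')[len(frame) - 1:]
    # Solve the Toeplitz normal equations R a = r for the LP coefficients.
    a = solve_toeplitz((r[:order], r[:order]), r[1:order + 1])
    # Inverse-filter with A(z) = 1 - sum_k a_k z^-k to obtain the residual.
    return lfilter(np.concatenate(([1.0], -a)), [1.0], frame)

def hilbert_envelope(x):
    """Magnitude of the analytic signal, i.e. the Hilbert envelope."""
    return np.abs(hilbert(x))

def envelope_delay(frame_l, frame_r, fs, order=12):
    """Delay (seconds) of frame_r relative to frame_l, estimated from the
    cross-correlation of the Hilbert envelopes of their LP residuals
    (positive if frame_r lags frame_l)."""
    env_l = hilbert_envelope(lp_residual(frame_l, order))
    env_r = hilbert_envelope(lp_residual(frame_r, order))
    xcorr = np.correlate(env_l - env_l.mean(), env_r - env_r.mean(), mode='full')
    lag = np.argmax(xcorr) - (len(frame_r) - 1)
    return -lag / fs

if __name__ == "__main__":
    # Two synthetic channels: the right channel is the left channel delayed by 20 samples.
    fs, delay, n = 16000, 20, 1024
    rng = np.random.default_rng(0)
    src = lfilter([1.0], [1.0, -0.95], rng.standard_normal(4 * n))
    left = src[n:2 * n]
    right = src[n - delay:2 * n - delay]
    print("estimated delay (samples):", envelope_delay(left, right, fs) * fs)  # expect about 20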
Keywords :
audio signal processing; speech processing; Hilbert envelope; linear prediction residual signal; meeting speech segmentation; multiparty meetings; spatial audio cues; spatially distributed speakers; speaker location information; speech excitation; Acoustical engineering; Audio coding; Audio recording; Data mining; Delay effects; Delay estimation; Frequency estimation; Loudspeakers; Speech analysis; Telecommunication computing;
Conference_Titel :
2006 8th International Conference on Signal Processing
Conference_Location :
Beijing
Print_ISBN :
0-7803-9736-3
Electronic_ISBN :
0-7803-9736-3
DOI :
10.1109/ICOSP.2006.346086