DocumentCode :
668746
Title :
Classification-based close talk speech enhancement
Author :
Yi Jiang ; Yuanyuan Zu ; Xi Lu ; Hong Zhou
Author_Institution :
Dept. of Electron. Eng., Tsinghua Univ., Beijing, China
fYear :
2013
fDate :
20-22 Nov. 2013
Firstpage :
192
Lastpage :
195
Abstract :
This paper addresses close talk speech enhancement as a binary classification problem using dual-microphone features in noisy and reverberant environments. We investigate a speech segregation framework in which deep neural networks (DNNs) are employed to learn a robust classifier from the two microphone inputs. The paper reports a successful attempt to use dual-microphone signals, energy difference features, and monaural features as segregation cues within this framework. Results on a recorded corpus show that robust performance is achieved across a variety of source locations, noise types, and reverberant conditions.
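The abstract casts enhancement as per-unit binary classification: each time-frequency unit is labeled speech-dominant or noise-dominant from dual-microphone cues. The sketch below illustrates that idea on synthetic data, using a tiny logistic-regression classifier as a stand-in for the paper's DNN; the feature model (an inter-microphone energy difference plus one monaural cue) and all numbers are illustrative assumptions, not the paper's recorded corpus or network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy time-frequency (T-F) units: in close talk, speech-dominant units tend to
# show a large near-mic vs. far-mic energy difference, noise-dominant units a
# small one. This generative model is an assumption for illustration only.
n = 2000
speech = rng.integers(0, 2, n)                    # 1 = speech-dominant unit
energy_diff = np.where(speech == 1,
                       rng.normal(6.0, 1.5, n),   # dB-like difference, speech
                       rng.normal(0.0, 1.5, n))   # noise-dominant units
monaural = np.where(speech == 1,
                    rng.normal(1.0, 1.0, n),
                    rng.normal(-1.0, 1.0, n))     # stand-in monaural cue
X = np.column_stack([energy_diff, monaural])

# Train a logistic-regression classifier (stand-in for the DNN) to predict
# the binary mask label of each T-F unit from its two features.
w = np.zeros(2)
b = 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))        # predicted P(speech)
    w -= 0.5 * (X.T @ (p - speech) / n)           # gradient step on weights
    b -= 0.5 * np.mean(p - speech)                # gradient step on bias

# Estimated binary mask: 1 keeps the unit, 0 suppresses it.
mask = (1.0 / (1.0 + np.exp(-(X @ w + b)))) > 0.5
accuracy = np.mean(mask == speech)
```

In the actual framework the mask would then be applied to the noisy mixture's T-F representation to resynthesize enhanced speech; here the point is only the classification step that turns dual-microphone cues into a binary decision per unit.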
Keywords :
neural nets; signal classification; speech enhancement; binary classification; close talk speech enhancement; deep neural network; dual microphone; monaural feature; noisy environment; recording corpus; reverberant condition; reverberant environment; segregation cue; speech segregation; Feature extraction; Filter banks; Microphones; Signal to noise ratio; Speech; Speech enhancement; Computational auditory scene analysis (CASA); close talk; deep neural networks (DNN); speech enhancement;
fLanguage :
English
Publisher :
IEEE
Conference_Titel :
2013 3rd International Conference on Consumer Electronics, Communications and Networks (CECNet)
Conference_Location :
Xianning
Print_ISBN :
978-1-4799-2859-0
Type :
conf
DOI :
10.1109/CECNet.2013.6703304
Filename :
6703304