  • DocumentCode
    667459
  • Title
    Keynote addresses: From auditory masking to binary classification: Machine learning for speech separation
  • Author
    Wang, DeLiang ; Martin, Rainer ; Vary, Peter ; Smaragdis, Paris
  • Author_Institution
    The Ohio State University, USA
  • fYear
    2013
  • fDate
    20-23 Oct. 2013
  • Firstpage
    1
  • Lastpage
    3
  • Abstract
    Speech separation, or the cocktail party problem, is a widely acknowledged challenge. Part of the challenge stems from confusion about what the computational goal should be. While the separation of every sound source in a mixture is considered the gold standard, I argue that such an objective is neither realistic nor what the human auditory system does. Motivated by the auditory masking phenomenon, we have instead suggested the ideal time-frequency binary mask as a main goal for computational auditory scene analysis. This leads to a new formulation of speech separation that classifies time-frequency units into two classes: those dominated by the target speech and the rest. In supervised learning, a paramount issue is generalization to conditions unseen during training. I describe novel methods to deal with the generalization issue where support vector machines (SVMs) are used to estimate the ideal binary mask. One method employs distribution fitting to adapt to unseen signal-to-noise ratios and iterative voice activity detection to adapt to unseen noises. Another method learns more linearly separable features using deep neural networks (DNNs) and then couples a DNN with a linear SVM for training on a variety of noisy conditions. Systematic evaluations show high-quality separation in new acoustic environments.
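    The ideal binary mask named in the abstract labels each time-frequency unit 1 when the target energy sufficiently dominates the interference, and 0 otherwise. A minimal illustrative sketch of that definition (not the authors' code; the threshold parameter `lc_db`, a local SNR criterion in dB, is an assumption for illustration):

    ```python
    import numpy as np

    def ideal_binary_mask(target_mag, noise_mag, lc_db=0.0):
        """Sketch of the ideal binary mask (IBM) on magnitude spectrograms.

        target_mag, noise_mag: arrays of the same shape (freq x time) holding
        the magnitudes of the premixed target and interference signals.
        A unit is labeled 1 if its local target-to-noise ratio in dB exceeds
        the local criterion lc_db, else 0.
        """
        eps = 1e-12  # avoid division by zero / log of zero
        local_snr_db = 20.0 * np.log10((target_mag + eps) / (noise_mag + eps))
        return (local_snr_db > lc_db).astype(np.uint8)

    # Toy example: first unit is target-dominated, second is noise-dominated.
    target = np.array([[2.0, 0.5]])
    noise = np.array([[1.0, 1.0]])
    mask = ideal_binary_mask(target, noise)  # -> [[1, 0]]
    ```

    In the classification view described above, a learner (SVM or DNN) is trained to predict this 0/1 label per unit from features of the noisy mixture alone.
    
    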
  • fLanguage
    English
  • Publisher
    ieee
  • Conference_Titel
    Applications of Signal Processing to Audio and Acoustics (WASPAA), 2013 IEEE Workshop on
  • Conference_Location
    New Paltz, NY, USA
  • ISSN
    1931-1168
  • Type
    conf
  • DOI
    10.1109/WASPAA.2013.6701804
  • Filename
    6701804