• DocumentCode
    1361792
  • Title
    Introduction to the Special Section on Deep Learning for Speech and Language Processing
  • Author
    Yu, Dong; Hinton, Geoffrey; Morgan, Nelson; Chien, Jen-Tzung; Sagayama, Shigeki
  • Author_Institution
    Microsoft Research, Redmond, WA, USA
  • Volume
    20
  • Issue
    1
  • fYear
    2012
  • Firstpage
    4
  • Lastpage
    6
  • Abstract
    Current speech recognition systems, for example, typically use Gaussian mixture models (GMMs) to estimate the observation (or emission) probabilities of hidden Markov models (HMMs); GMMs are generative models with only a single layer of latent variables. Instead of developing more powerful models, most of the research effort has gone into finding better ways of estimating the GMM parameters so that error rates are decreased or the margin between classes is increased. The same observation holds for natural language processing (NLP), in which maximum entropy (MaxEnt) models and conditional random fields (CRFs) have been popular for the last decade. Both of these approaches use shallow models whose success largely depends on carefully handcrafted features.
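    For reference, the models named in the abstract have standard textbook forms (the notation below is generic, not taken from the paper): an HMM state s with an M-component GMM emits an observation x_t with density

    $$
    p(\mathbf{x}_t \mid s) \;=\; \sum_{m=1}^{M} w_{s,m}\, \mathcal{N}\!\big(\mathbf{x}_t;\, \boldsymbol{\mu}_{s,m},\, \boldsymbol{\Sigma}_{s,m}\big), \qquad \sum_{m=1}^{M} w_{s,m} = 1,
    $$

    where the single layer of latent variables is the mixture-component index m. A CRF (or, per position, a MaxEnt classifier) is likewise shallow: it scores a label sequence y for an input x as

    $$
    p(\mathbf{y} \mid \mathbf{x}) \;=\; \frac{1}{Z(\mathbf{x})} \exp\!\Big( \sum_{k} \lambda_k\, f_k(\mathbf{x}, \mathbf{y}) \Big),
    $$

    with handcrafted feature functions f_k, learned weights \lambda_k, and partition function Z(\mathbf{x}).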
  • Keywords
    error statistics; learning (artificial intelligence); maximum entropy methods; natural language processing; speech recognition; GMM parameter; Gaussian mixture model; MaxEnt model; conditional random field; hidden Markov model; maximum entropy (MaxEnt) model; speech processing; speech recognition system; Automatic speech recognition; Hidden Markov models; Machine learning; Special issues and sections; Speech recognition
  • fLanguage
    English
  • Journal_Title
    IEEE Transactions on Audio, Speech, and Language Processing
  • Publisher
    IEEE
  • ISSN
    1558-7916
  • Type
    jour
  • DOI
    10.1109/TASL.2011.2173371
  • Filename
    6060895