  • DocumentCode
    394238
  • Title
    The robustness of an almost-parsing language model given errorful training data
  • Author
    Wang, Wen; Harper, Mary P.; Stolcke, Andreas
  • Author_Institution
    Electr. & Comput. Eng., Purdue Univ., West Lafayette, IN, USA
  • Volume
    1
  • fYear
    2003
  • fDate
    6-10 April 2003
  • Abstract
    An almost-parsing language model has been developed (Wang and Harper 2002) that provides a framework for tightly integrating multiple knowledge sources. Lexical features and syntactic constraints are integrated into a uniform linguistic structure (called a SuperARV) that is associated with words in the lexicon. The SuperARV language model has been found to reduce perplexity and word error rate (WER) compared to trigram, part-of-speech-based, and parser-based language models on the DARPA Wall Street Journal (WSJ) CSR task. In this paper we further investigate the robustness of the language model to possibly inconsistent and flawed training data, as well as its ability to scale up to sophisticated LVCSR tasks, by comparing performance on the DARPA WSJ and Hub4 (Broadcast News) CSR tasks.
  • Keywords
    grammars; linguistics; natural languages; SuperARV; WER; almost-parsing language model; lexical features; lexicon; linguistic structure; multiple knowledge sources; parser-based language models; part-of-speech-based language models; perplexity; syntactic constraints; training data; trigram; word error rate; Broadcasting; Computer errors; Data engineering; Error analysis; Knowledge engineering; Laboratories; Natural languages; Robustness; Speech; Training data;
  • fLanguage
    English
  • Publisher
    IEEE
  • Conference_Title
    Proceedings of the 2003 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '03)
  • ISSN
    1520-6149
  • Print_ISBN
    0-7803-7663-3
  • Type
    conf
  • DOI
    10.1109/ICASSP.2003.1198762
  • Filename
    1198762