  • DocumentCode
    730841
  • Title
    Investigation of ensemble models for sequence learning
  • Author
    Celikyilmaz, Asli; Hakkani-Tur, Dilek
  • fYear
    2015
  • fDate
    19-24 April 2015
  • Firstpage
    5381
  • Lastpage
    5385
  • Abstract
    While ensemble models have proven useful for sequence learning tasks, relatively little work provides insight into what makes them powerful. In this paper, we investigate the empirical behavior of ensemble approaches to sequence modeling, specifically for the semantic tagging task. We explore this by comparing the performance of commonly used and easy-to-implement ensemble methods such as majority voting, linear combination, and stacking against a learning-based and rather complex ensemble method. Next, we ask the question: when models trained with different learning methods, such as predictive and representation learning (e.g., deep learning), are aggregated, do we get performance gains over the individual baseline models? We explore these questions on a range of datasets for syntactic and semantic tagging tasks such as slot filling. Our findings show that a ranking-based ensemble model outperforms all other well-known ensemble models.
  • Keywords
    speech processing; deep learning; ensemble models investigation; linear combination; majority voting; representation learning; semantic tagging tasks; sequence learning; sequence modeling; slot filling; syntactic tagging tasks; Learning systems; Predictive models; Semantics; Stacking; Syntactics; Tagging; Training; conditional random fields; ensemble learning; slot tagging; spoken language understanding
  • fLanguage
    English
  • Publisher
    IEEE
  • Conference_Title
    2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
  • Conference_Location
    South Brisbane, QLD
  • Type
    conf
  • DOI
    10.1109/ICASSP.2015.7178999
  • Filename
    7178999
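
The abstract above names token-level ensemble methods such as majority voting. Below is a minimal illustrative sketch of that idea for sequence tagging; it is not the authors' implementation, and the tagger outputs, slot labels, and function name are hypothetical examples for this record only.

```python
# Illustrative sketch (not the paper's code): token-level majority voting
# over the outputs of several sequence taggers, assuming each tagger emits
# one tag per token for the same tokenized input.
from collections import Counter
from typing import List


def majority_vote(tag_sequences: List[List[str]]) -> List[str]:
    """Combine per-token predictions from multiple taggers by majority vote.

    tag_sequences: one tag sequence per tagger; all sequences must have the
    same length (one tag per input token).
    """
    if not tag_sequences:
        raise ValueError("need at least one tagger output")
    length = len(tag_sequences[0])
    if any(len(seq) != length for seq in tag_sequences):
        raise ValueError("all taggers must tag the same number of tokens")

    combined = []
    for position in range(length):
        votes = Counter(seq[position] for seq in tag_sequences)
        # Pick the most frequent tag; ties fall back to first-seen order.
        combined.append(votes.most_common(1)[0][0])
    return combined


if __name__ == "__main__":
    # Hypothetical outputs of three slot taggers for "flights to boston".
    outputs = [
        ["O", "O", "B-toloc.city_name"],
        ["O", "O", "B-toloc.city_name"],
        ["O", "B-toloc.city_name", "B-toloc.city_name"],
    ]
    print(majority_vote(outputs))  # ['O', 'O', 'B-toloc.city_name']
```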