  • DocumentCode
    3529987
  • Title
    Applying discretized articulatory knowledge to dysarthric speech
  • Author
    Rudzicz, Frank
  • Author_Institution
    Dept. of Comput. Sci., Univ. of Toronto, Toronto, ON
  • fYear
    2009
  • fDate
    19-24 April 2009
  • Firstpage
    4501
  • Lastpage
    4504
  • Abstract
    This paper applies two dynamic Bayes networks that include theoretical and measured kinematic features of the vocal tract, respectively, to the task of labeling phoneme sequences in unsegmented dysarthric speech. Speaker dependent and adaptive versions of these models are compared against two acoustic-only baselines, namely a hidden Markov model and a latent dynamic conditional random field. Both theoretical and kinematic models of the vocal tract perform admirably on speaker-dependent speech, and we show that the statistics of the latter are not necessarily transferable between speakers during adaptation.
  • Keywords
    Bayes methods; speaker recognition; discretized articulatory knowledge; dynamic Bayes networks; dysarthric speech; hidden Markov model; latent dynamic conditional random field; phoneme sequences; speaker-dependent speech; vocal tract; Acoustic measurements; Electromagnetic measurements; Hidden Markov models; Kinematics; Labeling; Lips; Loudspeakers; Speech analysis; Speech enhancement; Tongue; Accessibility; articulatory information; conditional random fields; dynamic Bayes nets
  • fLanguage
    English
  • Publisher
    IEEE
  • Conference_Titel
    2009 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2009)
  • Conference_Location
    Taipei
  • ISSN
    1520-6149
  • Print_ISBN
    978-1-4244-2353-8
  • Electronic_ISBN
    1520-6149
  • Type
    conf
  • DOI
    10.1109/ICASSP.2009.4960630
  • Filename
    4960630