• DocumentCode
    381284
  • Title
    Acoustic analysis and recognition of whispered speech
  • Author
    Itoh, Taisuke; Takeda, Kazuya; Itakura, Fumitada

  • Author_Institution
    Center for Integrated Acoust. Inf. Res., Nagoya Univ., Japan
  • fYear
    2001
  • fDate
    2001
  • Firstpage
    429
  • Lastpage
    432
  • Abstract
    The acoustic properties and a recognition method for whispered speech are discussed. A whispered-speech database was prepared, consisting of whispered speech, normal speech, and the corresponding facial video images for more than 6,000 sentences from 100 speakers. The comparison between whispered and normal utterances shows that: 1) the cepstrum distance between them is 4 dB for voiced and 2 dB for unvoiced phonemes; 2) the spectral tilt of whispered speech is less sloped than that of normal speech; 3) the frequencies of the lower formants (below 1.5 kHz) are lower than those of normal speech. Acoustic models (HMMs) trained on the whispered-speech database attain 60% accuracy in syllable recognition experiments. This accuracy improves to 63% when MLLR (maximum likelihood linear regression) adaptation is applied, while normal-speech HMMs adapted with whispered speech attain only 56% syllable accuracy.
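  • Note
    The cepstrum distance reported in the abstract can be illustrated with a short sketch. This is not the paper's code; the frame data below is synthetic, and the 12-coefficient real cepstrum and the standard dB-scaled distance formula, CD [dB] = (10 / ln 10) * sqrt(2 * Σ_k (c1_k − c2_k)²), are assumptions about a typical setup.

    ```python
    # Hypothetical sketch: cepstrum distance in dB between two speech frames.
    import numpy as np

    def real_cepstrum(frame, n_coef=12):
        """First n_coef real cepstral coefficients (c1..c12) of a frame."""
        spectrum = np.abs(np.fft.rfft(frame)) + 1e-12  # avoid log(0)
        cep = np.fft.irfft(np.log(spectrum))
        return cep[1:n_coef + 1]  # skip c0 (overall energy/level)

    def cepstrum_distance_db(frame_a, frame_b, n_coef=12):
        """CD [dB] = (10 / ln 10) * sqrt(2 * sum_k (ca_k - cb_k)^2)."""
        ca = real_cepstrum(frame_a, n_coef)
        cb = real_cepstrum(frame_b, n_coef)
        return (10.0 / np.log(10.0)) * np.sqrt(2.0 * np.sum((ca - cb) ** 2))

    # Synthetic 25 ms frames at 16 kHz: a voiced-like tone vs. a
    # noise-like "whispered" frame (no voicing excitation).
    rng = np.random.default_rng(0)
    t = np.arange(400) / 16000.0
    normal = np.sin(2 * np.pi * 150 * t) + 0.1 * rng.standard_normal(400)
    whisper = 0.3 * rng.standard_normal(400)
    print(f"{cepstrum_distance_db(normal, whisper):.2f} dB")
    ```

    In practice the distance would be averaged over time-aligned frames of whispered and normal utterances of the same sentence, which is how a per-phoneme-class figure such as 4 dB (voiced) vs. 2 dB (unvoiced) would be obtained.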
  • Keywords
    acoustic signal processing; cepstral analysis; hidden Markov models; speech recognition; HMM; MLLR adaptation; acoustic analysis; cepstrum distance; facial video images; formant frequency; maximum likelihood linear regression; normal speech; spectral tilt; syllable accuracy; whispered speech recognition; Cepstrum; Frequency; Hidden Markov models; Image databases; Loudspeakers; Maximum likelihood linear regression; Speech analysis; Speech processing; Speech recognition; Video recording
  • fLanguage
    English
  • Publisher
    IEEE
  • Conference_Titel
    Automatic Speech Recognition and Understanding, 2001. ASRU '01. IEEE Workshop on
  • Print_ISBN
    0-7803-7343-X
  • Type
    conf
  • DOI
    10.1109/ASRU.2001.1034676
  • Filename
    1034676