• DocumentCode
    463719
  • Title
    Prosody-Driven Head-Gesture Animation
  • Author
    Sargin, M.E.; Erzin, E.; Yemez, Y.; Tekalp, A. Murat; Erdem, Arif Tanju; Erdem, C.; Ozkan, Mehmed
  • Author_Institution
    Lab. of Multimedia Vision & Graphics, Koc Univ., Istanbul, Turkey
  • Volume
    2
  • fYear
    2007
  • fDate
    15-20 April 2007
  • Abstract
    We present a new framework for the joint analysis of head-gesture and speech-prosody patterns of a speaker, towards the automatic, realistic synthesis of head gestures from speech prosody. The proposed two-stage analysis aims to "learn" both elementary prosody and head-gesture patterns for a particular speaker, as well as the correlations between these head-gesture and prosody patterns, from a training video sequence. The resulting audio-visual mapping model is then employed to synthesize natural head gestures from arbitrary input test speech, given a head model for the speaker. Objective and subjective evaluations indicate that the proposed synthesis-by-analysis scheme provides natural-looking head gestures for the speaker with any input test speech. (An illustrative sketch of this mapping appears at the end of this record.)
  • Keywords
    audio-visual systems; computer animation; gesture recognition; speaker recognition; speech processing; speech synthesis; audio-visual mapping model; elementary prosody; joint analysis; prosody-driven head-gesture animation; speech prosody patterns; two-stage analysis; video sequence; Animation; Feature extraction; Hidden Markov models; Network synthesis; Pattern analysis; Speech analysis; Speech synthesis; Streaming media; Testing; Video sequences; Man-machine systems; gesture and prosody analysis; gesture synthesis; multimedia systems
  • fLanguage
    English
  • Publisher
    IEEE
  • Conference_Title
    2007 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2007)
  • Conference_Location
    Honolulu, HI
  • ISSN
    1520-6149
  • Print_ISBN
    1-4244-0727-3
  • Type
    conf
  • DOI
    10.1109/ICASSP.2007.366326
  • Filename
    4217499
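  • Method_Sketch
    The abstract's two-stage scheme (learn elementary prosody and head-gesture
    patterns for a speaker, learn the correlations between them, then drive
    gesture synthesis from new speech) can be illustrated roughly as follows.
    This is a minimal sketch, not the authors' implementation: it assumes
    hmmlearn's GaussianHMM for pattern learning, pitch/energy contours as the
    prosody features, head Euler angles as the gesture features, and a simple
    co-occurrence table as the audio-visual mapping model.

        # Minimal sketch of a prosody-to-gesture mapping (illustrative only;
        # feature choices, model sizes, and library are assumptions, not the
        # paper's actual formulation).
        import numpy as np
        from hmmlearn import hmm

        def learn_patterns(features, n_patterns, seed=0):
            """Cluster a (T, D) feature stream into elementary patterns with an HMM."""
            model = hmm.GaussianHMM(n_components=n_patterns, covariance_type="diag",
                                    n_iter=100, random_state=seed)
            model.fit(features)
            return model, model.predict(features)

        def learn_av_mapping(prosody_states, gesture_states, n_p, n_g):
            """Estimate P(gesture pattern | prosody pattern) from frame co-occurrence."""
            counts = np.zeros((n_p, n_g)) + 1e-6   # small prior avoids divide-by-zero
            for p, g in zip(prosody_states, gesture_states):
                counts[p, g] += 1
            return counts / counts.sum(axis=1, keepdims=True)

        # Training on time-aligned streams (random placeholders, shapes only):
        prosody_train = np.random.randn(2000, 2)   # e.g., [F0, energy] per frame
        gesture_train = np.random.randn(2000, 3)   # e.g., head Euler angles per frame
        p_model, p_states = learn_patterns(prosody_train, n_patterns=8)
        g_model, g_states = learn_patterns(gesture_train, n_patterns=8)
        av_map = learn_av_mapping(p_states, g_states, 8, 8)

        # Synthesis: decode prosody patterns of new speech, pick the most likely
        # gesture pattern per frame; a speaker head model would then render them.
        prosody_test = np.random.randn(500, 2)
        gesture_patterns = av_map[p_model.predict(prosody_test)].argmax(axis=1)

    In this toy form the mapping is frame-wise and memoryless, whereas the
    described framework learns temporal patterns; only the overall data flow
    (prosody patterns -> audio-visual correlation model -> gesture patterns ->
    speaker head model) follows the abstract.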