• DocumentCode
    2861469
  • Title
    Multimodal human emotion/expression recognition
  • Author
    Chen, Lawrence S.; Huang, Thomas S.; Miyasato, Tsutomu; Nakatsu, Ryohei

  • Author_Institution
    Beckman Inst. for Adv. Sci. & Technol., Illinois Univ., Urbana, IL, USA
  • fYear
    1998
  • fDate
    14-16 Apr 1998
  • Firstpage
    366
  • Lastpage
    371
  • Abstract
    Recognizing human facial expression and emotion by computer is an interesting and challenging problem. Many researchers have investigated emotional content in speech alone, or recognition of human facial expressions solely from images. However, relatively little has been done in combining these two modalities for recognizing human emotions. L.C. De Silva et al. (1997) studied human subjects' ability to recognize emotions from viewing video clips of facial expressions and listening to the corresponding emotional speech stimuli. They found that humans recognize some emotions better from audio information, and other emotions better from video. They also proposed an algorithm to integrate both kinds of inputs to mimic the human recognition process. While attempting to implement the algorithm, we encountered difficulties which led us to a different approach. We found these two modalities to be complementary. By using both, we show it is possible to achieve higher recognition rates than either modality alone.
  • Keywords
    face recognition; man-machine systems; audio information; human facial expression; multimodal human emotion/expression recognition; Cameras; Clustering algorithms; Emotion recognition; Face recognition; Humans; Image recognition; Laboratories; Microphones; Speech recognition; Telecommunication computing
  • fLanguage
    English
  • Publisher
    IEEE
  • Conference_Titel
    Proceedings of the Third IEEE International Conference on Automatic Face and Gesture Recognition, 1998
  • Conference_Location
    Nara
  • Print_ISBN
    0-8186-8344-9
  • Type
    conf
  • DOI
    10.1109/AFGR.1998.670976
  • Filename
    670976