• DocumentCode
    329915
  • Title
    Deriving facial articulation models from image sequences
  • Author
    Tao, Hai; Huang, Thomas S.
  • Author_Institution
    Dept. of Electrical & Computer Engineering, University of Illinois, Urbana, IL, USA
  • fYear
    1998
  • fDate
    4-7 Oct 1998
  • Firstpage
    158
  • Abstract
    Human facial articulation models are derived from frontal- and side-view image sequences using a connected-vibrations non-rigid motion tracking algorithm. First, a 3D head geometric model is fitted to the subject's face in the initial frame. The face model is masked with multiple planar membrane patches that are connected to each other. Then, the in-plane facial motions in the image sequences are computed from an over-determined system. Finally, this information is exploited to create or customize a facial articulation model. (An illustrative least-squares sketch of the over-determined in-plane motion step appears after this record.)
  • Keywords
    image motion analysis; image sequences; tracking; vibrations; video signal processing; 3D head geometric model; connected vibrations nonrigid motion tracking algorithm; face model; frontal view; human facial articulation models; image sequences; in-plane facial motions; multiple planar membrane patches; over-determined system; side view; video sequences; Biomembranes; Deformable models; Facial animation; Finite element methods; Head; Humans; Image sequences; Least squares approximation; Shape; Solid modeling; Speech; Tracking
  • fLanguage
    English
  • Publisher
    ieee
  • Conference_Titel
    Proceedings of the 1998 International Conference on Image Processing (ICIP 98)
  • Conference_Location
    Chicago, IL
  • Print_ISBN
    0-8186-8821-1
  • Type
    conf
  • DOI
    10.1109/ICIP.1998.727158
  • Filename
    727158
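
  • Note
    The abstract's in-plane motion step (computing patch motions from an over-determined system) can be illustrated with a minimal least-squares sketch. This is not the paper's connected-vibrations implementation: the coupling between neighbouring membrane patches and the finite-element machinery are omitted, and all function and variable names are illustrative. The sketch only shows how the affine in-plane motion of a single planar patch might be estimated from tracked point correspondences by solving an over-determined linear system.

    # Minimal sketch (not the authors' method): affine in-plane motion of one
    # planar patch from point correspondences, solved as an over-determined
    # least-squares problem. Patch coupling from the connected-vibrations
    # formulation is omitted; all names are illustrative assumptions.
    import numpy as np

    def fit_affine_motion(pts_prev: np.ndarray, pts_curr: np.ndarray) -> np.ndarray:
        """Fit a 2x3 affine transform mapping pts_prev -> pts_curr.

        pts_prev, pts_curr: (N, 2) arrays of tracked feature positions on one
        patch in two consecutive frames, with N >= 3 so the linear system is
        (over-)determined.
        """
        n = pts_prev.shape[0]
        # Each correspondence contributes two rows: one x-equation, one y-equation.
        A = np.zeros((2 * n, 6))
        b = np.zeros(2 * n)
        A[0::2, 0:2] = pts_prev
        A[0::2, 2] = 1.0
        A[1::2, 3:5] = pts_prev
        A[1::2, 5] = 1.0
        b[0::2] = pts_curr[:, 0]
        b[1::2] = pts_curr[:, 1]
        # Least-squares solution of the over-determined system A p = b.
        p, *_ = np.linalg.lstsq(A, b, rcond=None)
        return p.reshape(2, 3)

    if __name__ == "__main__":
        # Synthetic check: recover a known affine motion from noiseless points.
        rng = np.random.default_rng(0)
        prev = rng.uniform(0, 100, size=(20, 2))
        true_T = np.array([[1.02, 0.01, 2.0], [-0.01, 0.98, -1.5]])
        curr = prev @ true_T[:, :2].T + true_T[:, 2]
        print(fit_affine_motion(prev, curr))

    In the paper, such per-patch motion estimates would additionally be constrained by their neighbours before being used to build or customize the articulation model; the stand-alone least-squares fit above is only the uncoupled core of that step.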