• Title of article

    Multimodal person authentication using speech, face and visual speech

  • Author/Authors

Palanivel, S. and Yegnanarayana, B.

  • Issue Information
    Journal issue with serial number, year 2008
  • Pages
    12
  • From page
    44
  • To page
    55
  • Abstract
    This paper presents a method for automatic multimodal person authentication using the speech, face and visual speech modalities. The proposed method uses motion information to localize the face region, and the face region is processed in the YCrCb color space to determine the locations of the eyes. The system models the nonlip region of the face using a Gaussian distribution, which is used to estimate the center of the mouth. Facial and visual speech features are extracted using multiscale morphological erosion and dilation operations, respectively. The facial features are extracted relative to the locations of the eyes, and the visual speech features are extracted relative to the locations of the eyes and mouth. Acoustic features are derived from the speech signal and represented by weighted linear prediction cepstral coefficients (WLPCC). Autoassociative neural network (AANN) models are used to capture the distribution of the extracted acoustic, facial and visual speech features. The evidence from the speech, face and visual speech models is combined using a weighting rule, and the result is used to accept or reject the identity claim of the subject. The performance of the system is evaluated on newsreaders in TV broadcast news data, and it achieves an equal error rate (EER) of about 0.45% for 50 subjects.
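
    As a rough illustration of the final decision step described in the abstract, the following sketch (Python/NumPy, not from the paper) maps each modality's AANN reconstruction error to a confidence score and combines the three scores with a normalized weighted sum. The exponential mapping, the weights and the threshold are illustrative assumptions; the abstract does not specify the authors' exact weighting rule.

    import numpy as np

    def aann_confidence(error):
        # Assumed mapping: smaller AANN reconstruction error -> higher
        # confidence; c = exp(-e) is an illustrative choice, not the
        # paper's stated formula.
        return float(np.exp(-error))

    def fuse_and_decide(e_speech, e_face, e_visual,
                        weights=(0.4, 0.4, 0.2), threshold=0.5):
        # Per-modality confidences from AANN reconstruction errors.
        c = np.array([aann_confidence(e) for e in (e_speech, e_face, e_visual)])
        w = np.array(weights, dtype=float)
        combined = float(w @ c / w.sum())  # normalized weighted sum
        return combined >= threshold, combined  # (accept claim?, fused score)

    # Example: low speech and face errors, higher visual-speech error.
    accept, score = fuse_and_decide(0.2, 0.3, 0.9)
    print(accept, round(score, 3))  # True 0.705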
  • Keywords
    Multimodal person authentication, Face tracking, Eye location, Multiscale morphological dilation and erosion, Visual speech, Autoassociative neural network
  • Journal title
    Computer Vision and Image Understanding
  • Serial Year
    2008
  • Record number
    1695193