DocumentCode :
661410
Title :
Multimodal person authentication system using features of utterance
Author :
Qian Shi ; Nishino, Takanori ; Kajikawa, Y.
Author_Institution :
Fac. of Eng. Science, Kansai Univ., Suita, Japan
fYear :
2013
fDate :
Oct. 29 2013-Nov. 1 2013
Firstpage :
1
Lastpage :
7
Abstract :
In this paper, we propose a biometrics authentication method using multimodal features of utterance. The multimodal features of utterance consist of lip shape (a physical trait), lip motion pattern, and voice pattern (behavioral traits). The proposed method can therefore be implemented with only a camera to capture the lip area and a microphone to record the voice, without the special equipment required by other personal authentication methods. Moreover, the utterance phrase itself serves as a key, since it can be set arbitrarily; the robustness of authentication then increases through phrase recognition, which can reject an impostor whose features are similar to a registrant's. In the proposed method, lip features are extracted as edges or texture in the lip image, and voice features as pitch or spectrum envelope in the voice signal. Experimental results demonstrate that the proposed method improves authentication accuracy compared with methods based on a single modality.
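The abstract names pitch and spectrum envelope as the voice features. As a rough illustration of what extracting them involves, here is a minimal numpy-only sketch: pitch estimated by autocorrelation and the spectrum envelope obtained by cepstral smoothing. The function names and parameters are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def estimate_pitch(signal, sr, fmin=50.0, fmax=500.0):
    """Estimate the fundamental frequency (Hz) via autocorrelation.

    Searches for the autocorrelation peak between lags sr/fmax and sr/fmin.
    """
    sig = signal - np.mean(signal)
    # Full autocorrelation, keep non-negative lags only.
    ac = np.correlate(sig, sig, mode="full")[len(sig) - 1:]
    lag_min = int(sr / fmax)
    lag_max = int(sr / fmin)
    lag = lag_min + np.argmax(ac[lag_min:lag_max])
    return sr / lag

def spectral_envelope(frame, n_ceps=20):
    """Smooth log-magnitude spectrum of one frame by cepstral liftering.

    Keeping only the low-quefrency cepstral coefficients removes the fine
    harmonic structure and leaves the envelope.
    """
    log_spec = np.log(np.abs(np.fft.rfft(frame)) + 1e-10)
    cepstrum = np.fft.irfft(log_spec)
    cepstrum[n_ceps:-n_ceps] = 0.0  # lifter: zero out high quefrencies
    return np.fft.rfft(cepstrum).real

# Synthetic "voiced" test signal: 120 Hz fundamental plus one harmonic.
sr = 16000
t = np.arange(sr) / sr
voice = np.sin(2 * np.pi * 120 * t) + 0.5 * np.sin(2 * np.pi * 240 * t)

f0 = estimate_pitch(voice, sr)          # close to 120 Hz
env = spectral_envelope(voice[:1024])   # smoothed log spectrum, 513 bins
```

In a full system along the lines the abstract describes, such per-frame voice features would be combined with lip-image features (edges, texture) and fed to a classifier; the sketch only covers the signal-processing step.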
Keywords :
biometrics (access control); feature extraction; pattern recognition; speech recognition; behavioral trait; biometrics authentication; camera; lip area; lip image; lip motion pattern; lip shape; multimodal person authentication system; personal authentication; phrase recognition; physical trait; utterance phrase; voice pattern; Accuracy; Authentication; Face; Feature extraction; Image edge detection; Mel frequency cepstral coefficient; Vectors;
fLanguage :
English
Publisher :
IEEE
Conference_Titel :
Signal and Information Processing Association Annual Summit and Conference (APSIPA), 2013 Asia-Pacific
Conference_Location :
Kaohsiung
Type :
conf
DOI :
10.1109/APSIPA.2013.6694272
Filename :
6694272