Title :
The use of speech and lip modalities for robust speaker verification under adverse conditions
Author :
Wark, T.J. ; Sridharan, S. ; Chandran, V.
Author_Institution :
Sch. of Electr. & Electron. Syst. Eng., Queensland Univ. of Technol., Brisbane, Qld., Australia
Abstract :
This paper investigates the use of lip information, in conjunction with speech information, for robust speaker verification in the presence of background noise. We have previously shown (Int. Conf. on Acoustics, Speech and Signal Processing, vol. 6, pp. 3693-3696, May 1998) that features extracted from a speaker's moving lips carry speaker dependencies that are complementary to speech features. We demonstrate that the fusion of lip and speech information yields a highly robust speaker verification system that outperforms either subsystem alone. We present a new technique for determining the weighting to apply to each modality so as to optimize the performance of the fused system. Given a correct weighting, lip information is shown to be highly effective in reducing both false acceptance and false rejection error rates in the presence of background noise.
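The abstract describes weighting each modality's contribution so that the fused system's performance is optimized. A minimal sketch of one common approach, linear score-level fusion with the weight selected on held-out trials, is shown below; the function names, threshold, and toy scores are hypothetical illustrations, not the paper's actual technique:

```python
# Illustrative score-level fusion of speech and lip verification scores.
# The fused score is s = a * s_speech + (1 - a) * s_lip; the weight a is
# chosen on held-out trials to minimize total verification error.
# (Hypothetical helper names and toy data; not the paper's method.)

def fused_scores(speech, lip, a):
    """Linear score-level fusion with weight a on the speech modality."""
    return [a * s + (1 - a) * l for s, l in zip(speech, lip)]

def error_rate(scores, labels, threshold=0.5):
    """Fraction of trials misclassified (false accepts + false rejects)."""
    errors = sum(
        (score >= threshold) != bool(label)
        for score, label in zip(scores, labels)
    )
    return errors / len(scores)

def best_weight(speech, lip, labels, steps=100):
    """Sweep a over [0, 1] and keep the weight with the lowest error."""
    candidates = [i / steps for i in range(steps + 1)]
    return min(candidates,
               key=lambda a: error_rate(fused_scores(speech, lip, a), labels))

# Toy held-out scores: label 1 = genuine speaker, 0 = impostor.
speech = [0.9, 0.4, 0.8, 0.3, 0.6, 0.2]   # acoustic scores, degraded by noise
lip    = [0.8, 0.2, 0.7, 0.1, 0.9, 0.3]   # visual scores, unaffected by noise
labels = [1,   0,   1,   0,   1,   0]

a = best_weight(speech, lip, labels)
print(a, error_rate(fused_scores(speech, lip, a), labels))
```

In noisy conditions the sweep naturally shifts weight toward the lip modality, since the acoustic scores become less reliable while the visual scores are unaffected, which is the complementarity the abstract exploits.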
Keywords :
feature extraction; image recognition; multimedia computing; noise; speaker recognition; adverse conditions; background noise; error rates; false acceptance rate; false rejection rate; lip modalities; lip movement; lip reading; performance optimization; robust speaker verification; speaker dependencies; speech features; speech modalities; weighting; Acoustic noise; Background noise; Biometrics; Data mining; Feature extraction; Hidden Markov models; Lips; Noise robustness; Speaker recognition; Speech enhancement
Conference_Title :
1999 IEEE International Conference on Multimedia Computing and Systems
Conference_Location :
Florence, Italy
Print_ISBN :
0-7695-0253-9
DOI :
10.1109/MMCS.1999.779305