DocumentCode :
1208841
Title :
Combining Derivative and Parametric Kernels for Speaker Verification
Author :
Longworth, C. ; Gales, M.J.F.
Author_Institution :
Eng. Dept., Cambridge Univ., Cambridge
Volume :
17
Issue :
4
fYear :
2009
fDate :
5/1/2009
Firstpage :
748
Lastpage :
757
Abstract :
Support vector machine (SVM)-based speaker verification (SV) has become a standard approach in recent years. These systems typically use dynamic kernels to handle the variable-length nature of speech utterances. This paper shows that many of these kernels fall into one of two general classes: derivative and parametric kernels. The attributes of these classes are contrasted, and the conditions under which the two forms of kernel are identical are described. By avoiding these conditions, gains may be obtained by combining derivative and parametric kernels. One combination strategy is to combine at the kernel level. This paper describes a maximum-margin-based scheme for learning kernel weights for the SV task. Various dynamic kernels and combinations were evaluated on the NIST 2002 SRE task, including derivative and parametric kernels based upon different model structures. The best overall performance, an equal error rate (EER) of 7.78%, was achieved when combining five kernels.
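The kernel-level combination described above can be illustrated with a minimal sketch. The two stand-in kernels below (linear and RBF) merely take the place of the derivative and parametric kernels from the paper, and the combination weights are fixed by hand rather than learned with the paper's maximum-margin scheme; the point is only that a non-negative weighted sum of Gram matrices is itself a valid kernel.

```python
import numpy as np

# Toy utterance-level feature vectors. In the paper these would be
# derivative features (e.g. log-likelihood gradients of a GMM) and
# parametric features (e.g. adapted model parameters); here they are
# random stand-ins.
rng = np.random.default_rng(0)
X = rng.standard_normal((6, 4))

def linear_kernel(X):
    # Stand-in for one kernel class: inner product of feature vectors.
    return X @ X.T

def rbf_kernel(X, gamma=0.5):
    # Stand-in for the other kernel class.
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

# Kernel-level combination: a weighted sum of Gram matrices with
# non-negative weights. The paper learns these weights via a
# maximum-margin criterion; here they are fixed for illustration.
weights = [0.6, 0.4]
K = weights[0] * linear_kernel(X) + weights[1] * rbf_kernel(X)

# The combined Gram matrix remains symmetric positive semi-definite,
# so it can be fed to any SVM trainer that accepts precomputed kernels.
assert np.allclose(K, K.T)
assert np.linalg.eigvalsh(K).min() > -1e-9
```

The combined matrix `K` could then be passed to an SVM with a precomputed-kernel interface for the verification decision.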
Keywords :
speaker recognition; support vector machines (SVMs); dynamic kernels; maximum-margin-based scheme; parametric kernel; support vector machine-based speaker verification; kernel; logistics; maximum likelihood linear regression; NIST; speech; support vector machine classification; classifier combination;
fLanguage :
English
Journal_Title :
IEEE Transactions on Audio, Speech, and Language Processing
Publisher :
IEEE
ISSN :
1558-7916
Type :
jour
DOI :
10.1109/TASL.2008.2012193
Filename :
4806281