Title :
Domain invariant speech features using a new divergence measure
Author :
Wisler, Alan ; Berisha, Visar ; Liss, Julie ; Spanias, Andreas
Author_Institution :
Dept. of Speech & Hearing, Arizona State Univ., Tempe, AZ, USA
Abstract :
Existing speech classification algorithms often perform well when the training and test data are drawn from the same distribution. In practice, however, the two distributions are not always the same, and when they differ the performance of trained models typically degrades. In this paper, we discuss an underutilized divergence measure and derive an estimable upper bound on the test error rate that depends on the error rate on the training data and the distance between the training and test distributions. Using this bound as motivation, we develop a feature learning algorithm that aims to identify invariant speech features that generalize well to data similar to, but different from, the training set. Comparative results confirm the efficacy of the algorithm on a set of cross-domain speech classification tasks.
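Note: the abstract does not name the divergence measure. As an illustration only, the sketch below assumes a nonparametric, minimum-spanning-tree (Friedman-Rafsky style) estimate of the distance between training and test feature distributions; estimators of this kind can be computed directly from finite samples, which is what makes a bound of the type described above usable in practice. The function name mst_divergence_estimate and the synthetic data are hypothetical and not taken from the paper.

import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import cdist

def mst_divergence_estimate(X, Y):
    """Nonparametric divergence estimate between samples X ~ f and Y ~ g (rows = examples)."""
    n, m = len(X), len(Y)
    Z = np.vstack([X, Y])                      # pooled sample
    labels = np.r_[np.zeros(n), np.ones(m)]    # 0 = training domain, 1 = test domain
    dist = cdist(Z, Z)                         # pairwise Euclidean distances
    mst = minimum_spanning_tree(dist).tocoo()  # MST over the pooled sample
    # Count MST edges that connect a training point to a test point.
    cross = np.sum(labels[mst.row] != labels[mst.col])
    # Friedman-Rafsky style statistic mapped to [0, 1]: values near 0 indicate
    # heavily overlapping distributions, values near 1 indicate well-separated ones.
    return max(0.0, 1.0 - cross * (n + m) / (2.0 * n * m))

# Hypothetical usage: compare a candidate feature representation across domains.
rng = np.random.default_rng(0)
train_feats = rng.normal(0.0, 1.0, size=(200, 5))
test_feats = rng.normal(0.5, 1.0, size=(150, 5))
print(mst_divergence_estimate(train_feats, test_feats))

A feature learning procedure in the spirit of the abstract would then favor feature sets that keep the training error low while also keeping such an estimated train/test divergence small.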
Keywords :
learning (artificial intelligence); speech processing; cross-domain speech classification tasks; divergence measure; domain invariant speech features; feature learning algorithm; speech classification algorithms; test distributions; test error rate; underutilized divergence measure; Abstracts; Degradation; Focusing; Labeling; Pathology; Speech; Domain Adaptation; Feature Selection; Machine Learning; Pathological Speech Analysis;
Conference_Title :
2014 IEEE Spoken Language Technology Workshop (SLT)
DOI :
10.1109/SLT.2014.7078553