Title :
Speaker identification in emotional talking environments using both gender and emotion cues
Author_Institution :
Dept. of Electr. & Comput. Eng., Univ. of Sharjah, Sharjah, United Arab Emirates
Abstract :
Speaker recognition performance is usually very high in neutral talking environments; however, it degrades significantly in emotional talking environments. This work proposes, implements, and evaluates a new approach to improving the degraded performance of text-independent speaker identification in emotional talking environments. The proposed approach identifies the unknown speaker based on both his/her gender and emotion cues, using Hidden Markov Models (HMMs) as classifiers. The approach has been tested on our collected speech database. The results show that speaker identification performance based on both gender and emotion cues is higher than that based on gender cues only, emotion cues only, or neither cue. The results obtained with the proposed approach are close to those obtained in a subjective evaluation by human judges.
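The abstract describes scoring an unknown utterance against per-speaker HMMs, with gender and emotion classification first narrowing the candidate set. A minimal sketch of that final scoring stage is shown below, assuming discrete-observation HMMs and a pre-narrowed candidate set; all model parameters and names here are illustrative, not the paper's actual configuration.

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM
    via the scaled forward algorithm.
    pi: initial state probabilities, A: state transition matrix,
    B: emission probability matrix (states x symbols)."""
    alpha = pi * B[:, obs[0]]          # initialize forward variable
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()               # scale to avoid underflow
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]  # propagate and emit
        s = alpha.sum()
        loglik += np.log(s)
        alpha /= s
    return loglik

def identify(obs, models, candidates):
    """Return the candidate whose HMM gives the highest log-likelihood.
    'candidates' stands in for the set already narrowed by the gender
    and emotion stages (hypothetical upstream classifiers)."""
    return max(candidates, key=lambda s: forward_loglik(obs, *models[s]))

# Toy example: two single-state speaker models over a binary alphabet.
models = {
    "A": (np.array([1.0]), np.array([[1.0]]), np.array([[0.9, 0.1]])),
    "B": (np.array([1.0]), np.array([[1.0]]), np.array([[0.1, 0.9]])),
}
print(identify([0, 0, 0, 0], models, {"A", "B"}))  # → A
```

In the paper's pipeline, real acoustic features would replace the discrete symbols, and the candidate set would come from the gender and emotion classification stages rather than being passed in directly.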
Keywords :
emotion recognition; hidden Markov models; speaker recognition; HMM; collected speech database; emotion cues; emotional talking environments; gender recognition; neutral talking environments; speaker identification; speaker recognition performance; text-independent speaker identification; databases; speech; speech recognition;
Conference_Title :
2013 1st International Conference on Communications, Signal Processing, and their Applications (ICCSPA)
Conference_Location :
Sharjah
Print_ISBN :
978-1-4673-2820-3
DOI :
10.1109/ICCSPA.2013.6487314