DocumentCode
313633
Title
Modelling the perceptual separation of concurrent vowels with a network of neural oscillators
Author
Brown, Guy J.; Wang, DeLiang
Author_Institution
Dept. of Comput. Sci., Sheffield Univ., UK
Volume
1
fYear
1997
fDate
9-12 Jun 1997
Firstpage
569
Abstract
The ability of listeners to identify two simultaneously presented vowels is improved by introducing a difference in fundamental frequency between the vowels. We propose an explanation for this phenomenon in the form of a computational model of concurrent sound segregation, which is motivated by neurophysiological evidence of oscillatory firing activity in the higher auditory system. In the model, the perceptual grouping of auditory peripheral channels is coded by synchronised oscillations in a neural oscillator network. Computer simulations confirm that the model qualitatively matches the double vowel identification performance of human listeners.
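The abstract's core mechanism — peripheral channels belonging to the same sound are marked by synchronised oscillations, while channels of different sounds oscillate out of step — can be illustrated with a minimal sketch. The paper itself builds on relaxation oscillators from the neural-oscillator literature; the sketch below substitutes simpler Kuramoto-style phase oscillators, and the group labels, coupling constant, and frequencies are illustrative assumptions, not details from the paper.

```python
import numpy as np

def simulate(groups, steps=3000, dt=0.01, coupling=4.0, seed=0):
    """Euler-integrate phase oscillators, one per peripheral channel.
    Channels with the same group label (e.g. assigned to the same F0)
    are mutually coupled and are driven toward a common phase.
    NOTE: an illustrative Kuramoto-style stand-in, not the paper's model."""
    rng = np.random.default_rng(seed)
    groups = np.asarray(groups)
    n = len(groups)
    # Couple only channels that share a group label (same vowel/F0).
    K = coupling * (groups[:, None] == groups[None, :]).astype(float)
    np.fill_diagonal(K, 0.0)
    # Distinct natural frequencies per group keep the groups desynchronised.
    omega = 1.0 + 0.5 * groups
    theta = rng.uniform(0.0, 2.0 * np.pi, n)
    for _ in range(steps):
        # dtheta_i/dt = omega_i + (1/n) * sum_j K_ij * sin(theta_j - theta_i)
        theta += dt * (omega + (K * np.sin(theta[None, :] - theta[:, None])).sum(axis=1) / n)
    return theta % (2.0 * np.pi)

def phase_spread(theta):
    """Circular spread of a set of phases: 0 means perfect synchrony."""
    return 1.0 - np.abs(np.exp(1j * theta).mean())
```

Running `simulate([0] * 5 + [1] * 5)` from random initial phases drives each five-channel group into near-perfect internal synchrony (`phase_spread` close to 0 per group), while the frequency offset between groups keeps the two vowels' channels oscillating out of step — the grouping code the abstract describes.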
Keywords
acoustic signal processing; auditory evoked potentials; biology computing; digital simulation; neural nets; neurophysiology; physiological models; speech recognition; auditory peripheral channels; concurrent sound segregation; concurrent vowels; double vowel identification performance; fundamental frequency; higher auditory system; human listeners; neural oscillators; oscillatory firing activity; perceptual grouping; perceptual separation; synchronised oscillations; Auditory system; Band pass filters; Channel bank filters; Ear; Frequency synchronization; Hair; Image analysis; Oscillators; Signal processing; Speech recognition;
fLanguage
English
Publisher
IEEE
Conference_Titel
International Conference on Neural Networks, 1997
Conference_Location
Houston, TX
Print_ISBN
0-7803-4122-8
Type
conf
DOI
10.1109/ICNN.1997.611732
Filename
611732