DocumentCode
328196
Title
Perception of sound direction by auditory neural network model using pulse transmission-extraction of inter-aural time and level difference
Author
Kuroyanagi, Susumu ; Iwata, Akira
Author_Institution
Dept. of Electr. & Comput. Eng., Nagoya Inst. of Technol., Japan
Volume
1
fYear
1993
fDate
25-29 Oct. 1993
Firstpage
77
Abstract
A novel neural network model for perceiving sound direction is proposed. Our perception model is based on the physiological auditory nervous system in the human brain. The model is divided roughly into three sections: preprocessing of the input signals; transformation of the continuous signals into pulse trains; and feature extraction. The last section is implemented using a pulse neuron model. A computer simulation has demonstrated that the time and level differences between two random signals are successfully extracted by the model.
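The two cues the abstract names, inter-aural time difference (ITD) and inter-aural level difference (ILD), can be illustrated at the signal level. The sketch below is not the authors' pulse-neuron model; it is a minimal conventional estimator (cross-correlation peak for ITD, RMS ratio for ILD) of the same two features, with hypothetical function and parameter names, offered only to make the cues concrete.

```python
import numpy as np

def extract_itd_ild(left, right, fs):
    """Estimate inter-aural time and level differences from two channels.

    Hypothetical helper, not from the paper: ITD via the lag of the
    cross-correlation peak, ILD as an RMS level ratio in dB.
    """
    # Cross-correlation over all lags; the peak index gives the delay
    # of `left` relative to `right` (negative lag: left leads).
    corr = np.correlate(left, right, mode="full")
    lag = int(np.argmax(corr)) - (len(right) - 1)
    itd = lag / fs  # seconds

    # Level difference: positive ILD means the left channel is louder.
    rms_l = np.sqrt(np.mean(left ** 2))
    rms_r = np.sqrt(np.mean(right ** 2))
    ild = 20.0 * np.log10(rms_l / rms_r)  # dB
    return itd, ild

# Synthetic example: the right channel is delayed by 5 samples and
# attenuated by half, as if the source were toward the left ear.
rng = np.random.default_rng(0)
sig = rng.standard_normal(1000)
left = sig
right = 0.5 * np.roll(sig, 5)
itd, ild = extract_itd_ild(left, right, fs=44100)
```

Here the estimator recovers a lag of -5 samples (left leads) and an ILD of about +6 dB (the 0.5 gain on the right channel), the kind of time/level pair the pulse-based model in the paper is reported to extract.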
Keywords
auditory evoked potentials; feature extraction; hearing; neural nets; neurophysiology; physiological models; auditory neural network model; feature extraction; human brain; input signal processing; inter-aural time extraction; perception model; pulse neuron model; pulse trains; pulse transmission; sound direction perception; Biological neural networks; Biomembranes; Data mining; Ear; Feature extraction; Humans; Nervous system; Neural networks; Neurons; Pulse generation
fLanguage
English
Publisher
ieee
Conference_Titel
Proceedings of 1993 International Joint Conference on Neural Networks (IJCNN '93-Nagoya)
Print_ISBN
0-7803-1421-2
Type
conf
DOI
10.1109/IJCNN.1993.713863
Filename
713863