DocumentCode :
149845
Title :
Joint localization and fingerprinting of sound sources for auditory scene analysis
Author :
Kaghaz-Garan, Scott ; Umbarkar, Anurag ; Doboli, Alex
Author_Institution :
Dept. of Electr. & Comput. Eng., Stony Brook Univ., Stony Brook, NY, USA
fYear :
2014
fDate :
16-18 Oct. 2014
Firstpage :
49
Lastpage :
54
Abstract :
In the field of scene understanding, researchers have mainly focused on using video and images to extract the different elements of a scene. The computational and monetary costs associated with such implementations are high. This paper proposes a low-cost system that uses sound-based techniques to jointly perform localization and fingerprinting of sound sources. A network of embedded nodes senses the sound inputs. Phase-based sound localization and Support-Vector Machine classification are used to locate and classify the elements of the scene, respectively. The fusion of all this data presents a complete “picture” of the scene. The proposed concepts are applied to a vehicular-traffic case study. Experiments show that the system achieves a fingerprinting accuracy of up to 97.5%, a localization error of less than 4 degrees, and a scene-prediction accuracy of 100%.
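Illustrative note: the following minimal Python sketch (using numpy and scikit-learn, which are assumptions, not the paper's embedded implementation) shows the two building blocks named in the abstract on synthetic signals: a phase-based delay/bearing estimate between two microphones (here via GCC-PHAT, one common phase-based method; the authors' exact pipeline and features may differ) and an SVM classifier over crude spectral features for fingerprinting. All constants, feature choices, and function names are hypothetical.

import numpy as np
from sklearn.svm import SVC

FS = 16000            # sample rate (Hz), assumed
MIC_DIST = 0.2        # microphone spacing (m), assumed
SPEED_OF_SOUND = 343.0

def gcc_phat_delay(sig_a, sig_b, fs=FS):
    """Estimate the inter-microphone time delay from phase (GCC-PHAT)."""
    n = len(sig_a) + len(sig_b)
    A = np.fft.rfft(sig_a, n=n)
    B = np.fft.rfft(sig_b, n=n)
    cross = A * np.conj(B)
    cross /= np.abs(cross) + 1e-12            # keep phase information only
    corr = np.fft.irfft(cross, n=n)
    corr = np.concatenate((corr[-(n // 2):], corr[: n // 2 + 1]))
    shift = np.argmax(np.abs(corr)) - n // 2  # delay in samples
    return shift / fs

def bearing_from_delay(tau, mic_dist=MIC_DIST):
    """Convert a time delay into a source bearing in degrees."""
    s = np.clip(tau * SPEED_OF_SOUND / mic_dist, -1.0, 1.0)
    return np.degrees(np.arcsin(s))

def spectral_features(sig, n_bands=16):
    """Crude log band-energy features for fingerprinting (illustrative only)."""
    spec = np.abs(np.fft.rfft(sig)) ** 2
    bands = np.array_split(spec, n_bands)
    return np.log(np.array([b.sum() for b in bands]) + 1e-12)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t = np.arange(FS) / FS

    # Synthetic source: a tone arriving 5 samples later at the second microphone.
    src = np.sin(2 * np.pi * 440 * t) + 0.05 * rng.standard_normal(FS)
    mic_a = src
    mic_b = np.roll(src, 5)
    tau = gcc_phat_delay(mic_a, mic_b)
    print("estimated bearing: %.1f degrees" % bearing_from_delay(tau))

    # Toy fingerprinting: two synthetic source classes separated by an SVM.
    X, y = [], []
    for label, freq in enumerate((200.0, 1200.0)):
        for _ in range(20):
            sig = np.sin(2 * np.pi * freq * t) + 0.1 * rng.standard_normal(FS)
            X.append(spectral_features(sig))
            y.append(label)
    clf = SVC(kernel="rbf").fit(X, y)
    print("training accuracy:", clf.score(X, y))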
Keywords :
acoustic signal processing; pattern classification; sensor fusion; support vector machines; traffic engineering computing; auditory scene analysis; data fusion; embedded nodes; phase-based sound localization; scene element classification; sound source fingerprinting; sound source localization; sound-based techniques; support-vector machine classification; vehicular-traffic case study; Accuracy; Feature extraction; Image analysis; Sensors; Support vector machines; Testing; Vehicles;
fLanguage :
English
Publisher :
IEEE
Conference_Title :
Robotic and Sensors Environments (ROSE), 2014 IEEE International Symposium on
Conference_Location :
Timisoara
Print_ISBN :
978-1-4799-4927-4
Type :
conf
DOI :
10.1109/ROSE.2014.6952982
Filename :
6952982