DocumentCode :
1576161
Title :
Learning of audiovisual integration
Author :
Yan, Rujiao ; Rodemann, Tobias ; Wrede, Britta
Author_Institution :
Res. Inst. for Cognition & Robot. (CoR-Lab.), Bielefeld Univ., Bielefeld, Germany
Volume :
2
fYear :
2011
Firstpage :
1
Lastpage :
7
Abstract :
We present a system for learning audiovisual integration based on temporal and spatial coincidence. We also consider the case in which the current sound is related to a visual signal that has not yet been seen. Our learning algorithm is tested in online adaptation of audio-motor maps. Since the audio-motor maps are not reliable at the beginning of the experiment, learning is bootstrapped using temporal coincidence when there is only one auditory and one visual stimulus. Over time, the system can automatically decide whether to use both spatial and temporal coincidence, depending on the quality of the maps and the number of visual sources. We show that this audiovisual integration still works when more than one visual source appears, and that integration performance does not decrease when the related visual source has not yet been spotted. The experiment is carried out on a humanoid robot head.
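The abstract's cue-selection strategy — bootstrap with temporal coincidence alone, then add spatial coincidence once the audio-motor map is reliable — can be sketched as a simple decision rule. This is an illustrative assumption of how such a switch might look; the function name, threshold, and return values are hypothetical and not taken from the paper:

```python
def choose_coincidence_mode(map_quality: float, n_visual_sources: int,
                            quality_threshold: float = 0.7) -> str:
    """Decide which coincidence cues to use for audiovisual matching.

    map_quality: estimated reliability of the audio-motor map, in [0, 1]
    n_visual_sources: number of visual stimuli currently detected
    (threshold value is an illustrative assumption, not from the paper)
    """
    if n_visual_sources == 1 and map_quality < quality_threshold:
        # Bootstrapping phase: spatial estimates are unreliable, so rely on
        # temporal coincidence of the single auditory/visual stimulus pair.
        return "temporal"
    if map_quality >= quality_threshold:
        # Mature map: spatial predictions are trustworthy, so combine both
        # cues to disambiguate among multiple visual sources.
        return "temporal+spatial"
    # Unreliable map with several visual candidates: no safe association yet.
    return "defer"
```

With this rule, early trials with a single visual source use temporal coincidence only, and later trials with a well-adapted map can exploit spatial coincidence even when several visual sources are present.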
Keywords :
audio signal processing; humanoid robots; robot vision; statistical analysis; audio-motor maps; audio-visual integration learning; bootstrapped learning; humanoid robot head; learning algorithm; spatial coincidence; temporal coincidence; visual signal; Robot kinematics; Tracking;
fLanguage :
English
Publisher :
IEEE
Conference_Title :
2011 IEEE International Conference on Development and Learning (ICDL)
Conference_Location :
Frankfurt am Main
ISSN :
2161-9476
Print_ISBN :
978-1-61284-989-8
Type :
conf
DOI :
10.1109/DEVLRN.2011.6037323
Filename :
6037323