DocumentCode
2212864
Title
Learning to look
Author
Butko, Nicholas J. ; Movellan, Javier R.
Author_Institution
Machine Perception Lab., UC San Diego, San Diego, CA, USA
fYear
2010
fDate
18-21 Aug. 2010
Firstpage
70
Lastpage
75
Abstract
How can autonomous agents with access to only their own sensory-motor experiences learn to look at visual targets? We explore this seemingly simple question, and find that naïve approaches are surprisingly brittle. Digging deeper, we show that learning to look at visual targets contains a deep, rich problem structure relating sensory experience, motor experience, and development. By capturing this problem structure in a generative model, we show how a Bayesian observer should trade off different sources of uncertainty in order to discover how its sensors and actuators relate. We implement our approach on two very different robots and show that both can quickly learn reliable intentional looking behavior without access to anything beyond their own experiences.
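The record contains no code, but as a rough, hypothetical sketch of the kind of inference the abstract describes, the toy Python example below has a simulated agent infer an unknown sensorimotor gain (a made-up scalar relating a motor command to the resulting visual displacement) via a conjugate Gaussian update that weighs sensor noise against motor noise. All names and noise values are illustrative assumptions, not the authors' model.

```python
import numpy as np

# Toy illustration (not the paper's implementation): an agent learns the
# unknown gain g relating a motor command u to the visual displacement
# d = g*u, observed through noisy sensors and executed with noisy motors.
# A Bayesian observer maintains a Gaussian belief N(mu, var) over g.

rng = np.random.default_rng(0)

true_gain = 2.5      # hidden sensorimotor gain (hypothetical)
sensor_sigma = 0.5   # std. dev. of visual measurement noise (assumed)
motor_sigma = 0.2    # std. dev. of motor execution noise (assumed)

mu, var = 0.0, 10.0  # broad Gaussian prior over the gain

for trial in range(20):
    u = rng.uniform(-1.0, 1.0)                     # issued motor command
    u_actual = u + rng.normal(0.0, motor_sigma)    # noisy execution
    d = true_gain * u_actual + rng.normal(0.0, sensor_sigma)  # observed shift

    # Effective observation variance combines sensor noise with the
    # displacement uncertainty induced by motor noise (linearized around
    # the current gain estimate), so both uncertainty sources are traded off.
    obs_var = sensor_sigma**2 + (mu * motor_sigma)**2

    # Conjugate Gaussian update for the linear model d = g*u + noise.
    precision = 1.0 / var + u**2 / obs_var
    mu = (mu / var + u * d / obs_var) / precision
    var = 1.0 / precision

print(f"estimated gain: {mu:.2f} +/- {np.sqrt(var):.2f} (true: {true_gain})")
```

Once the posterior over the gain is tight, the agent can invert the estimate to pick the command that centers a target, which is the "intentional looking" behavior the abstract refers to.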
Keywords
actuators; image sensors; learning (artificial intelligence); mobile robots; robot kinematics; robot vision; autonomous agent; Bayesian observer; naive approaches; problem structure; robot; sensory-motor experience; visual target; Cameras; Pixels; Robot kinematics; Robot vision systems
fLanguage
English
Publisher
IEEE
Conference_Title
2010 IEEE 9th International Conference on Development and Learning (ICDL)
Conference_Location
Ann Arbor, MI
Print_ISBN
978-1-4244-6900-0
Type
conf
DOI
10.1109/DEVLRN.2010.5578862
Filename
5578862