DocumentCode :
117534
Title :
Learning to disambiguate object hypotheses through self-exploration
Author :
Björkman, Mårten ; Bekiroğlu, Yasemin
Author_Institution :
Centre for Autonomous Systems and the Computer Vision and Active Perception Lab, CSC, KTH Royal Institute of Technology, Stockholm, Sweden
fYear :
2014
fDate :
18-20 Nov. 2014
Firstpage :
560
Lastpage :
565
Abstract :
We present a probabilistic learning framework for forming object hypotheses through interaction with the environment. A robot learns to manipulate objects through pushing actions in order to identify how many objects are present in the scene. We use a segmentation system that initializes object hypotheses from RGBD data and adopt a reinforcement learning approach to learn the relations between pushing actions and their effects on object segmentations. The trained models are then used to generate actions that minimize the number of pushes on object groups, pushing until either an object separation event is observed or it is established that only a single object is being acted on. Baseline experiments show that a reinforcement-learning-based policy for action selection results in fewer pushes than selecting pushing actions at random.
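Illustration (not from the paper): the abstract describes a push-until-disambiguated loop in which a learned model relates candidate pushes to their expected effect on the segmentation, and pushing stops once a separation event is observed or the hypothesis is accepted as a single object. The sketch below, in Python with hypothetical names (predict_split_probability, execute_push) and a toy heuristic in place of the authors' trained model, shows one plausible shape of such a policy-driven loop.

import random

def predict_split_probability(segment, push):
    # Hypothetical stand-in for the learned action-effect model described in
    # the abstract: estimates how likely this push is to separate the current
    # object hypothesis. Here it is only a toy heuristic on the push offset.
    return min(1.0, 0.2 + 0.6 * push["offset"])

def choose_push(segment, candidate_pushes, epsilon=0.1):
    # Epsilon-greedy action selection: usually take the push predicted to be
    # most informative, occasionally explore a random one.
    if random.random() < epsilon:
        return random.choice(candidate_pushes)
    return max(candidate_pushes,
               key=lambda p: predict_split_probability(segment, p))

def disambiguate(segment, candidate_pushes, execute_push, max_pushes=10):
    # Push until a separation event is observed or the push budget is spent,
    # in which case the hypothesis is treated as a single object.
    for n in range(1, max_pushes + 1):
        push = choose_push(segment, candidate_pushes)
        if execute_push(segment, push):  # True if re-segmentation shows a split
            return "multiple objects", n
    return "single object", max_pushes

if __name__ == "__main__":
    pushes = [{"offset": o / 10.0} for o in range(11)]
    # Dummy executor: pretend a split occurs with the predicted probability.
    simulate = lambda seg, p: random.random() < predict_split_probability(seg, p)
    print(disambiguate({"id": 0}, pushes, simulate))

In the paper's setting the executor corresponds to the robot performing the push and re-running segmentation on new RGBD data; the scoring function corresponds to the trained action-effect model rather than a hand-written heuristic.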
Keywords :
image segmentation; learning (artificial intelligence); probability; robot vision; RGBD data; action selection; object hypothesis disambiguation; object manipulation; object segmentation system; object separation events; probabilistic learning framework; pushing actions; reinforcement approach; reinforcement learning; robot; segmentation system; self-exploration; Gaussian processes; Image segmentation; Learning (artificial intelligence); Robot sensing systems; Shape; Three-dimensional displays;
fLanguage :
English
Publisher :
ieee
Conference_Title :
2014 14th IEEE-RAS International Conference on Humanoid Robots (Humanoids)
Conference_Location :
Madrid
Type :
conf
DOI :
10.1109/HUMANOIDS.2014.7041418
Filename :
7041418