DocumentCode
117398
Title
Multimodal human centric object recognition framework for personal robots
Author
Chandarr, Aswin ; Rudinac, Maja ; Jonker, Pieter
Author_Institution
Fac. of Mech. Maritime & Mater. Eng., Delft Univ. of Technol., Delft, Netherlands
fYear
2014
fDate
18-20 Nov. 2014
Firstpage
73
Lastpage
79
Abstract
In this paper we focus on a perception system for cognitive interaction between robots and humans, especially for learning to recognize objects in household environments. We therefore propose a novel three-layered framework for object learning that bridges the gap between the robot's recognition capabilities at the lower neural level and the higher cognitive level of humans, using a weighted fusion of multimodal sources such as chromatic, structural and spatial information. In the first layer we propose grounding the raw sensory information into semantic concepts for each modality. We obtain a semantic color representation by using SLIC super-pixeling followed by a mapping learned from online images using a PLSA model. This results in a probability distribution over basic color names derived from cognitive linguistic studies. To represent structural information, we propose to cluster the ESF features obtained from point cloud data into primitive shape categories. This primitive shape knowledge is learned and expanded from the robot's experience. For spatial information, a metric map from the navigation system, demarcated into landmark locations, is used. All these semantic representations are compliant with a human's description of the environment and are further used in the second layer to generate probabilistic knowledge about the objects using random forest classifiers. In the third layer, we propose a novel weighted fusion of the obtained object probabilities, where the weights are derived from the prior experience of the robot. We evaluate our system in realistic domestic conditions provided in a RoboCup@Home setting.
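To make the third-layer fusion step concrete, below is a minimal sketch (not the authors' code) of a weighted late fusion of per-modality class probabilities, as described in the abstract: each modality classifier (color, shape, location) outputs a probability distribution over object labels, and the distributions are combined with weights reflecting the robot's prior experience. The function name, the accuracy-style weights, and the renormalization rule are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def fuse_modalities(probs_per_modality, weights):
    """Weighted late fusion of per-modality class-probability vectors.

    probs_per_modality: dict {modality: np.ndarray of shape (n_classes,)}
    weights:            dict {modality: float}, e.g. past per-modality accuracy
    Returns a normalized probability vector over the object classes.
    """
    modalities = list(probs_per_modality)
    w = np.array([weights[m] for m in modalities], dtype=float)
    w = w / w.sum()                                             # normalize modality weights
    P = np.stack([probs_per_modality[m] for m in modalities])   # (n_modalities, n_classes)
    fused = w @ P                                               # weighted sum of distributions
    return fused / fused.sum()                                  # renormalize to a distribution

if __name__ == "__main__":
    # Hypothetical outputs of second-layer classifiers for three object classes,
    # plus experience-based modality weights (all values are made up for illustration).
    probs = {
        "color":    np.array([0.6, 0.3, 0.1]),
        "shape":    np.array([0.5, 0.4, 0.1]),
        "location": np.array([0.2, 0.2, 0.6]),
    }
    weights = {"color": 0.8, "shape": 0.7, "location": 0.4}
    print(fuse_modalities(probs, weights))
```

In this reading, a modality that has been more reliable in the robot's past experience contributes proportionally more to the fused object probability.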
Keywords
cognition; human-robot interaction; humanoid robots; image classification; image colour analysis; image representation; mobile robots; object recognition; path planning; robot vision; statistical distributions; ESF features; PLSA model; Pointcloud data; Robocup@Home setting; SLIC super-pixeling; chromatic information; household environments; human cognitive level; human-robot cognitive interaction; landmark locations; mapping; metric map; multimodal human centric object recognition framework; multimodal sources; navigation system; neural level; object probabilities; object recognition learning; online images; perception system; personal robots; primitive shape categories; primitive shape knowledge; probabilistic knowledge; probability distribution; random forest classifiers; robot recognition capabilities; semantic color representation; semantic representations; sensory information; spatial information; structure information; weighted fusion; Color; Image color analysis; Pragmatics; Robot sensing systems; Semantics; Shape;
fLanguage
English
Publisher
ieee
Conference_Titel
2014 14th IEEE-RAS International Conference on Humanoid Robots (Humanoids)
Conference_Location
Madrid
Type
conf
DOI
10.1109/HUMANOIDS.2014.7041340
Filename
7041340
Link To Document