• DocumentCode
    138356
  • Title
    What's in the container? Classifying object contents from vision and touch
  • Author
    Guler, Puren ; Bekiroglu, Yasemin ; Gratal, Xavi ; Pauwels, Karl ; Kragic, Danica
  • Author_Institution
    Center for Autonomous Systems, KTH Royal Institute of Technology, Stockholm, Sweden
  • fYear
    2014
  • fDate
    14-18 Sept. 2014
  • Firstpage
    3961
  • Lastpage
    3968
  • Abstract
    Robots operating in household environments need to interact with food containers of different types. Whether a container is filled with milk, juice, yogurt or coffee may affect the way robots grasp and manipulate it. In this paper, we concentrate on the problem of identifying what kind of content is in a container based on tactile and/or visual feedback in combination with grasping. In particular, we investigate the benefits of using unimodal (visual or tactile) or bimodal (visual-tactile) sensory data for this purpose. We direct our study toward cardboard containers that hold liquid or solid content or are empty. The motivation for using grasping rather than shaking is that we want to investigate the content before applying manipulation actions to a container. Our results show that we achieve comparable classification rates with unimodal data and that the visual and tactile data are complementary.
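    The bimodal (visual-tactile) fusion idea in the abstract can be illustrated with a toy example: extract a feature vector per modality, concatenate them for the bimodal case, and classify with a nearest-centroid rule. Everything below (the synthetic features, the classifier, the class set) is an assumed sketch for illustration, not the authors' actual pipeline.

    ```python
    import random

    random.seed(0)

    # Hypothetical labels (matching the abstract's content classes).
    CLASSES = ["empty", "liquid", "solid"]

    def make_sample(label, dim=4, noise=0.3):
        """Synthetic feature vector: class-dependent mean plus Gaussian noise.
        Stands in for real visual or tactile features (an assumption)."""
        base = float(CLASSES.index(label))
        return [base + random.gauss(0.0, noise) for _ in range(dim)]

    def centroid(vectors):
        """Per-dimension mean of a list of equal-length vectors."""
        dim = len(vectors[0])
        return [sum(v[i] for v in vectors) / len(vectors) for i in range(dim)]

    def nearest_centroid(sample, centroids):
        """Classify by squared Euclidean distance to each class centroid."""
        def dist2(a, b):
            return sum((x - y) ** 2 for x, y in zip(a, b))
        return min(centroids, key=lambda c: dist2(sample, centroids[c]))

    def fuse(visual, tactile):
        """Bimodal fusion by simple feature concatenation."""
        return visual + tactile

    # "Training": 20 labelled grasps per class, each with a visual and a
    # tactile feature vector; build per-class centroids for each setting.
    train = {c: [(make_sample(c), make_sample(c)) for _ in range(20)]
             for c in CLASSES}
    cent_vis = {c: centroid([v for v, _ in train[c]]) for c in CLASSES}
    cent_bi = {c: centroid([fuse(v, t) for v, t in train[c]]) for c in CLASSES}

    # "Testing": one unseen grasp per class, unimodal vs bimodal.
    correct_vis = correct_bi = 0
    for c in CLASSES:
        v, t = make_sample(c), make_sample(c)
        correct_vis += nearest_centroid(v, cent_vis) == c
        correct_bi += nearest_centroid(fuse(v, t), cent_bi) == c

    print("unimodal (visual) correct:", correct_vis, "of", len(CLASSES))
    print("bimodal correct:", correct_bi, "of", len(CLASSES))
    ```

    Concatenation is the simplest fusion strategy; it lets the classifier weigh both modalities jointly, which is one way complementary visual and tactile cues can be exploited.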
  • Keywords
    image classification; manipulators; robot vision; bimodal sensory data; cardboard containers; object content classification; robot grasping; tactile feedback; unimodal sensory data; visual feedback; Containers; Grasping; Robot sensing systems; Visualization;
  • fLanguage
    English
  • Publisher
    IEEE
  • Conference_Titel
    2014 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2014)
  • Conference_Location
    Chicago, IL
  • Type
    conf
  • DOI
    10.1109/IROS.2014.6943119
  • Filename
    6943119