Title :
Recognition of deformable object category and pose
Author :
Yinxiao Li; Chih-Fan Chen; Peter K. Allen
Author_Institution :
Dept. of Comput. Sci., Columbia Univ., New York, NY, USA
Date :
May 31, 2014 - June 7, 2014
Abstract :
We present a novel method for classifying deformable objects, such as clothing, and estimating their poses from a set of depth images. The framework presented here constitutes the recognition stage of a complete pipeline for dexterous manipulation of deformable objects, comprising grasping, recognition, regrasping, placing flat, and folding. We first create an off-line simulation of the deformable objects and capture depth images from different viewpoints as training data. Then, by extracting features and applying sparse coding and dictionary learning, we build a codebook for a set of different poses of a particular deformable object category. The framework contains two layers, yielding a robust system that first classifies deformable objects at the category level and then estimates the current pose from a group of predefined poses of a single deformable object. The system is tested on a variety of similar deformable objects and achieves high accuracy. Knowing the current pose of the garment, we can continue with further tasks such as regrasping and folding.
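The abstract's codebook-based recognition can be illustrated with a minimal sketch: learn a per-category dictionary of normalized feature vectors, sparse-code a query against each dictionary, and pick the category with the smallest reconstruction error. This is an illustrative numpy toy on synthetic "depth-image feature" vectors, not the authors' implementation; the category names, feature dimension, and 1-step matching-pursuit coding are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 16  # assumed feature dimension for this toy example

# hypothetical category centers standing in for features of simulated garments
centers = {"sweater": rng.standard_normal(dim),
           "pants": rng.standard_normal(dim)}

# build a per-category codebook: columns are L2-normalized training features
codebooks = {}
for name, c in centers.items():
    D = (c + 0.1 * rng.standard_normal((20, dim))).T  # dim x 20 noisy samples
    codebooks[name] = D / np.linalg.norm(D, axis=0)

def classify(x, codebooks, k=3):
    """Sparse-code x with the k best-correlated atoms of each codebook
    and return the category with the lowest reconstruction error."""
    best, best_err = None, np.inf
    for name, D in codebooks.items():
        idx = np.argsort(-np.abs(D.T @ x))[:k]  # k most correlated atoms
        Dk = D[:, idx]
        coef, *_ = np.linalg.lstsq(Dk, x, rcond=None)
        err = np.linalg.norm(x - Dk @ coef)
        if err < best_err:
            best, best_err = name, err
    return best

query = centers["pants"] + 0.1 * rng.standard_normal(dim)
print(classify(query, codebooks))  # recovers "pants"
```

The same scheme extends naturally to the paper's second layer: within the winning category, a second set of codebooks (one per predefined pose) would be queried the same way to estimate the current pose.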
Keywords :
dexterous manipulators; feature extraction; learning (artificial intelligence); object recognition; pose estimation; clothing; deformable object category; depth images; dexterous manipulation; dictionary learning; off-line simulation; robust system; sparse coding; Data models; Encoding; Grasping; Robots; Training; Vectors
Conference_Titel :
Robotics and Automation (ICRA), 2014 IEEE International Conference on
Conference_Location :
Hong Kong
DOI :
10.1109/ICRA.2014.6907676