Title :
Shape Sparse Representation for Joint Object Classification and Segmentation
Author :
Fei Chen; Huimin Yu; Rose Hu
Author_Institution :
Dept. of Inf. Sci. & Electron. Eng., Zhejiang Univ., Hangzhou, China
Abstract :
In this paper, a novel variational model based on prior shapes for simultaneous object classification and segmentation is proposed. Given a set of training shapes from multiple object classes, a sparse linear combination of training shapes in a low-dimensional representation is used to regularize the target shape in variational image segmentation. By minimizing the proposed variational functional, the model automatically selects, via sparse recovery, the reference shapes that best represent the object and accurately segments the image, taking into account both the image information and the shape priors. For applications with an appropriately sized training set, the proposed model allows the training set to be artificially enlarged with a number of transformed shapes to achieve transformation invariance; the enlarged model remains jointly convex and can handle the case of overlapping or multiple objects present in an image within a small range of transformations. Numerical experiments show promising results and demonstrate the potential of the method for object classification and segmentation.
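To make the core idea concrete, below is a minimal sketch (not the authors' implementation) of the sparse-coding step the abstract describes: representing a vectorized target shape as a sparse linear combination of training shapes by solving an l1-regularized least-squares problem with ISTA. The dictionary D, the shape vectorization, the regularization weight lam, and all function names are illustrative assumptions; the paper couples this step with a variational segmentation functional and a low-dimensional shape representation, both omitted here.

import numpy as np

def soft_threshold(x, t):
    # Proximal operator of the l1 norm: shrink each entry toward zero.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def sparse_shape_code(D, y, lam=0.5, n_iter=500):
    # ISTA for min_a 0.5*||D a - y||^2 + lam*||a||_1.
    # D: (n_pixels, n_shapes), columns are vectorized training shapes.
    # y: (n_pixels,), the vectorized target shape to be represented.
    L = np.linalg.norm(D, 2) ** 2      # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - y)       # gradient of the quadratic data term
        a = soft_threshold(a - grad / L, lam / L)
    return a

# Toy usage: 15 training shapes on a 32x32 grid; the target is a noisy
# copy of template 3, so sparse recovery should select (mostly) column 3.
rng = np.random.default_rng(0)
D = rng.standard_normal((32 * 32, 15))
y = D[:, 3] + 0.01 * rng.standard_normal(32 * 32)
a = sparse_shape_code(D, y)
print("selected templates:", np.nonzero(np.abs(a) > 1e-3)[0])

In the full model, classification can then be read off from which training shapes receive nonzero coefficients, for example by assigning the class whose templates carry the largest coefficient energy.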
Keywords :
image classification; image representation; image segmentation; artificial enlargement; multiple object classes; numerical experiments; object classification; object segmentation; shape sparse representation; sparse linear combination; variational image segmentation; Joints; Level set; Probabilistic logic; Shape; Solid modeling; Training; shape priors; sparse representation; variational formulations; Algorithms; Artificial Intelligence; Image Enhancement; Image Interpretation, Computer-Assisted; Imaging, Three-Dimensional; Pattern Recognition, Automated; Reproducibility of Results; Sensitivity and Specificity; Subtraction Technique
Journal_Title :
IEEE Transactions on Image Processing
DOI :
10.1109/TIP.2012.2226044