DocumentCode
3672477
Title
Parsing occluded people by flexible compositions
Author
Xianjie Chen;Alan Yuille
Author_Institution
University of California, Los Angeles, CA 90095, United States
fYear
2015
fDate
6/1/2015
Firstpage
3945
Lastpage
3954
Abstract
This paper presents an approach to parsing humans when there is significant occlusion. We model humans using a graphical model which has a tree structure, building on recent work [32, 6], and exploit the connectivity prior that, even in the presence of occlusion, the visible nodes form a connected subtree of the graphical model. We call each connected subtree a flexible composition of object parts. This involves a novel method for learning occlusion cues. During inference we need to search over a mixture of different flexible models. By exploiting part sharing, we show that this inference can be done extremely efficiently, requiring only twice as many computations as searching for the entire object (i.e., not modeling occlusion). We evaluate our model on the standard benchmarked “We Are Family” Stickmen dataset and obtain significant performance improvements over the best alternative algorithms.
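The inference idea in the abstract, scoring every flexible composition (connected subtree) of the part tree while reusing each part's computation across compositions, can be illustrated with a small dynamic program. The sketch below is not the authors' code: the function name best_composition, the part tree, the unary/pairwise score dictionaries, and the occlusion_penalty term are illustrative assumptions, and image locations are abstracted away so each score is a single number rather than a map over positions.

def best_composition(tree, root, unary, pairwise, occlusion_penalty):
    # tree: dict mapping a part to its list of child parts (rooted tree).
    # unary[p]: appearance score for part p (locations abstracted away here).
    # pairwise[(p, c)]: score for the spatial coupling between p and child c.
    # occlusion_penalty[c]: cost of truncating the model at child c, i.e.
    # treating c and its whole subtree as occluded.
    # Returns, for every part p, the best score of a flexible composition
    # whose topmost visible part is p. Each child's value is computed once
    # and shared by its parent, which mirrors the part-sharing argument for
    # keeping the overhead at roughly a constant factor.
    best = {}

    def dp(p):
        score = unary[p]
        for c in tree.get(p, []):
            dp(c)
            # Either keep child c visible (with its own best composition)
            # or cut the model here and pay the occlusion penalty.
            score += max(best[c] + pairwise[(p, c)], -occlusion_penalty[c])
        best[p] = score
        return score

    dp(root)
    return best

# Hypothetical usage with made-up parts and scores:
tree = {"torso": ["head", "upper_arm"], "head": [], "upper_arm": ["lower_arm"], "lower_arm": []}
unary = {"torso": 2.0, "head": 1.5, "upper_arm": 0.4, "lower_arm": -1.0}
pairwise = {("torso", "head"): 0.3, ("torso", "upper_arm"): 0.2, ("upper_arm", "lower_arm"): 0.1}
occlusion_penalty = {"head": 0.5, "upper_arm": 0.5, "lower_arm": 0.2}
scores = best_composition(tree, "torso", unary, pairwise, occlusion_penalty)
best_top_part = max(scores, key=scores.get)  # topmost part of the best flexible composition

In the full model each entry of best would be an array over image positions informed by the learned occlusion cues, but the sharing pattern is the same, which is why searching over all flexible compositions costs only about twice as much as scoring the complete object.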
Keywords
"Graphical models","Standards","Elbow","Computational modeling","Wrist","Inference algorithms","Couplings"
Publisher
IEEE
Conference_Title
2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
ISSN
1063-6919
Type
conf
DOI
10.1109/CVPR.2015.7299020
Filename
7299020