DocumentCode :
2912829
Title :
Learning effective human pose estimation from inaccurate annotation
Author :
Johnson, Sam ; Everingham, Mark
Author_Institution :
Sch. of Comput., Univ. of Leeds, Leeds, UK
fYear :
2011
fDate :
20-25 June 2011
Firstpage :
1465
Lastpage :
1472
Abstract :
The task of 2-D articulated human pose estimation in natural images is extremely challenging due to the high level of variation in human appearance. These variations arise from differences in clothing, anatomy, and imaging conditions, and from the large number of poses a human body can take. Recent work has shown state-of-the-art results by partitioning the pose space and using strong nonlinear classifiers such that the pose dependence and multi-modal nature of body part appearance can be captured. We propose to extend these methods to handle much larger quantities of training data, an order of magnitude larger than current datasets, and show how to utilize Amazon Mechanical Turk and a latent annotation update scheme to achieve high quality annotations at low cost. We demonstrate a significant increase in pose estimation accuracy, while simultaneously reducing computational expense by a factor of 10, and contribute a dataset of 10,000 highly articulated poses.
Keywords :
pose estimation; 2D articulated human pose estimation; Amazon Mechanical Turk; body part appearance; human appearance; inaccurate annotation; learning effective human pose estimation; nonlinear classifiers; pose dependence; pose space; Estimation; Head; Humans; Image color analysis; Joints; Training; Training data
fLanguage :
English
Publisher :
IEEE
Conference_Titel :
Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on
Conference_Location :
Providence, RI, USA
ISSN :
1063-6919
Print_ISBN :
978-1-4577-0394-2
Type :
conf
DOI :
10.1109/CVPR.2011.5995318
Filename :
5995318