DocumentCode :
2954347
Title :
Understanding egocentric activities
Author :
Fathi, Alireza ; Farhadi, Ali ; Rehg, James M.
Author_Institution :
Coll. of Comput., Georgia Inst. of Technol., Atlanta, GA, USA
fYear :
2011
fDate :
6-13 Nov. 2011
Firstpage :
407
Lastpage :
414
Abstract :
We present a method to analyze daily activities, such as meal preparation, using video from an egocentric camera. Our method performs inference about activities, actions, hands, and objects. Daily activities are a challenging domain for activity recognition and are well suited to an egocentric approach. In contrast to previous activity recognition methods, our approach does not require pre-trained detectors for objects and hands. Instead, we demonstrate the ability to learn a hierarchical model of an activity by exploiting the consistent appearance of objects, hands, and actions that results from the egocentric context. We show that joint modeling of activities, actions, and objects leads to superior performance in comparison to the case where they are considered independently. We introduce a novel representation of actions based on object-hand interactions and experimentally demonstrate its superior performance in comparison to standard activity representations such as bag of words.
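Note: the abstract contrasts the proposed object-hand interaction representation with a standard bag-of-words baseline. Below is a minimal sketch of that baseline, assuming local descriptors have already been extracted from the video and a visual codebook learned offline (e.g., via k-means); the array shapes, names, and sizes are illustrative assumptions, not details from the paper.

import numpy as np

def bag_of_words_histogram(descriptors, codebook):
    # Squared distance from each descriptor to each codeword
    # (assumed shapes: descriptors (n, d), codebook (k, d)).
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    words = d2.argmin(axis=1)  # nearest codeword per descriptor
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / max(hist.sum(), 1.0)  # L1-normalized word histogram

# Toy usage with hypothetical features: 200 descriptors, 50-word codebook.
rng = np.random.default_rng(0)
feats = rng.normal(size=(200, 64))
codebook = rng.normal(size=(50, 64))
print(bag_of_words_histogram(feats, codebook).shape)  # -> (50,)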
Keywords :
gesture recognition; inference mechanisms; object detection; object recognition; activity recognition methods; bag of words; consistent appearance; daily activity; egocentric camera; hand detection; inference; joint modeling; meal preparation; object detection; object-hand interactions; pre-trained detectors; standard activity representations; superior performance; understanding egocentric activity; Principal component analysis;
fLanguage :
English
Publisher :
IEEE
Conference_Titel :
Computer Vision (ICCV), 2011 IEEE International Conference on
Conference_Location :
Barcelona
ISSN :
1550-5499
Print_ISBN :
978-1-4577-1101-5
Type :
conf
DOI :
10.1109/ICCV.2011.6126269
Filename :
6126269