• DocumentCode
    3672089
  • Title
    Delving into egocentric actions
  • Author
    Yin Li; Zhefan Ye; James M. Rehg
  • Author_Institution
    School of Interactive Computing, Georgia Institute of Technology, USA
  • fYear
    2015
  • fDate
6/1/2015
  • Firstpage
    287
  • Lastpage
    295
  • Abstract
We address the challenging problem of recognizing the camera wearer's actions from videos captured by an egocentric camera. Egocentric videos encode a rich set of signals regarding the camera wearer, including head movement, hand pose and gaze information. We propose to utilize these mid-level egocentric cues for egocentric action recognition. We present a novel set of egocentric features and show how they can be combined with motion and object features. The result is a compact representation with superior performance. In addition, we provide the first systematic evaluation of motion, object and egocentric cues in egocentric action recognition. Our benchmark leads to several surprising findings. These findings suggest best practices for egocentric action recognition, with a significant performance boost over all previous state-of-the-art methods on three publicly available datasets.
  • Keywords
    "Videos","Cameras","Trajectory","Benchmark testing","Head","Visualization","Feature extraction"
  • Publisher
IEEE
  • Conference_Title
    2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
  • Electronic_ISSN
    1063-6919
  • Type
    conf
  • DOI
    10.1109/CVPR.2015.7298625
  • Filename
    7298625