Title :
Watching Unlabeled Video Helps Learn New Human Actions from Very Few Labeled Snapshots
Author :
Chen, Chao-Yeh ; Grauman, Kristen
Author_Institution :
Univ. of Texas at Austin, Austin, TX, USA
Abstract :
We propose an approach to learning action categories from static images that leverages prior observations of generic human motion to augment its training process. Using unlabeled video containing various human activities, the system first learns how body pose tends to change locally in time. Then, given a small number of labeled static images, it uses that model to extrapolate beyond the given exemplars and generate "synthetic" training examples: poses that could link the observed images and/or immediately precede or follow them in time. In this way, we expand the training set without requiring additional manually labeled examples. We explore both example-based and manifold-based methods to implement our idea. Applying our approach to recognize actions in both images and video, we show that it enhances a state-of-the-art technique when very few labeled training examples are available.
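Illustrative_Sketch :
The example-based variant described in the abstract can be illustrated with a short sketch: match each labeled snapshot's pose to its nearest frame in unlabeled video, then adopt the temporally adjacent poses as synthetic examples carrying the same action label. The code below is a minimal illustration under assumed conditions, not the authors' implementation; the function name augment_with_video_poses, the random stand-in descriptors, and the plain Euclidean nearest-neighbor matching are hypothetical simplifications of the paper's actual pose representation and transfer machinery.

import numpy as np
from sklearn.svm import LinearSVC

def augment_with_video_poses(X_labeled, y_labeled, video_poses):
    """For each labeled pose descriptor, find its nearest frame in the
    unlabeled video and add the poses immediately before and after that
    frame as synthetic examples inheriting the query's action label."""
    X_aug, y_aug = list(X_labeled), list(y_labeled)
    for x, y in zip(X_labeled, y_labeled):
        # Nearest unlabeled video frame to the labeled snapshot (Euclidean).
        dists = np.linalg.norm(video_poses - x, axis=1)
        t = int(np.argmin(dists))
        # Temporally adjacent frames suggest how this pose could evolve;
        # treat them as extra exemplars of the same action.
        for t_adj in (t - 1, t + 1):
            if 0 <= t_adj < len(video_poses):
                X_aug.append(video_poses[t_adj])
                y_aug.append(y)
    return np.array(X_aug), np.array(y_aug)

# Toy usage with random stand-ins for real pose descriptors.
rng = np.random.default_rng(0)
video_poses = rng.normal(size=(500, 32))   # unlabeled video frames
X_few = rng.normal(size=(6, 32))           # very few labeled snapshots
y_few = np.array([0, 0, 0, 1, 1, 1])       # two action classes
X_train, y_train = augment_with_video_poses(X_few, y_few, video_poses)
clf = LinearSVC().fit(X_train, y_train)    # train on the expanded set

The manifold-based variant in the paper plays an analogous role, generating in-between poses along a learned pose manifold rather than copying raw neighboring frames.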
Keywords :
computer based training; image motion analysis; video signal processing; human activities; learn action categories; learn new human actions; observed images; static images; training process; very few labeled snapshots; watching unlabeled video; Data models; Image recognition; Labeling; Manifolds; Support vector machines; Testing; Training
Conference_Title :
2013 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
Conference_Location :
Portland, OR, USA
DOI :
10.1109/CVPR.2013.80