DocumentCode :
3748954
Title :
Unsupervised Extraction of Video Highlights via Robust Recurrent Auto-Encoders
Author :
Huan Yang;Baoyuan Wang;Stephen Lin;David Wipf;Minyi Guo;Baining Guo
Author_Institution :
Shanghai Jiao Tong Univ., Shanghai, China
fYear :
2015
Firstpage :
4633
Lastpage :
4641
Abstract :
With the growing popularity of short-form video sharing platforms such as Instagram and Vine, there has been an increasing need for techniques that automatically extract highlights from video. Whereas prior works have approached this problem with heuristic rules or supervised learning, we present an unsupervised learning approach that takes advantage of the abundance of user-edited videos on social media websites such as YouTube. Based on the idea that the most significant sub-events within a video class are commonly present among edited videos while less interesting ones appear less frequently, we identify the significant sub-events via a robust recurrent auto-encoder trained on a collection of user-edited videos queried for each particular class of interest. The auto-encoder is trained using a proposed shrinking exponential loss function that makes it robust to noise in the web-crawled training data, and is configured with bidirectional long short-term memory (LSTM) [5] cells to better model the temporal structure of highlight segments. Unlike supervised techniques, our method can infer highlights using only a set of downloaded edited videos, without also needing their pre-edited counterparts, which are rarely available online. Extensive experiments indicate the promise of our proposed solution in this challenging unsupervised setting.
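The following is a minimal sketch (not the authors' code) of the kind of model the abstract describes: a bidirectional-LSTM auto-encoder over per-segment video features, trained with a saturating, exponentially weighted reconstruction loss so that noisy web-crawled clips contribute a bounded penalty. The layer sizes, feature dimension, and the exact loss form are assumptions for illustration; the paper's shrinking exponential loss is not specified in the abstract.

```python
# Hypothetical sketch of a robust bidirectional-LSTM auto-encoder
# for unsupervised highlight modeling (all hyperparameters assumed).
import torch
import torch.nn as nn


class BiLSTMAutoEncoder(nn.Module):
    def __init__(self, feat_dim=512, hidden_dim=256):
        super().__init__()
        # Bidirectional encoder over a sequence of per-segment features.
        self.encoder = nn.LSTM(feat_dim, hidden_dim, batch_first=True,
                               bidirectional=True)
        # Decoder reconstructs the input features from the encoding.
        self.decoder = nn.LSTM(2 * hidden_dim, hidden_dim, batch_first=True,
                               bidirectional=True)
        self.project = nn.Linear(2 * hidden_dim, feat_dim)

    def forward(self, x):                        # x: (batch, time, feat_dim)
        enc, _ = self.encoder(x)
        dec, _ = self.decoder(enc)
        return self.project(dec)                 # reconstructed features


def robust_loss(recon, target, sigma=1.0):
    """Down-weight poorly reconstructed (likely noisy) training clips.

    An assumed robust loss: per-clip squared errors are passed through a
    saturating exponential so outliers add only a bounded penalty,
    mimicking the noise-tolerance goal of the shrinking exponential loss.
    """
    err = ((recon - target) ** 2).mean(dim=(1, 2))   # per-clip error
    return (1.0 - torch.exp(-err / sigma)).mean()


# Usage: at test time, segments with low reconstruction error under the
# trained model would be taken as highlight candidates.
model = BiLSTMAutoEncoder()
clips = torch.randn(8, 20, 512)                  # 8 clips, 20 time steps
loss = robust_loss(model(clips), clips)
loss.backward()
```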
Keywords :
"Training","Feature extraction","Supervised learning","Robustness","Training data","Pipelines","Data models"
Publisher :
ieee
Conference_Titel :
2015 IEEE International Conference on Computer Vision (ICCV)
Electronic_ISSN :
2380-7504
Type :
conf
DOI :
10.1109/ICCV.2015.526
Filename :
7410883