Title :
Patch to the Future: Unsupervised Visual Prediction
Author :
Walker, Jacob ; Gupta, Abhinav ; Hebert, Martial
Author_Institution :
Robot. Inst., Carnegie Mellon Univ., Pittsburgh, PA, USA
Abstract :
In this paper we present a conceptually simple but surprisingly powerful method for visual prediction that combines the effectiveness of mid-level visual elements with temporal modeling. Our framework can be learned in a completely unsupervised manner from a large collection of videos. More importantly, because our approach builds the prediction framework on these mid-level elements, we can predict not only the possible motion in the scene but also the visual appearances, that is, how appearances are going to change with time. This yields a visual "hallucination" of probable events on top of the scene. We show that our method is able to accurately predict and visualize simple future events, and that our approach is comparable to supervised methods for event prediction.
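To make the idea concrete, the sketch below illustrates one way such a model could be structured: mid-level elements are reduced to cluster IDs of patch descriptors, and the temporal model to a count-based table of patch displacements estimated from unlabeled video. This is a minimal illustrative sketch, not the authors' implementation; the class name, displacement grid, and all parameters are assumptions.

```python
import numpy as np

# Hypothetical sketch of the core idea: mid-level visual elements (here just
# cluster IDs of patch descriptors) paired with a temporal transition model
# estimated from unlabeled video. Names and parameters are illustrative.

class PatchTransitionModel:
    def __init__(self, n_elements, displacements):
        # displacements: candidate (dx, dy) motions a patch may undergo
        self.displacements = list(displacements)
        # Count table with a uniform (Laplace) prior so unseen
        # (element, displacement) pairs still get nonzero probability.
        self.counts = np.ones((n_elements, len(self.displacements)))

    def update(self, element_id, observed_dxdy):
        # Accumulate, over the video collection, how often each element
        # moved by each candidate displacement between consecutive frames.
        idx = self.displacements.index(observed_dxdy)
        self.counts[element_id, idx] += 1

    def predict(self, element_id):
        # Distribution over future displacements for a detected element.
        row = self.counts[element_id]
        return row / row.sum()


if __name__ == "__main__":
    grid = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)]
    model = PatchTransitionModel(n_elements=50, displacements=grid)
    # Pretend element 7 (e.g. a "car" patch cluster) was tracked moving right.
    for _ in range(20):
        model.update(element_id=7, observed_dxdy=(1, 0))
    print(model.predict(7))  # probability mass concentrates on rightward motion
```

At test time, a sketch like this would sample likely displacements for each detected element and paste its appearance at the predicted locations, yielding the "hallucinated" future frames described in the abstract.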
Keywords :
data visualisation; image motion analysis; learning (artificial intelligence); video signal processing; future event prediction; future event visualization; learning; mid-level visual elements; motion prediction; temporal modeling; unsupervised visual prediction; video collection; visual appearance prediction; visual hallucination; Feature extraction; Prediction algorithms; Predictive models; Tracking; Training data; Videos; Visualization; Activity Forecasting; Prediction
Conference_Titel :
Computer Vision and Pattern Recognition (CVPR), 2014 IEEE Conference on
Conference_Location :
Columbus, OH, USA
DOI :
10.1109/CVPR.2014.416