DocumentCode :
254710
Title :
Temporally-Dependent Dirichlet Process Mixtures for Egocentric Video Segmentation
Author :
Barker, Joseph W. ; Davis, James W.
Author_Institution :
Dept. of Comput. Sci. & Eng., Ohio State Univ., Columbus, OH, USA
fYear :
2014
fDate :
23-28 June 2014
Firstpage :
571
Lastpage :
578
Abstract :
In this paper, we present a novel approach for segmenting video into large regions of generally similar activity. Building on the Dirichlet Process Multinomial Mixture model, we introduce temporal dependency into the inference algorithm, allowing our method to automatically produce long, highly salient segments while ignoring small, inconsequential interruptions. We evaluate our algorithm against other topic models on both synthetic datasets and real-world video, and additionally demonstrate its applicability to image segmentation. Results show that our method outperforms related methods in both accuracy and noise removal.
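The abstract does not give the inference details, but the core idea — a Dirichlet process mixture whose cluster prior is biased toward the previous frame's cluster — can be sketched as a sequential MAP assignment under a "sticky" Chinese Restaurant Process. This is an illustrative approximation, not the paper's algorithm: the frame representation (word-count histograms), the stickiness weight `kappa`, and the Dirichlet smoothing `beta` are all assumptions.

```python
import math

def segment(frames, alpha=1.0, kappa=5.0, beta=0.5):
    """Sequentially assign frames (word-count dicts) to clusters.

    Prior: Chinese Restaurant Process (cluster size, or alpha for a new
    cluster) plus a 'sticky' bonus kappa for the previous frame's cluster,
    giving the temporal dependency. Likelihood: multinomial with
    Dirichlet(beta) smoothing. Greedy MAP assignment, one pass.
    (Hypothetical sketch -- not the authors' inference algorithm.)
    """
    vocab = {w for f in frames for w in f}
    V = len(vocab)
    clusters = []   # each: {'n': frames, 'counts': word counts, 'total': tokens}
    labels = []
    prev = None
    for f in frames:
        scores = []
        for k, c in enumerate(clusters):
            prior = c['n'] + (kappa if k == prev else 0.0)
            ll = sum(cnt * math.log((c['counts'].get(w, 0) + beta)
                                    / (c['total'] + beta * V))
                     for w, cnt in f.items())
            scores.append(math.log(prior) + ll)
        # option of opening a new cluster (uniform predictive likelihood)
        scores.append(math.log(alpha)
                      + sum(cnt * math.log(1.0 / V) for cnt in f.values()))
        k = max(range(len(scores)), key=scores.__getitem__)
        if k == len(clusters):
            clusters.append({'n': 0, 'counts': {}, 'total': 0})
        c = clusters[k]
        c['n'] += 1
        for w, cnt in f.items():
            c['counts'][w] = c['counts'].get(w, 0) + cnt
            c['total'] += cnt
        labels.append(k)
        prev = k
    return labels
```

With a run of similar frames briefly interrupted by a dissimilar one, the sticky prior pulls the sequence back to the dominant cluster after the interruption, yielding the long segments the abstract describes.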
Keywords :
image denoising; image segmentation; inference mechanisms; mixture models; video signal processing; egocentric video segmentation; image segmentation; inference algorithm; noise removal; temporal dependency; temporally-dependent Dirichlet process multinomial mixture model; topic models; Clustering algorithms; Hidden Markov models; Histograms; Image segmentation; Inference algorithms; Noise; Video sequences; Dirichlet Process; Video Segmentation
fLanguage :
English
Publisher :
ieee
Conference_Title :
Computer Vision and Pattern Recognition Workshops (CVPRW), 2014 IEEE Conference on
Conference_Location :
Columbus, OH
Type :
conf
DOI :
10.1109/CVPRW.2014.88
Filename :
6910037