DocumentCode
3748616
Title
Video Matting via Sparse and Low-Rank Representation
Author
Dongqing Zou;Xiaowu Chen;Guangying Cao;Xiaogang Wang
Author_Institution
State Key Lab. of Virtual Reality Technol. & Systems, Beihang Univ., Beijing, China
fYear
2015
Firstpage
1564
Lastpage
1572
Abstract
We introduce a novel method for video matting via sparse and low-rank representation. Previous matting methods [10, 9] introduced a nonlocal prior to estimate the alpha matte and achieved impressive results on some data. However, on one hand, searching too few or too many samples may miss good samples or introduce noise; on the other hand, it is difficult to construct consistent nonlocal structures for pixels with similar features, yielding spatially and temporally inconsistent video mattes. In this paper, we propose a novel video matting method that produces spatially and temporally consistent matting results. To this end, a sparse and low-rank representation model is introduced to pursue consistent nonlocal structures for pixels with similar features. The sparse representation adaptively selects the best samples and accurately constructs the nonlocal structures for all pixels, while the low-rank representation globally ensures consistent nonlocal structures for pixels with similar features. The two representations are combined to generate consistent video mattes. Experimental results show that our method achieves high-quality results on a variety of challenging examples featuring illumination changes, feature ambiguity, topology changes, transparency variation, dis-occlusion, fast motion and motion blur.
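The abstract pairs a sparse (l1) term, which selects a few good foreground/background samples per pixel, with a low-rank (nuclear-norm) term, which encourages pixels with similar features to share nonlocal structure. The snippet below is a minimal, hypothetical numpy sketch of those two building blocks only: ISTA for sparse coding against a sample dictionary, and singular-value thresholding as the proximal step for the nuclear norm. It is not the authors' implementation; the dictionary D, feature matrix X, and all parameter values are assumptions for illustration.

import numpy as np

def soft_threshold(x, tau):
    """Elementwise soft-thresholding: proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def sparse_codes_ista(D, X, lam=0.1, n_iter=200):
    """Approximately solve min_C 0.5*||X - D C||_F^2 + lam*||C||_1 with ISTA.
    D: (d, m) dictionary of sampled pixel features (one sample per column).
    X: (d, n) features of the pixels to be coded.
    Returns C: (m, n) sparse coefficients; each column picks a few samples."""
    L = np.linalg.norm(D, 2) ** 2 + 1e-8        # Lipschitz constant of the gradient
    C = np.zeros((D.shape[1], X.shape[1]))
    for _ in range(n_iter):
        grad = D.T @ (D @ C - X)                # gradient of the quadratic term
        C = soft_threshold(C - grad / L, lam / L)
    return C

def singular_value_threshold(C, tau):
    """Proximal operator of tau*||C||_*: shrink singular values toward zero,
    pushing the coefficient matrix toward low rank (globally shared structure)."""
    U, s, Vt = np.linalg.svd(C, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    D = rng.standard_normal((8, 50))   # toy dictionary: 50 sampled pixels, 8-D features
    X = rng.standard_normal((8, 30))   # toy unknown pixels
    C = sparse_codes_ista(D, X)
    C_lr = singular_value_threshold(C, 0.5)
    print(C.shape, np.count_nonzero(np.abs(C) > 1e-6), np.linalg.matrix_rank(C_lr))

In the paper's formulation the two terms are optimized jointly over video frames to produce the final alpha mattes; the sketch only illustrates the separate proximal operators each term corresponds to.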
Keywords
"Dictionaries","Optical imaging","Sparse matrices","Topology","Optical coupling","Laplace equations","Geometrical optics"
Publisher
ieee
Conference_Titel
Computer Vision (ICCV), 2015 IEEE International Conference on
Electronic_ISSN
2380-7504
Type
conf
DOI
10.1109/ICCV.2015.183
Filename
7410540
Link To Document