Title :
Removing Motion Blur With Space–Time Processing
Author :
Takeda, Hiroyuki; Milanfar, Peyman
Author_Institution :
Electr. Eng. Dept., Univ. of California, Santa Cruz, CA, USA
Abstract :
Although spatial deblurring is relatively well understood when the blur kernel is assumed to be shift invariant, motion blur is not: deconvolving a video on a frame-by-frame basis fails because, in general, videos contain complex, multilayer transitions. Indeed, motion deblurring of a single frame is an exceedingly difficult problem when the scene contains motion occlusions. Instead of deblurring video frames individually, this paper proposes a fully 3-D deblurring method that reduces motion blur in a single motion-blurred video and produces a video of higher resolution in both space and time. Unlike other existing approaches, the proposed deblurring kernel requires no knowledge of the local motions. Most importantly, owing to its inherent locally adaptive nature, the 3-D deblurring automatically deblurs only those portions of the sequence that are motion blurred, without segmentation and without adversely affecting the rest of the spatiotemporal domain, where such blur is not present. The method is a two-step approach: first, the input video is upscaled in space and time without explicit estimates of local motions; then, 3-D deblurring is performed to obtain the restored sequence.
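The abstract describes a two-step pipeline: space-time upscaling without explicit motion estimation, followed by 3-D (spatiotemporal) deblurring of the video volume. The sketch below is not the paper's method, whose kernel is locally adaptive and motion-free; purely for illustration it assumes a known, shift-invariant 3-D box blur and inverts it with a plain Wiener filter applied to the whole t-y-x volume. All function names and parameter values here are hypothetical.

    """
    Minimal sketch of shift-invariant 3-D (space-time) deconvolution of a video
    volume with a Wiener filter. This assumes a known 3-D blur kernel, unlike
    the locally adaptive, motion-free kernel proposed in the paper.
    """
    import numpy as np
    from scipy.ndimage import uniform_filter

    def wiener_deblur_3d(video, kernel, nsr=1e-2):
        """Deconvolve a (T, H, W) video with a small 3-D kernel via FFT Wiener filtering.

        video  : ndarray, shape (T, H, W), the blurred sequence
        kernel : ndarray, small 3-D point-spread function (t, y, x)
        nsr    : assumed noise-to-signal ratio (regularization constant)
        """
        # Zero-pad the kernel to the video size and shift it so its center
        # sits at the origin, matching circular (FFT) convolution.
        psf = np.zeros_like(video, dtype=np.float64)
        kt, ky, kx = kernel.shape
        psf[:kt, :ky, :kx] = kernel / kernel.sum()
        psf = np.roll(psf, (-(kt // 2), -(ky // 2), -(kx // 2)), axis=(0, 1, 2))

        H = np.fft.fftn(psf)                       # 3-D transfer function
        G = np.fft.fftn(video.astype(np.float64))  # spectrum of the blurred volume
        W = np.conj(H) / (np.abs(H) ** 2 + nsr)    # Wiener inverse filter
        return np.real(np.fft.ifftn(W * G))

    if __name__ == "__main__":
        # Synthetic check: blur a random volume with a 3x3x3 box kernel, then restore.
        rng = np.random.default_rng(0)
        clean = uniform_filter(rng.random((16, 64, 64)), size=5, mode="wrap")
        blurred = uniform_filter(clean, size=3, mode="wrap")
        restored = wiener_deblur_3d(blurred, np.ones((3, 3, 3)), nsr=1e-3)

        rmse = lambda a, b: np.sqrt(np.mean((a - b) ** 2))
        print(f"RMSE blurred vs. clean:  {rmse(blurred, clean):.4f}")
        print(f"RMSE restored vs. clean: {rmse(restored, clean):.4f}")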
Keywords :
image motion analysis; image restoration; image sequences; video signal processing; 3D deblurring method; blur kernel; frame-by-frame basis; motion blur removal; motion deblurring; motion occlusion; sequence restoration; space-time processing; spatial deblurring; spatiotemporal domain; video frames deblurring; Cameras; Image segmentation; Kernel; Motion estimation; Motion segmentation; Spatial resolution; Videos; Inverse filtering; sharpening and deblurring
Journal_Title :
IEEE Transactions on Image Processing
DOI :
10.1109/TIP.2011.2131666