DocumentCode :
106555
Title :
Virtual Mirror Rendering With Stationary RGB-D Cameras and Stored 3-D Background
Author :
Ju Shen ; Po-Chang Su ; S. S. Cheung ; Jian Zhao
Author_Institution :
Center for Visualization & Virtual Environments, Univ. of Kentucky, Lexington, KY, USA
Volume :
22
Issue :
9
fYear :
2013
fDate :
Sept. 2013
Firstpage :
3433
Lastpage :
3448
Abstract :
Mirrors are indispensable objects in our lives. The capability of simulating a mirror on a computer display, augmented with virtual scenes and objects, opens the door to many interesting and useful applications, from fashion design to medical interventions. Realistic simulation of a mirror is challenging, as it requires accurate viewpoint tracking and rendering, wide-angle viewing of the environment, and real-time performance to provide immediate visual feedback. In this paper, we propose a virtual mirror rendering system using a network of commodity structured-light RGB-D cameras. The depth information provided by the RGB-D cameras is used to track the viewpoint and render the scene from different perspectives. Missing and erroneous depth measurements are common problems with structured-light cameras. A novel depth denoising and completion algorithm is proposed in which the noise removal and interpolation procedures are guided by the foreground/background label at each pixel. The foreground/background label is estimated using a probabilistic graphical model that considers color, depth, background modeling, depth noise modeling, and spatial constraints. The wide viewing angle of the mirror system is realized by combining the dynamic scene, captured by the static camera network, with a 3-D background model created off-line from a color-depth sequence captured by a movable RGB-D camera. To ensure a real-time response, a scalable client-and-server architecture is used, with the 3-D point cloud processing, the viewpoint estimation, and the mirror image rendering all performed on the client side. The mirror image and the viewpoint estimate are then sent to the server for final mirror view synthesis and viewpoint refinement. Experimental results are presented to show the accuracy and effectiveness of each component and of the entire system.
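The geometric core of mirror rendering described above is to reflect the tracked viewer position across the display (mirror) plane and re-project the 3-D point cloud from that virtual camera. The following Python/NumPy sketch illustrates this idea only; it is not the authors' implementation, and the function names, pinhole intrinsics, and look-at construction are assumptions made for the example.

```python
import numpy as np

def reflect_point(p, plane_point, plane_normal):
    """Reflect a 3-D point across the mirror (display) plane.
    plane_normal is assumed to be unit length."""
    d = np.dot(p - plane_point, plane_normal)
    return p - 2.0 * d * plane_normal

def render_mirror_view(points, colors, viewer_pos, plane_point, plane_normal,
                       focal=525.0, width=640, height=480):
    """Project the scene point cloud from the reflected viewpoint.
    Minimal pinhole projection with z-buffering; intrinsics are
    placeholder values, not those of the paper's system."""
    virtual_cam = reflect_point(viewer_pos, plane_point, plane_normal)

    # Simple look-at frame: the virtual camera faces along the mirror normal.
    forward = plane_normal / np.linalg.norm(plane_normal)
    up_hint = np.array([0.0, 1.0, 0.0])          # assumes normal is not vertical
    right = np.cross(up_hint, forward); right /= np.linalg.norm(right)
    up = np.cross(forward, right)
    R = np.stack([right, up, forward])           # world -> camera rotation
    cam_pts = (points - virtual_cam) @ R.T       # points in camera coordinates

    image = np.zeros((height, width, 3), dtype=np.uint8)
    zbuf = np.full((height, width), np.inf)
    valid = cam_pts[:, 2] > 0.1                  # keep points in front of the camera
    for p, c in zip(cam_pts[valid], colors[valid]):
        u = int(focal * p[0] / p[2] + width / 2)
        v = int(focal * p[1] / p[2] + height / 2)
        if 0 <= u < width and 0 <= v < height and p[2] < zbuf[v, u]:
            zbuf[v, u] = p[2]
            image[v, u] = c
    # A mirror image is left-right flipped relative to an ordinary camera view.
    return image[:, ::-1]
```

In the paper's architecture this kind of per-frame re-projection would run on the client side, using the denoised, label-guided point cloud and the viewpoint estimate, before the server performs the final mirror view synthesis.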
Keywords :
augmented reality; cameras; client-server systems; image colour analysis; image denoising; image sequences; interpolation; mirrors; probability; rendering (computer graphics); 3D point cloud processing; color-depth sequence; commodity structured-light RGB-D cameras; completion algorithm; computer display; depth denoising; depth information; depth noise modeling; dynamic scene; erroneous depth measurements; foreground-background label; interpolation procedures; mirror image rendering; mirror simulation; mirror view synthesis; movable RGB-D camera; noise removal; probabilistic graphical model; scalable client-and-server architecture; spatial constraints; static camera network; stored 3D background modelling; structured-light cameras; viewpoint estimate; viewpoint refinement; viewpoint tracking; virtual mirror rendering system; virtual scenes; visual feedback; wide viewing angle; wide-angle viewing; 3-D scene scanning; Markov random field; Mirrors; RGB-D system; client-server systems; depth image denoising; image denoising; image reconstruction;
fLanguage :
English
Journal_Title :
IEEE Transactions on Image Processing
Publisher :
IEEE
ISSN :
1057-7149
Type :
jour
DOI :
10.1109/TIP.2013.2268941
Filename :
6532397