Abstract:
In this paper, we propose a hybrid representation of point-based models captured from calibrated depth images, enabling real-time rendering of the image-based models. Our method starts by classifying the pixels of the input images into two categories, corresponding to planar and non-planar surfaces. For planar surfaces, standard texture mapping is applied. For pixels belonging to non-planar surfaces, we propose a method for constructing textured planes, called virtual planes, to represent their visual appearance. A reconstructed virtual plane is a textured, partially transparent plane onto which the input images are mapped by perspective projection. The RANSAC method is used to find the positions of the textured planes, and optical flow measures are used to determine their textures. For the remaining pixels, corresponding to non-planar surfaces with rich geometric detail, we use point primitives to reconstruct the surface, removing redundant pixels through sampling-rate comparison. Under this hybrid reconstruction, polygon texture mapping and point-based rendering are combined to produce novel views, taking full advantage of the acceleration power of graphics hardware.
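To make the plane-fitting step concrete, the following is a minimal sketch of RANSAC plane estimation on a 3D point set, of the kind used to position the virtual planes. The function name, iteration count, and inlier tolerance are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def ransac_plane(points, n_iters=200, inlier_tol=0.01, rng=None):
    """Fit a plane (n, d) with n . p + d = 0 to 3D points via RANSAC.

    Hypothetical helper; parameters are illustrative, not from the paper.
    """
    rng = np.random.default_rng(rng)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iters):
        # Sample three distinct points and form a candidate plane.
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-12:           # degenerate (collinear) sample
            continue
        n /= norm
        d = -n @ p0
        # Count points within the distance tolerance of the plane.
        inliers = np.abs(points @ n + d) < inlier_tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Refine: least-squares plane through the inlier set (via SVD).
    P = points[best_inliers]
    c = P.mean(axis=0)
    n = np.linalg.svd(P - c)[2][-1]  # direction of smallest variance
    return n, -n @ c, best_inliers
```

In a pipeline like the one described above, this fit would be run repeatedly on the depth-image point cloud, peeling off one dominant plane (and its inliers) at a time; the leftover points are the "rich geometric detail" handled by point primitives.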
Keywords:
computational geometry; image classification; image reconstruction; image texture; rendering (computer graphics); RANSAC; calibrated depth images; optical flow; point-based rendering; polygon texture mapping; view-dependent rendering; virtual planes; surface reconstruction; sampling methods; surface texture