Title :
Cone Tracing for Furry Object Rendering
Author :
Hao Qin ; Menglei Chai ; Qiming Hou ; Zhong Ren ; Kun Zhou
Author_Institution :
State Key Lab. of CAD&CG, Zhejiang Univ., Hangzhou, China
Abstract :
We present a cone-based ray tracing algorithm for high-quality rendering of furry objects with reflection, refraction, and defocus effects. By aggregating the many sampling rays in a pixel into a single cone, we significantly reduce the high supersampling rate required by the thin geometry of fur fibers. To reduce the cost of intersecting fur fibers with cones, we construct a bounding volume hierarchy over the fiber geometry to find the fibers potentially intersecting each cone, and use a set of connected ribbons to approximate the projections of these fibers on the image plane. The computational cost of compositing and filtering transparent samples within each cone is effectively reduced by approximating away in-cone variations of shading, opacity, and occlusion. The result is a highly efficient ray tracing algorithm for furry objects that renders images of quality comparable to those generated by alternative methods, while significantly reducing the rendering time. We demonstrate the rendering quality and performance of our algorithm using several examples and a user study.
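The BVH traversal described above requires a conservative cone-versus-bounding-volume test to gather candidate fibers. The sketch below is a hypothetical illustration of such a test against a node's bounding sphere (it is not the authors' implementation, and the function name and parameters are assumptions): a node is kept if the sphere, expanded by its radius, overlaps the cone.

```python
import math

def cone_intersects_sphere(apex, axis, half_angle, center, radius):
    """Conservative cone-vs-bounding-sphere overlap test (illustrative sketch).

    apex, center -- 3-tuples of floats
    axis         -- unit-length 3-tuple giving the cone direction
    half_angle   -- cone half-angle in radians (0 < half_angle < pi/2)
    """
    v = tuple(c - a for c, a in zip(center, apex))
    # Signed distance of the sphere center along the cone axis.
    x = sum(vi * di for vi, di in zip(v, axis))
    if x + radius < 0.0:
        # Sphere lies entirely behind the cone apex.
        return False
    # Perpendicular distance from the sphere center to the cone axis.
    perp2 = sum(vi * vi for vi in v) - x * x
    perp = math.sqrt(max(perp2, 0.0))
    # Cone radius at depth x, grown by the sphere radius (conservative:
    # may report an overlap slightly too eagerly, never misses one).
    cone_r = max(x, 0.0) * math.tan(half_angle) + radius / math.cos(half_angle)
    return perp <= cone_r
```

A conservative test like this only prunes BVH nodes; any fiber that survives pruning would still go through the paper's exact ribbon-based intersection on the image plane.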
Keywords :
approximation theory; computational geometry; cost reduction; image sampling; ray tracing; rendering (computer graphics); bounding volume hierarchy; cone-based ray tracing algorithm; defocus effect; fiber geometry; furry object rendering; high supersampling rate reduction; occlusion; opacity; ray sampling; reflection effect; refraction effect; rendering time reduction; shading; transparent sample composition; transparent sample filtering; Geometry; Graphics processing units; Hair; Image segmentation; Lighting; antialiasing; cone tracing; depth of field; fur rendering; reflection; refraction; shadows
Journal_Title :
IEEE Transactions on Visualization and Computer Graphics
DOI :
10.1109/TVCG.2013.270