• DocumentCode
    3672075
  • Title
    Data-driven depth map refinement via multi-scale sparse representation

  • Author
    HyeokHyen Kwon; Yu-Wing Tai; Stephen Lin

  • Author_Institution
    KAIST, Korea
  • fYear
    2015
  • fDate
    6/1/2015 12:00:00 AM
  • Firstpage
    159
  • Lastpage
    167
  • Abstract
    Depth maps captured by consumer-level depth cameras such as Kinect are usually degraded by noise, missing values, and quantization. In this paper, we present a data-driven approach for refining degraded RAW depth maps that are coupled with an RGB image. The key idea of our approach is to take advantage of a training set of high-quality depth data and transfer its information to the RAW depth map through multi-scale dictionary learning. Utilizing a sparse representation, our method learns a dictionary of geometric primitives which captures the correlation between high-quality mesh data, RAW depth maps and RGB images. The dictionary is learned and applied in a manner that accounts for various practical issues that arise in dictionary-based depth refinement. Compared to previous approaches that only utilize the correlation between RAW depth maps and RGB images, our method produces improved depth maps without over-smoothing. Since our approach is data-driven, the refinement can be targeted to a specific class of objects by employing a corresponding training set. In our experiments, we show that this leads to additional improvements in recovering depth maps of human faces.
  • Keywords
    "Dictionaries","Degradation","Image reconstruction","Image edge detection","Noise","Training","Image resolution"
  • Publisher
    ieee
  • Conference_Titel
    Computer Vision and Pattern Recognition (CVPR), 2015 IEEE Conference on
  • Electronic_ISBN
    1063-6919
  • Type
    conf
  • DOI
    10.1109/CVPR.2015.7298611
  • Filename
    7298611
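
The abstract above describes refining degraded RAW depth patches by sparse coding over a dictionary learned from high-quality depth data. The following is a minimal, single-scale sketch of that general idea using scikit-learn; it is not the authors' multi-scale, RGB-coupled method, and the patch size, dictionary size, sparsity level, and function name refine_depth are illustrative assumptions.

    # Minimal single-scale sketch of dictionary-based depth patch refinement.
    # NOT the paper's method: no multi-scale structure, no RGB coupling, no mesh data.
    # Patch size, number of atoms, and sparsity level are illustrative assumptions.
    import numpy as np
    from sklearn.decomposition import MiniBatchDictionaryLearning, sparse_encode
    from sklearn.feature_extraction.image import (
        extract_patches_2d, reconstruct_from_patches_2d)

    def refine_depth(raw_depth, clean_depth_maps, patch=8, n_atoms=256, sparsity=5):
        # Learn a dictionary of depth "primitives" from high-quality training depth maps.
        train = np.vstack([
            extract_patches_2d(d, (patch, patch)).reshape(-1, patch * patch)
            for d in clean_depth_maps])
        train_mean = train.mean(axis=1, keepdims=True)
        dico = MiniBatchDictionaryLearning(n_components=n_atoms, alpha=1.0)
        D = dico.fit(train - train_mean).components_

        # Sparse-code the degraded patches against the learned dictionary (OMP),
        # then reassemble the refined depth map by averaging overlapping patches.
        noisy = extract_patches_2d(raw_depth, (patch, patch)).reshape(-1, patch * patch)
        noisy_mean = noisy.mean(axis=1, keepdims=True)
        codes = sparse_encode(noisy - noisy_mean, D,
                              algorithm='omp', n_nonzero_coefs=sparsity)
        recon = codes @ D + noisy_mean
        return reconstruct_from_patches_2d(
            recon.reshape(-1, patch, patch), raw_depth.shape)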