Depth image enhancement using local tangent plane approximations

Kiyoshi Matsuo, Yoshimitsu Aoki

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

    38 Citations (Scopus)


    This paper describes a depth image enhancement method for consumer RGB-D cameras. Most existing methods work in the pixel coordinates of the aligned color image. Because the image plane generally has no relationship to the measured surfaces, this global coordinate system is not suitable for handling their local geometries. To improve enhancement accuracy, we instead use local tangent planes as local coordinates for the measured surfaces. Our method is composed of two steps: calculation of the local tangent planes and surface reconstruction. To accurately estimate the local tangents, we propose a color heuristic calculation and an orientation correction based on their positional relationships. Additionally, we propose a surface reconstruction method that ray-traces to the local tangents. Accurate depth image enhancement is thus achieved by exploiting the local geometries approximated by the tangent planes. We demonstrate the effectiveness of our method on synthetic and real sensor data. Our method has a high completion rate and achieves the lowest errors in noisy cases when compared with existing techniques.
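    The core geometric idea (fitting a local tangent plane to a depth neighborhood, then recovering an enhanced depth by intersecting a pixel's viewing ray with that plane) can be sketched as follows. This is a minimal illustration using PCA plane fitting, not the paper's actual pipeline: the color heuristic and orientation correction steps are omitted, and all names here are illustrative.

    ```python
    import numpy as np

    def fit_tangent_plane(points):
        """Fit a local tangent plane to a small neighborhood of 3D points
        via PCA: the plane passes through the centroid, with the normal
        given by the eigenvector of the smallest covariance eigenvalue."""
        centroid = points.mean(axis=0)
        cov = np.cov((points - centroid).T)
        eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues ascending
        normal = eigvecs[:, 0]                   # smallest-variance direction
        return centroid, normal

    def ray_plane_depth(ray_dir, centroid, normal):
        """Depth along a camera ray from the origin to the tangent plane:
        solve (t * ray_dir - centroid) . normal = 0 for t."""
        denom = ray_dir @ normal
        if abs(denom) < 1e-9:
            return None  # ray nearly parallel to the plane
        return (centroid @ normal) / denom

    # Synthetic neighborhood: noisy samples of the plane z = 2
    rng = np.random.default_rng(0)
    xy = rng.uniform(-0.1, 0.1, size=(50, 2))
    pts = np.column_stack([xy, 2.0 + 0.001 * rng.standard_normal(50)])

    centroid, normal = fit_tangent_plane(pts)
    ray = np.array([0.0, 0.0, 1.0])              # central pixel's viewing ray
    depth = ray_plane_depth(ray, centroid, normal)
    print(depth)                                 # close to the true depth of 2.0
    ```

    Averaging over a fitted plane rather than over raw pixel values is what lets this kind of scheme suppress depth noise without flattening oblique surfaces, since the plane carries the local surface orientation.
    
    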

    Original language: English
    Title of host publication: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
    Publisher: IEEE Computer Society
    Number of pages: 10
    ISBN (Print): 9781467369640
    Publication status: Published - 2015 Oct 14
    Event: IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2015 - Boston, United States
    Duration: 2015 Jun 7 – 2015 Jun 12


    Other: IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2015
    Country/Territory: United States

    ASJC Scopus subject areas

    • Software
    • Computer Vision and Pattern Recognition


