Self-supervised Sparse-to-Dense: Self-supervised Depth Completion from LiDAR and Monocular Camera

July 01, 2018 · Entered Twilight · 🏛 IEEE International Conference on Robotics and Automation

🌅 TWILIGHT: Old Age
Predates the code-sharing era – a pioneer of its time

"Last commit was 5.0 years ago (โ‰ฅ5 year threshold)"

Evidence collected by the PWNC Scanner

Repo contents: .gitignore, LICENSE, README.md, criteria.py, dataloaders, download, helper.py, inverse_warp.py, main.py, metrics.py, model.py, vis_utils.py

Authors: Fangchang Ma, Guilherme Venturelli Cavalheiro, Sertac Karaman
arXiv ID: 1807.00275
Category: cs.CV (Computer Vision)
Cross-listed: cs.AI, cs.LG, cs.RO
Citations: 445
Venue: IEEE International Conference on Robotics and Automation
Repository: https://github.com/fangchangma/self-supervised-depth-completion (⭐ 648)
Last checked: 1 month ago
Abstract
Depth completion, the technique of estimating a dense depth image from sparse depth measurements, has a variety of applications in robotics and autonomous driving. However, depth completion faces 3 main challenges: the irregularly spaced pattern in the sparse depth input, the difficulty in handling multiple sensor modalities (when color images are available), as well as the lack of dense, pixel-level ground truth depth labels. In this work, we address all these challenges. Specifically, we develop a deep regression model to learn a direct mapping from sparse depth (and color images) to dense depth. We also propose a self-supervised training framework that requires only sequences of color and sparse depth images, without the need for dense depth labels. Our experiments demonstrate that our network, when trained with semi-dense annotations, attains state-of-the-art accuracy and is the winning approach on the KITTI depth completion benchmark at the time of submission. Furthermore, the self-supervised framework outperforms a number of existing solutions trained with semi-dense annotations.
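The self-supervised framework trains on sequences alone: the sparse LiDAR input supervises the predicted depth at the pixels where measurements exist, while a photometric loss compares the current frame against a nearby frame warped into the current view using the predicted dense depth and the relative camera pose. Below is a minimal PyTorch sketch of that loss, in the spirit of the repo's inverse_warp.py and criteria.py. The function names, the assumption that the relative pose is already available (the paper estimates it with PnP from the sparse depth), and the 0.1 loss weight are illustrative, not the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def warp_frame(rgb_nearby, depth_pred, pose, K):
    """Warp a nearby RGB frame into the current view via predicted depth.

    rgb_nearby: (B, 3, H, W) neighboring frame in the sequence
    depth_pred: (B, 1, H, W) predicted dense depth for the current frame
    pose:       (B, 3, 4) relative transform [R | t], current -> nearby
                (assumed given here; the paper recovers it with PnP)
    K:          (B, 3, 3) camera intrinsics
    """
    B, _, H, W = depth_pred.shape
    device = depth_pred.device

    # Pixel grid of the current frame in homogeneous coords: (B, 3, H*W)
    ys, xs = torch.meshgrid(
        torch.arange(H, device=device, dtype=torch.float32),
        torch.arange(W, device=device, dtype=torch.float32),
        indexing="ij")
    pix = torch.stack([xs, ys, torch.ones_like(xs)], dim=0)
    pix = pix.view(1, 3, -1).expand(B, -1, -1)

    # Back-project with predicted depth, move to nearby view, re-project
    cam = torch.linalg.inv(K) @ pix * depth_pred.view(B, 1, -1)
    cam_h = torch.cat([cam, torch.ones(B, 1, H * W, device=device)], dim=1)
    proj = K @ (pose @ cam_h)                       # (B, 3, H*W)
    uv = proj[:, :2] / proj[:, 2:3].clamp(min=1e-6)

    # Normalize coordinates to [-1, 1] and sample the nearby frame
    u = 2.0 * uv[:, 0] / (W - 1) - 1.0
    v = 2.0 * uv[:, 1] / (H - 1) - 1.0
    grid = torch.stack([u, v], dim=-1).view(B, H, W, 2)
    return F.grid_sample(rgb_nearby, grid, align_corners=True)

def self_supervised_loss(rgb, rgb_nearby, sparse_depth, depth_pred, pose, K):
    # Depth term: supervise only where the sparse LiDAR input is valid
    valid = (sparse_depth > 0).float()
    depth_loss = (valid * (depth_pred - sparse_depth).abs()).sum() \
        / valid.sum().clamp(min=1.0)

    # Photometric term: warped nearby frame should match the current frame
    rgb_warped = warp_frame(rgb_nearby, depth_pred, pose, K)
    photo_loss = (rgb_warped - rgb).abs().mean()

    # 0.1 is an illustrative weight, not the paper's tuned value
    return depth_loss + 0.1 * photo_loss
```

A wrong depth prediction back-projects pixels to the wrong 3-D points, so the warped nearby frame misaligns with the current frame and the photometric term rises; that gradient is what lets color-and-sparse-depth sequences stand in for dense ground truth.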
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt – Computer Vision