Beyond Photometric Loss for Self-Supervised Ego-Motion Estimation

February 25, 2019 · Entered Twilight · 🏛 IEEE International Conference on Robotics and Automation

🌅 TWILIGHT: Old Age
Predates the code-sharing era – a pioneer of its time

"Last commit was 6.0 years ago (โ‰ฅ5 year threshold)"

Evidence collected by the PWNC Scanner

Repo contents: .gitignore, LICENSE, README.md, __init__.py, command.sh, common_utils.py, data, data_loader.py, deep_slam.py, demo.py, geo_utils.py, kitti_eval, nets.py, preprocess_matches.py, test_depth.py, test_kitti_depth.py, test_kitti_pose.py, train.py

Authors: Tianwei Shen, Zixin Luo, Lei Zhou, Hanyu Deng, Runze Zhang, Tian Fang, Long Quan
arXiv ID: 1902.09103
Category: cs.CV (Computer Vision)
Cross-listed: cs.RO
Citations: 82
Venue: IEEE International Conference on Robotics and Automation
Repository: https://github.com/hlzz/DeepMatchVO (⭐ 203)
Last checked: 1 month ago
Abstract
Accurate relative pose is one of the key components in visual odometry (VO) and simultaneous localization and mapping (SLAM). Recently, the self-supervised learning framework that jointly optimizes the relative pose and target image depth has attracted the attention of the community. Previous works rely on the photometric error generated from depths and poses between adjacent frames, which contains large systematic errors in realistic scenes due to reflective surfaces and occlusions. In this paper, we bridge the gap between geometric loss and photometric loss by introducing a matching loss constrained by epipolar geometry in a self-supervised framework. Evaluated on the KITTI dataset, our method outperforms the state-of-the-art unsupervised ego-motion estimation methods by a large margin. The code and data are available at https://github.com/hlzz/DeepMatchVO.
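The paper's key move, pairing the usual photometric warping error with a feature-matching loss constrained by epipolar geometry, is compact enough to sketch. Below is a minimal NumPy illustration of one standard form of such a loss, the symmetric point-to-epipolar-line distance for precomputed matches; the function names and the symmetric variant are assumptions for illustration, not code taken from the DeepMatchVO repository.

```python
import numpy as np

def skew(t):
    """3x3 skew-symmetric matrix such that skew(t) @ v == np.cross(t, v)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def fundamental_from_pose(R, t, K):
    """Fundamental matrix from a relative pose (R, t) and intrinsics K."""
    E = skew(t) @ R                     # essential matrix E = [t]_x R
    K_inv = np.linalg.inv(K)
    return K_inv.T @ E @ K_inv

def epipolar_matching_loss(p1, p2, R, t, K):
    """Mean symmetric point-to-epipolar-line distance.

    p1, p2: (N, 2) matched pixel coordinates in frames 1 and 2;
    a perfect pose/match pair satisfies x2^T F x1 = 0.
    """
    F = fundamental_from_pose(R, t, K)
    ones = np.ones((p1.shape[0], 1))
    x1 = np.hstack([p1, ones])          # homogeneous coords, (N, 3)
    x2 = np.hstack([p2, ones])
    l2 = x1 @ F.T                       # epipolar lines in image 2: l2 = F x1
    l1 = x2 @ F                         # epipolar lines in image 1: l1 = F^T x2
    r = np.sum(x2 * l2, axis=1)         # algebraic residual x2^T F x1
    d2 = np.abs(r) / np.linalg.norm(l2[:, :2], axis=1)  # dist of p2 to its line
    d1 = np.abs(r) / np.linalg.norm(l1[:, :2], axis=1)  # dist of p1 to its line
    return np.mean(d1 + d2)
```

In actual self-supervised training this term would be written in the framework's tensor ops (the released code is TensorFlow) so that gradients flow from the residuals back into the pose network, and it supplements the photometric loss rather than replacing it.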
