How to Train a CAT: Learning Canonical Appearance Transformations for Direct Visual Localization Under Illumination Change

September 09, 2017 · Entered Twilight · 🏛 IEEE Robotics and Automation Letters

🌅 TWILIGHT: Old Age
Predates the code-sharing era – a pioneer of its time

"Last commit was 6.0 years ago (โ‰ฅ5 year threshold)"

Evidence collected by the PWNC Scanner
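
The scanner's internals are not public, so the following is only a hypothetical sketch of the "Old Age" rule quoted above: a repository enters Twilight once its last commit is at least five years old. All names here (`OLD_AGE_THRESHOLD_YEARS`, `is_old_age`, and so on) are assumptions for illustration, not the scanner's real code.

```python
from datetime import datetime, timezone

# Assumed threshold from the evidence line above: >= 5 years since last commit.
OLD_AGE_THRESHOLD_YEARS = 5.0

def years_since(last_commit: datetime, now: datetime) -> float:
    """Elapsed time between two timestamps, in fractional years."""
    return (now - last_commit).total_seconds() / (365.25 * 24 * 3600)

def is_old_age(last_commit: datetime, now: datetime) -> bool:
    """True when the repo's last commit is at or past the 5-year threshold."""
    return years_since(last_commit, now) >= OLD_AGE_THRESHOLD_YEARS

# A last commit 6.0 years before the scan date trips the threshold,
# matching the "6.0 years ago" evidence quoted above.
scan_date = datetime(2024, 1, 1, tzinfo=timezone.utc)
last_commit = datetime(2018, 1, 1, tzinfo=timezone.utc)
print(is_old_age(last_commit, scan_date))  # True
```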

Repo contents: .gitignore, LICENSE, README.md, cat_net, compute_localization_errors.py, make_localization_data.py, pipeline.png, run_cat_ethl.py, run_cat_vkitti.py, run_localization_ethl.py, run_localization_vkitti.py, tools

Authors: Lee Clement, Jonathan Kelly
arXiv ID: 1709.03009
Category: cs.RO (Robotics)
Cross-listed: cs.CV
Citations: 26
Venue: IEEE Robotics and Automation Letters
Repository: https://github.com/utiasSTARS/cat-net (⭐ 52)
Last Checked: 1 month ago
Abstract
Direct visual localization has recently enjoyed a resurgence in popularity with the increasing availability of cheap mobile computing power. The competitive accuracy and robustness of these algorithms compared to state-of-the-art feature-based methods, as well as their natural ability to yield dense maps, makes them an appealing choice for a variety of mobile robotics applications. However, direct methods remain brittle in the face of appearance change due to their underlying assumption of photometric consistency, which is commonly violated in practice. In this paper, we propose to mitigate this problem by training deep convolutional encoder-decoder models to transform images of a scene such that they correspond to a previously-seen canonical appearance. We validate our method in multiple environments and illumination conditions using high-fidelity synthetic RGB-D datasets, and integrate the trained models into a direct visual localization pipeline, yielding improvements in visual odometry (VO) accuracy through time-varying illumination conditions, as well as improved metric relocalization performance under illumination change, where conventional methods normally fail. We further provide a preliminary investigation of transfer learning from synthetic to real environments in a localization context. An open-source implementation of our method using PyTorch is available at https://github.com/utiasSTARS/cat-net.
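
To make the abstract's central idea concrete, here is a minimal PyTorch sketch of a convolutional encoder-decoder trained to map an image taken under arbitrary illumination onto the same view rendered under a previously-seen "canonical" illumination, so that downstream direct methods can keep their photometric-consistency assumption. This is an illustrative toy, not the architecture or loss from the repository's `cat_net` module; the layer sizes and the L1 reconstruction loss are assumptions.

```python
import torch
import torch.nn as nn

class CanonicalAppearanceTransform(nn.Module):
    """Toy convolutional encoder-decoder: RGB image in, re-rendered
    'canonical appearance' RGB image out. Illustrative sketch only."""

    def __init__(self, base_channels=32):
        super().__init__()
        # Encoder: downsample spatially while widening channels.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, base_channels, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(base_channels, base_channels * 2, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )
        # Decoder: upsample back to the input resolution.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(base_channels * 2, base_channels, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(base_channels, 3, 4, stride=2, padding=1),
            nn.Sigmoid(),  # keep outputs in the [0, 1] image range
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Training idea: given a view under arbitrary lighting and the same view
# under the canonical lighting (easy to obtain from synthetic RGB-D data),
# regress one onto the other.
model = CanonicalAppearanceTransform()
criterion = nn.L1Loss()  # placeholder reconstruction loss, not the paper's exact loss
img_any_light = torch.rand(1, 3, 64, 64)   # toy input image
img_canonical = torch.rand(1, 3, 64, 64)   # toy canonical-appearance target
loss = criterion(model(img_any_light), img_canonical)
loss.backward()
```

At test time, every incoming frame would be passed through the trained model before the direct VO or relocalization front end, so that live images and map images share one appearance regardless of the actual lighting.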
Community shame: Not yet rated
