Robust Loop Closure by Textual Cues in Challenging Environments

October 21, 2024 · Declared Dead · IEEE Robotics and Automation Letters

💀 CAUSE OF DEATH: 404 Not Found
Code link is broken/dead
Authors: Tongxing Jin, Thien-Minh Nguyen, Xinhang Xu, Yizhuo Yang, Shenghai Yuan, Jianping Li, Lihua Xie
arXiv ID: 2410.15869
Category: cs.RO (Robotics)
Cross-listed: eess.SY
Citations: 11
Venue: IEEE Robotics and Automation Letters
Repository: https://github.com/TongxingJin/TXTLCD
Last Checked: 1 month ago
Abstract
Loop closure is an important task in robot navigation. However, existing methods mostly rely on some implicit or heuristic features of the environment, which can still fail to work in common environments such as corridors, tunnels, and warehouses. Indeed, navigating in such featureless, degenerative, and repetitive (FDR) environments would also pose a significant challenge even for humans, but explicit text cues in the surroundings often provide the best assistance. This inspires us to propose a multi-modal loop closure method based on explicit human-readable textual cues in FDR environments. Specifically, our approach first extracts scene text entities based on Optical Character Recognition (OCR), then creates a local map of text cues based on accurate LiDAR odometry, and finally identifies loop closure events by a graph-theoretic scheme. Experimental results demonstrate that this approach has superior performance over existing methods that rely solely on visual and LiDAR sensors. To benefit the community, we release the source code and datasets at https://github.com/TongxingJin/TXTLCD.
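The abstract's pipeline (OCR text entities → local text-cue map via LiDAR odometry → graph-theoretic loop detection) can be illustrated with a minimal sketch. This is not the paper's actual method: the map format, the distance tolerance `tol`, and the pairwise-distance consistency check are all simplifying assumptions standing in for the authors' graph-theoretic scheme, and entity positions are assumed to already come from odometry.

```python
from itertools import combinations
from math import dist

# Hypothetical local text-cue map: {entity_text: (x, y)}, with positions
# assumed to come from LiDAR odometry in each keyframe's local frame.
def consistent_matches(map_a, map_b, tol=0.5):
    """Return text labels shared by two local maps whose pairwise
    distances agree within `tol` metres. Because distances between
    co-visible entities are invariant to the unknown relative pose,
    agreeing pairs hint that both maps observe the same place."""
    shared = sorted(set(map_a) & set(map_b))
    good = set()
    for t1, t2 in combinations(shared, 2):
        if abs(dist(map_a[t1], map_a[t2]) - dist(map_b[t1], map_b[t2])) <= tol:
            good.update((t1, t2))
    return good

def is_loop_closure(map_a, map_b, min_entities=2, tol=0.5):
    """Declare a loop closure when enough mutually consistent
    text entities are found (threshold is an arbitrary assumption)."""
    return len(consistent_matches(map_a, map_b, tol)) >= min_entities

# Two visits to the same corridor, expressed in different local frames:
visit_1 = {"EXIT": (0.0, 0.0), "ROOM 101": (3.0, 0.0), "FIRE": (0.0, 4.0)}
visit_2 = {"EXIT": (10.0, 10.0), "ROOM 101": (13.0, 10.0), "CAFE": (9.0, 9.0)}
print(is_loop_closure(visit_1, visit_2))  # shared labels, consistent geometry
```

Matching on human-readable strings sidesteps the perceptual-aliasing problem the abstract describes: two identical-looking corridors usually still carry different signage.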
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt — Robotics

Died the same way — 💀 404 Not Found