Self-supervised Deep Reinforcement Learning with Generalized Computation Graphs for Robot Navigation

September 29, 2017 · Entered Twilight · 🏛 IEEE International Conference on Robotics and Automation

🌅 TWILIGHT: Old Age
Predates the code-sharing era – a pioneer of its time

"Last commit was 8.0 years ago (≥5-year threshold)"

Evidence collected by the PWNC Scanner

Repo contents: .gitignore, CHANGELOG.md, LICENSE, README.md, circle.yml, contrib, docker, docs, environment.yml, examples, rllab, sandbox, scripts, setup.py, tests, vendor

Authors: Gregory Kahn, Adam Villaflor, Bosen Ding, Pieter Abbeel, Sergey Levine
arXiv ID: 1709.10489
Category: cs.LG: Machine Learning
Cross-listed: cs.AI, cs.RO
Citations: 312
Venue: IEEE International Conference on Robotics and Automation
Repository: https://github.com/gkahn13/gcg ⭐ 105
Last Checked: 1 month ago
Abstract
Enabling robots to autonomously navigate complex environments is essential for real-world deployment. Prior methods approach this problem by having the robot maintain an internal map of the world, and then using a localization and planning method to navigate through that internal map. However, these approaches often rely on a variety of assumptions, are computationally intensive, and do not learn from failures. In contrast, learning-based methods improve as the robot acts in the environment, but are difficult to deploy in the real world due to their high sample complexity. To address the need to learn complex policies with few samples, we propose a generalized computation graph that subsumes value-based model-free methods and model-based methods, with specific instantiations interpolating between model-free and model-based. We then instantiate this graph to form a navigation model that learns from raw images and is sample efficient. Our simulated car experiments explore the design decisions of our navigation model, and show that our approach outperforms single-step and $N$-step double Q-learning. We also evaluate our approach on a real-world RC car and show it can learn to navigate through a complex indoor environment with a few hours of fully autonomous, self-supervised training. Videos of the experiments and code can be found at github.com/gkahn13/gcg
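The abstract compares against $N$-step double Q-learning, one of the model-free baselines the generalized computation graph subsumes. As a rough illustration of that baseline (not the authors' implementation; the function name and arguments here are hypothetical), the $N$-step double Q-learning target sums $N$ discounted rewards and then bootstraps with a target network evaluated at the action the online network selects:

```python
import numpy as np

def n_step_double_q_target(rewards, next_q_online, next_q_target, gamma=0.99):
    """Illustrative N-step double Q-learning target (hypothetical helper).

    rewards:        (N,) rewards r_t ... r_{t+N-1}
    next_q_online:  (A,) online-network Q-values at state s_{t+N} (selects the action)
    next_q_target:  (A,) target-network Q-values at state s_{t+N} (evaluates it)
    """
    a_star = int(np.argmax(next_q_online))   # double Q-learning: online net picks the action
    target = float(next_q_target[a_star])    # ...target net evaluates it (reduces overestimation)
    for r in reversed(rewards):              # fold in the N discounted rewards
        target = float(r) + gamma * target
    return target

# Example: N=2 rewards, 2 actions, gamma=1 for easy arithmetic.
t = n_step_double_q_target(
    rewards=np.array([1.0, 0.0]),
    next_q_online=np.array([0.0, 2.0]),   # online net prefers action 1
    next_q_target=np.array([5.0, 3.0]),   # target net values action 1 at 3.0
    gamma=1.0,
)
```

Setting $N=1$ recovers standard double Q-learning; the paper's computation graph generalizes this further by letting the model itself predict per-step quantities, interpolating toward model-based prediction.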
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!
