Auxiliary Tasks Speed Up Learning PointGoal Navigation

July 09, 2020 · Entered Twilight · 🏛 Conference on Robot Learning

🌅 TWILIGHT: Old Age
Predates the code-sharing era - a pioneer of its time

"Last commit was 5.0 years ago (≥5 year threshold)"

Evidence collected by the PWNC Scanner

Repo contents: .circleci, .editorconfig, .github, .gitignore, .pre-commit-config.yaml, CODE_OF_CONDUCT.md, CONTRIBUTING.md, Dockerfile, LICENSE, MANIFEST.in, README.md, UNLICENSE, assets, configs, docs, environment.yml, examples, habitat, habitat_baselines, logs.ddppo.err, pyproject.toml, requirements.txt, res, scripts, setup.cfg, setup.py, test

Authors: Joel Ye, Dhruv Batra, Erik Wijmans, Abhishek Das
arXiv ID: 2007.04561
Category: cs.CV (Computer Vision)
Cross-listed: cs.LG, cs.RO
Citations: 86
Venue: Conference on Robot Learning
Repository: https://github.com/joel99/habitat-pointnav-aux ⭐ 19
Last Checked: 1 month ago
Abstract
PointGoal Navigation is an embodied task that requires agents to navigate to a specified point in an unseen environment. Wijmans et al. showed that this task is solvable, but their method is computationally prohibitive, requiring 2.5 billion frames and 180 GPU-days. In this work, we develop a method to significantly increase sample and time efficiency in learning PointNav using self-supervised auxiliary tasks (e.g. predicting the action taken between two egocentric observations, predicting the distance between two observations from a trajectory, etc.). We find that naively combining multiple auxiliary tasks improves sample efficiency, but only provides marginal gains beyond a point. To overcome this, we use attention to combine representations learnt from individual auxiliary tasks. Our best agent is 5.5x faster to reach the performance of the previous state-of-the-art, DD-PPO, at 40M frames, and improves on DD-PPO's performance at 40M frames by 0.16 SPL. Our code is publicly available at https://github.com/joel99/habitat-pointnav-aux.
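The abstract's key idea is attention-based fusion: rather than concatenating the representations learned by each auxiliary task, the agent learns to weight them. A minimal sketch of softmax-attention pooling over per-task embeddings is shown below; the function name, the single learned query vector, and all shapes are illustrative assumptions, not the authors' implementation (see their repository for the real architecture).

```python
import numpy as np

def attentive_fusion(task_embeds, query):
    """Combine per-auxiliary-task embeddings with softmax attention.

    Hypothetical sketch: `task_embeds` has shape (num_tasks, embed_dim),
    one row per auxiliary task; `query` is a learned vector of shape
    (embed_dim,) that scores each task's representation.
    Returns the fused embedding and the attention weights.
    """
    scores = task_embeds @ query            # (num_tasks,) relevance scores
    scores = scores - scores.max()          # subtract max for numerical stability
    weights = np.exp(scores)
    weights = weights / weights.sum()       # softmax: weights sum to 1
    fused = weights @ task_embeds           # weighted average, (embed_dim,)
    return fused, weights

# Toy usage: fuse embeddings from 3 auxiliary tasks, embed_dim = 8.
rng = np.random.default_rng(0)
task_embeds = rng.standard_normal((3, 8))
query = rng.standard_normal(8)
fused, weights = attentive_fusion(task_embeds, query)
```

The soft weighting lets the policy emphasize whichever auxiliary representation is most useful, which is consistent with the abstract's finding that naive combination plateaus while attention does not.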
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt - Computer Vision