Model-Based Active Exploration

October 29, 2018 · Entered Twilight · 🏛 International Conference on Machine Learning

🌅 TWILIGHT: Old Age
Predates the code-sharing era – a pioneer of its time

"Last commit was 6.0 years ago (≥5 year threshold)"

Evidence collected by the PWNC Scanner

Repo contents: .gitignore, bare_metal_sac.py, buffer.py, conda_env.yml, envs, imagination.py, logger.py, main.py, measures.py, models.py, normalizer.py, readme.md, sac.py, sacred_fetcher.py, tests.py, utilities.py, wrappers.py

Authors: Pranav Shyam, Wojciech Jaśkowski, Faustino Gomez
arXiv ID: 1810.12162
Category: cs.LG (Machine Learning)
Cross-listed: cs.AI, cs.IT, cs.NE, stat.ML
Citations: 192
Venue: International Conference on Machine Learning
Repository: https://github.com/nnaisense/max (⭐ 81)
Last Checked: 1 month ago
Abstract
Efficient exploration is an unsolved problem in Reinforcement Learning which is usually addressed by reactively rewarding the agent for fortuitously encountering novel situations. This paper introduces an efficient active exploration algorithm, Model-Based Active eXploration (MAX), which uses an ensemble of forward models to plan to observe novel events. This is carried out by optimizing agent behaviour with respect to a measure of novelty derived from the Bayesian perspective of exploration, which is estimated using the disagreement between the futures predicted by the ensemble members. We show empirically that in semi-random discrete environments where directed exploration is critical to make progress, MAX is at least an order of magnitude more efficient than strong baselines. MAX scales to high-dimensional continuous environments where it builds task-agnostic models that can be used for any downstream task.
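The core idea in the abstract is that novelty can be estimated from how much the members of an ensemble of forward models disagree about the future. The sketch below illustrates that idea with a toy ensemble; the `ensemble_disagreement` helper and the total-predictive-variance proxy are illustrative assumptions, not the paper's exact Bayesian utility (MAX derives a divergence-based measure), and the linear "models" stand in for trained networks.

```python
import numpy as np

def ensemble_disagreement(models, state, action):
    """Novelty proxy: variance across next states predicted by an
    ensemble of forward models (hypothetical helper; a stand-in for
    the paper's disagreement-based utility)."""
    preds = np.stack([m(state, action) for m in models])  # (n_models, dim)
    return float(preds.var(axis=0).sum())  # total predictive variance

# Toy ensemble: linear forward models with slightly different weights,
# mimicking independently trained networks that agree on well-explored
# transitions and disagree on novel ones.
rng = np.random.default_rng(0)
models = [
    (lambda s, a, W=rng.normal(size=(2, 2)): W @ s + a)
    for _ in range(4)
]

s, a = np.ones(2), np.zeros(2)
novelty = ensemble_disagreement(models, s, a)
# Higher values mean the ensemble disagrees more about the outcome,
# i.e. the transition is more novel under the agent's current models.
```

In active exploration, a planner would score candidate action sequences by the disagreement they are expected to induce and steer the agent toward high-disagreement regions, rather than rewarding novelty only after it is stumbled upon.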
Community shame:
Not yet rated

📜 Similar Papers

In the same crypt – Machine Learning