Sample-Efficient Model-Free Reinforcement Learning with Off-Policy Critics

March 11, 2019 · Entered Twilight · 🏛 BNAIC/BENELEARN

🌅 TWILIGHT: Old Age
Predates the code-sharing era — a pioneer of its time

"Last commit was 6.0 years ago (β‰₯5 year threshold)"

Evidence collected by the PWNC Scanner

Repo contents: .gitignore, COPYING, README.md, avg_stats.py, bdpi.py, benchmark.py, copy_results.sh, experiments_gym.sh, gym_envs, main.py, paper.pdf, pool.py, poster.pdf, poster.png, results, task_speed.py

Authors: Denis Steckelmacher, Hélène Plisnier, Diederik M. Roijers, Ann Nowé
arXiv ID: 1903.04193
Category: cs.LG (Machine Learning)
Cross-listed: cs.AI
Citations: 18
Venue: BNAIC/BENELEARN
Repository: https://github.com/vub-ai-lab/bdpi ⭐ 25
Last Checked: 2 months ago
Abstract
Value-based reinforcement-learning algorithms provide state-of-the-art results in model-free discrete-action settings, and tend to outperform actor-critic algorithms. We argue that actor-critic algorithms are limited by their need for an on-policy critic. We propose Bootstrapped Dual Policy Iteration (BDPI), a novel model-free reinforcement-learning algorithm for continuous states and discrete actions, with an actor and several off-policy critics. Off-policy critics are compatible with experience replay, ensuring high sample-efficiency, without the need for off-policy corrections. The actor, by slowly imitating the average greedy policy of the critics, leads to high-quality and state-specific exploration, which we compare to Thompson sampling. Because the actor and critics are fully decoupled, BDPI is remarkably stable, and unusually robust to its hyper-parameters. BDPI is significantly more sample-efficient than Bootstrapped DQN, PPO, and ACKTR, on discrete, continuous and pixel-based tasks. Source code: https://github.com/vub-ai-lab/bdpi.
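The abstract's core mechanism, an actor that slowly imitates the average greedy policy of several off-policy critics, can be sketched in a few lines. This is a minimal, illustrative simplification for a single state with discrete actions; the function names, the per-state tabular view, and the `learning_rate` parameter are assumptions for illustration, not the paper's actual implementation (see `bdpi.py` in the repository for that).

```python
import numpy as np

def greedy_policy(q_values):
    """One-hot greedy policy from a single critic's Q-values for one state."""
    policy = np.zeros_like(q_values)
    policy[np.argmax(q_values)] = 1.0
    return policy

def bdpi_actor_update(actor_probs, critic_q_values, learning_rate=0.05):
    """Move the actor's action distribution a small step toward the
    average greedy policy of several off-policy critics.

    actor_probs:     shape (n_actions,), current actor distribution for a state
    critic_q_values: shape (n_critics, n_actions), each critic's Q-values
    learning_rate:   small value => slow imitation of the critics
    """
    # Average of the critics' one-hot greedy policies; disagreement between
    # critics leaves probability mass on several actions, which is what
    # produces the Thompson-sampling-like, state-specific exploration.
    target = np.mean([greedy_policy(q) for q in critic_q_values], axis=0)
    # Slow imitation: convex combination of the old policy and the target,
    # so the result is still a valid probability distribution.
    return (1.0 - learning_rate) * actor_probs + learning_rate * target
```

Because the critics are trained off-policy (and are therefore compatible with experience replay), this actor update needs no off-policy corrections: the actor only ever chases the critics' current greedy recommendations.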
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!
