Sample-Efficient Model-Free Reinforcement Learning with Off-Policy Critics
March 11, 2019 · Entered Twilight · BNAIC/BENELEARN
"Last commit was 6.0 years ago (≥5 year threshold)"
Evidence collected by the PWNC Scanner
Repo contents: .gitignore, COPYING, README.md, avg_stats.py, bdpi.py, benchmark.py, copy_results.sh, experiments_gym.sh, gym_envs, main.py, paper.pdf, pool.py, poster.pdf, poster.png, results, task_speed.py
Authors
Denis Steckelmacher, Hélène Plisnier, Diederik M. Roijers, Ann Nowé
arXiv ID
1903.04193
Category
cs.LG: Machine Learning
Cross-listed
cs.AI
Citations
18
Venue
BNAIC/BENELEARN
Repository
https://github.com/vub-ai-lab/bdpi
★ 25
Last Checked
2 months ago
Abstract
Value-based reinforcement-learning algorithms provide state-of-the-art results in model-free discrete-action settings, and tend to outperform actor-critic algorithms. We argue that actor-critic algorithms are limited by their need for an on-policy critic. We propose Bootstrapped Dual Policy Iteration (BDPI), a novel model-free reinforcement-learning algorithm for continuous states and discrete actions, with an actor and several off-policy critics. Off-policy critics are compatible with experience replay, ensuring high sample-efficiency, without the need for off-policy corrections. The actor, by slowly imitating the average greedy policy of the critics, leads to high-quality and state-specific exploration, which we compare to Thompson sampling. Because the actor and critics are fully decoupled, BDPI is remarkably stable, and unusually robust to its hyper-parameters. BDPI is significantly more sample-efficient than Bootstrapped DQN, PPO, and ACKTR, on discrete, continuous and pixel-based tasks. Source code: https://github.com/vub-ai-lab/bdpi.
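The abstract's core mechanism is that the actor slowly imitates the average greedy policy of several off-policy critics. The following is a minimal sketch of that update for a single state, assuming tabular action probabilities and a small imitation rate; all names and the `lr` parameter are illustrative, and the actual BDPI update rule is the one in the paper and the linked repository.

```python
import numpy as np

def greedy_policy(q_values):
    """One-hot greedy policy derived from one critic's Q-values for a state."""
    policy = np.zeros_like(q_values)
    policy[np.argmax(q_values)] = 1.0
    return policy

def actor_update(actor_probs, critic_qs, lr=0.05):
    """Move the actor's action distribution a small step toward the
    average greedy policy of the (off-policy) critics.

    actor_probs: current actor probabilities for one state, shape (n_actions,)
    critic_qs:   list of per-critic Q-value vectors, each shape (n_actions,)
    lr:          illustrative imitation rate (not the paper's exact setting)
    """
    avg_greedy = np.mean([greedy_policy(q) for q in critic_qs], axis=0)
    return (1 - lr) * actor_probs + lr * avg_greedy
```

Because each critic votes with its own greedy action, the averaged target is a distribution over actions rather than a single arm, which is what gives the actor the Thompson-sampling-like, state-specific exploration described above.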
Community Contributions
Found the code? Know the venue? Think something is wrong? Let us know!
Similar Papers
In the same crypt · Machine Learning
XGBoost: A Scalable Tree Boosting System (Ghosted)
Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift (Ghosted)
Semi-Supervised Classification with Graph Convolutional Networks (Ghosted)
Proximal Policy Optimization Algorithms (Ghosted)