Benchmarking Deep Reinforcement Learning for Continuous Control

April 22, 2016 · Entered Twilight · 🏛 International Conference on Machine Learning

🌅 TWILIGHT: Old Age
Predates the code-sharing era: a pioneer of its time

"Last commit was 7.0 years ago (≥5 year threshold)"

Evidence collected by the PWNC Scanner

Repo contents: .gitignore, CHANGELOG.md, LICENSE, README.md, circle.yml, contrib, docker, docs, environment.yml, examples, rllab, sandbox, scripts, setup.py, tests, vendor

Authors: Yan Duan, Xi Chen, Rein Houthooft, John Schulman, Pieter Abbeel
arXiv ID: 1604.06778
Category: cs.LG (Machine Learning)
Cross-listed: cs.AI, cs.RO
Citations: 1.8K
Venue: International Conference on Machine Learning
Repository: https://github.com/rllab/rllab (⭐ 3043)
Last checked: 1 month ago
Abstract
Recently, researchers have made significant progress combining the advances in deep learning for learning feature representations with reinforcement learning. Some notable examples include training agents to play Atari games based on raw pixel data and to acquire advanced manipulation skills using raw sensory inputs. However, it has been difficult to quantify progress in the domain of continuous control due to the lack of a commonly adopted benchmark. In this work, we present a benchmark suite of continuous control tasks, including classic tasks like cart-pole swing-up, tasks with very high state and action dimensionality such as 3D humanoid locomotion, tasks with partial observations, and tasks with hierarchical structure. We report novel findings based on the systematic evaluation of a range of implemented reinforcement learning algorithms. Both the benchmark and reference implementations are released at https://github.com/rllab/rllab in order to facilitate experimental reproducibility and to encourage adoption by other researchers.
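The benchmark's systematic evaluation reports performance as per-algorithm return statistics aggregated over multiple random seeds. A minimal sketch of that kind of aggregation, using hypothetical helper names rather than rllab's actual API:

```python
import statistics

def summarize_returns(returns_per_seed):
    """Aggregate per-seed episode returns into a mean and a
    between-seed standard deviation, the summary statistics commonly
    reported for continuous-control benchmarks.
    (Hypothetical helper for illustration, not part of rllab.)"""
    seed_means = [statistics.mean(r) for r in returns_per_seed]
    return statistics.mean(seed_means), statistics.stdev(seed_means)

# Example: three seeds, each with a few undiscounted episode returns.
avg, spread = summarize_returns([[100.0, 110.0], [90.0, 95.0], [105.0, 115.0]])
print(round(avg, 1), round(spread, 1))
```

Reporting the spread across seeds, not just the mean, is what makes results on such benchmarks comparable: many policy-gradient methods have high variance between runs.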
Community shame: not yet rated
