A Survey on Reproducibility by Evaluating Deep Reinforcement Learning Algorithms on Real-World Robots

September 09, 2019 · Entered Twilight · 🏛 Conference on Robot Learning

🌅 TWILIGHT: Old Age
Predates the code-sharing era – a pioneer of its time

"Last commit was 6.0 years ago (โ‰ฅ5 year threshold)"

Evidence collected by the PWNC Scanner

Repo contents: .gitignore, INSTALL.md, LICENSE, README.md, code, evaluate
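
The staleness evidence above is mechanical: the scanner flags a repository once its newest commit crosses an age threshold. The PWNC Scanner's own code is not shown here, so the following is only a minimal sketch of how such a check could work, assuming a local clone and the git CLI; the function name, repo path, and threshold constant are illustrative, not the scanner's actual implementation.

```python
import datetime
import subprocess

STALE_AFTER_YEARS = 5.0  # hypothetical threshold, matching the quote above


def years_since_last_commit(repo_path: str) -> float:
    """Return the age of the most recent commit in a local clone, in years."""
    # %ct prints the committer date of HEAD as a Unix timestamp.
    out = subprocess.run(
        ["git", "-C", repo_path, "log", "-1", "--format=%ct"],
        capture_output=True, text=True, check=True,
    )
    last_commit = datetime.datetime.fromtimestamp(
        int(out.stdout.strip()), tz=datetime.timezone.utc
    )
    age = datetime.datetime.now(datetime.timezone.utc) - last_commit
    return age.days / 365.25


if __name__ == "__main__":
    age_years = years_since_last_commit("SenseActExperiments")
    if age_years >= STALE_AFTER_YEARS:
        print(f"Last commit was {age_years:.1f} years ago "
              f"(>={STALE_AFTER_YEARS:.0f} year threshold)")
```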

Authors: Nicolai A. Lynnerup, Laura Nolling, Rasmus Hasle, John Hallam
arXiv ID: 1909.03772
Category: cs.LG: Machine Learning
Cross-listed: cs.AI, cs.RO, stat.ML
Citations: 25
Venue: Conference on Robot Learning
Repository: https://github.com/dti-research/SenseActExperiments/ (⭐ 1)
Last Checked: 1 month ago
Abstract
As reinforcement learning (RL) achieves more success in solving complex tasks, more care is needed to ensure that RL research is reproducible and that algorithms can be compared easily and fairly with minimal bias. RL results are, however, notoriously hard to reproduce due to the algorithms' intrinsic variance, the environments' stochasticity, and numerous (potentially unreported) hyper-parameters. In this work we investigate the many issues leading to irreproducible research and how to manage them. We further show how a rigorous and standardised evaluation approach eases the documentation, evaluation, and fair comparison of different algorithms, emphasising the importance of choosing the right measurement metrics and conducting proper statistics on the results for unbiased reporting.
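
The abstract's call for "proper statistics on the results" typically means aggregating over many independent random seeds rather than reporting a single training curve. As one hedged illustration (not the paper's exact protocol), the sketch below bootstraps a confidence interval on per-seed final returns and compares two algorithms with Welch's t-test; the scores and algorithm labels are made-up placeholders, not results from the paper.

```python
import numpy as np
from scipy import stats


def bootstrap_ci(scores, n_resamples=10_000, ci=0.95, rng=None):
    """Bootstrap confidence interval for the mean of per-seed scores."""
    rng = np.random.default_rng(rng)
    scores = np.asarray(scores, dtype=float)
    # Resample seeds with replacement and take the mean of each resample.
    means = rng.choice(scores, size=(n_resamples, scores.size),
                       replace=True).mean(axis=1)
    lo, hi = np.percentile(means, [(1 - ci) / 2 * 100, (1 + ci) / 2 * 100])
    return scores.mean(), lo, hi


# Illustrative final returns over 10 seeds per algorithm (fabricated numbers).
algo_a = [712, 650, 689, 705, 598, 731, 644, 702, 668, 690]
algo_b = [645, 602, 688, 611, 590, 655, 620, 633, 601, 648]

for name, scores in [("Algorithm A", algo_a), ("Algorithm B", algo_b)]:
    mean, lo, hi = bootstrap_ci(scores, rng=0)
    print(f"{name}: mean return {mean:.1f}, 95% CI [{lo:.1f}, {hi:.1f}]")

# Welch's t-test does not assume equal variance across the two algorithms,
# which matters because RL methods often differ in run-to-run spread.
t, p = stats.ttest_ind(algo_a, algo_b, equal_var=False)
print(f"Welch's t-test: t={t:.2f}, p={p:.3f}")
```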
Community shame: Not yet rated

📜 Similar Papers

In the same crypt – Machine Learning