Deconfounding Reinforcement Learning in Observational Settings

December 26, 2018 · Entered Twilight · arXiv.org

🌅 TWILIGHT: Old Age
Predates the code-sharing era: a pioneer of its time

"Last commit was 6.0 years ago (β‰₯5 year threshold)"

Evidence collected by the PWNC Scanner

Repo contents: README.md, ac_decon, model_decon_uBernoulli, model_decon_uGaussian

Authors: Chaochao Lu, Bernhard Schölkopf, José Miguel Hernández-Lobato
arXiv ID: 1812.10576
Category: cs.LG (Machine Learning)
Cross-listed: stat.ML
Citations: 76
Venue: arXiv.org
Repository: https://github.com/CausalRL/DRL (⭐ 52)
Last Checked: 1 month ago
Abstract
We propose a general formulation for addressing reinforcement learning (RL) problems in settings with observational data. That is, we consider the problem of learning good policies solely from historical data in which unobserved factors (confounders) affect both observed actions and rewards. Our formulation allows us to extend a representative RL algorithm, the Actor-Critic method, to its deconfounding variant, with the methodology for this extension being easily applied to other RL algorithms. In addition, we develop a new benchmark for evaluating deconfounding RL algorithms by modifying the OpenAI Gym environments and the MNIST dataset. Using this benchmark, we demonstrate that the proposed algorithms are superior to traditional RL methods in confounded environments with observational data. To the best of our knowledge, this is the first time that confounders are taken into consideration for addressing full RL problems with observational data. Code is available at https://github.com/CausalRL/DRL.
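The failure mode the abstract describes can be reproduced in miniature: when a hidden confounder drives both the logged action and the reward, value estimates computed naively from the historical data are biased, while adjusting for the confounder recovers the true effect. The Python sketch below is a toy contextual-bandit illustration of that point, not the paper's actual benchmark or its deconfounded Actor-Critic; the function name generate_confounded_logs and all probabilities are made up for illustration, and the adjustment here uses oracle access to the confounder u, which the paper's method must instead infer from data.

import numpy as np

rng = np.random.default_rng(0)

def generate_confounded_logs(n=100_000):
    # Hidden confounder u: never recorded in the logs.
    u = rng.binomial(1, 0.5, size=n)
    # Behavior policy peeks at u: strongly prefers action 1 when u == 1.
    a = rng.binomial(1, np.where(u == 1, 0.95, 0.05))
    # Reward: action 1 is genuinely better (+0.3), but u == 1
    # independently hurts the reward (-0.45).
    r = rng.binomial(1, 0.5 + 0.3 * a - 0.45 * u)
    return u, a, r

u, a, r = generate_confounded_logs()

# Naive estimate E[r | a] from the logs: ignores u and is biased,
# because action 1 was mostly taken in the bad (u == 1) states.
naive = {act: r[a == act].mean() for act in (0, 1)}

# Backdoor adjustment with oracle access to u:
#   E[r | do(a)] = sum over u of P(u) * E[r | a, u]
p_u1 = u.mean()
adjusted = {
    act: (1 - p_u1) * r[(a == act) & (u == 0)].mean()
    + p_u1 * r[(a == act) & (u == 1)].mean()
    for act in (0, 1)
}

print("naive   :", naive)     # ranks action 0 above action 1 (wrong)
print("adjusted:", adjusted)  # recovers action 1 as the better arm

The paper's contribution is to obtain the benefit of the adjusted estimate without observing u, by fitting a latent-confounder model to the observational data (the repo files model_decon_uBernoulli and model_decon_uGaussian appear to correspond to Bernoulli and Gaussian choices of that latent variable) and building the Actor-Critic update on top of it.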
Community shame:
Not yet rated

📜 Similar Papers

In the same crypt: Machine Learning