Omega-Regular Objectives in Model-Free Reinforcement Learning

September 26, 2018 · The Ethereal · 🏛 International Conference on Tools and Algorithms for the Construction and Analysis of Systems

🔮 THE ETHEREAL
Pure theory — exists on a plane beyond code

"No code URL or promise found in abstract"

Evidence collected by the PWNC Scanner

Authors: Ernst Moritz Hahn, Mateo Perez, Sven Schewe, Fabio Somenzi, Ashutosh Trivedi, Dominik Wojtczak
arXiv ID: 1810.00950
Category: cs.LO (Logic in CS)
Cross-listed: cs.LG, stat.ML
Citations: 160
Venue: International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS)
Last checked: 1 month ago
Abstract
We provide the first solution for model-free reinforcement learning of ω-regular objectives for Markov decision processes (MDPs). We present a constructive reduction from the almost-sure satisfaction of ω-regular objectives to an almost-sure reachability problem and extend this technique to learning how to control an unknown model so that the chance of satisfying the objective is maximized. A key feature of our technique is the compilation of ω-regular properties into limit-deterministic Büchi automata instead of the traditional Rabin automata; this choice sidesteps difficulties that have marred previous proposals. Our approach allows us to apply model-free, off-the-shelf reinforcement learning algorithms to compute optimal strategies from the observations of the MDP. We present an experimental evaluation of our technique on benchmark learning problems.
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt — Logic in CS