The Ethereal
Omega-Regular Objectives in Model-Free Reinforcement Learning
September 26, 2018 · The Ethereal · International Conference on Tools and Algorithms for the Construction and Analysis of Systems
"No code URL or promise found in abstract"
Evidence collected by the PWNC Scanner
Authors
Ernst Moritz Hahn, Mateo Perez, Sven Schewe, Fabio Somenzi, Ashutosh Trivedi, Dominik Wojtczak
arXiv ID
1810.00950
Category
cs.LO: Logic in Computer Science
Cross-listed
cs.LG, stat.ML
Citations
160
Venue
International Conference on Tools and Algorithms for the Construction and Analysis of Systems
Last Checked
1 month ago
Abstract
We provide the first solution for model-free reinforcement learning of ω-regular objectives for Markov decision processes (MDPs). We present a constructive reduction from the almost-sure satisfaction of ω-regular objectives to an almost-sure reachability problem and extend this technique to learning how to control an unknown model so that the chance of satisfying the objective is maximized. A key feature of our technique is the compilation of ω-regular properties into limit-deterministic Büchi automata instead of the traditional Rabin automata; this choice sidesteps difficulties that have marred previous proposals. Our approach allows us to apply model-free, off-the-shelf reinforcement learning algorithms to compute optimal strategies from the observations of the MDP. We present an experimental evaluation of our technique on benchmark learning problems.
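The flavor of the reduction can be illustrated with a toy sketch: take a small chain MDP whose objective is "visit the rightmost state infinitely often" (a Büchi condition whose automaton has a single state, so the product MDP coincides with the MDP), and whenever an accepting transition is taken, jump with probability 1 − ζ to a rewarded absorbing sink. Plain off-the-shelf Q-learning then maximizes the probability of reaching that sink. Everything below (the 4-state chain, the slip probability, ζ, and the learning hyperparameters) is an illustrative assumption for this sketch, not a detail taken from the paper.

```python
import random
from collections import defaultdict

# Toy chain MDP: states 0..3; action 0 moves left, action 1 moves right,
# each slipping (staying put) with probability 0.1. A transition is
# "accepting" exactly when it enters state 3.
N_STATES, ACTIONS = 4, (0, 1)
ZETA = 0.99                    # continue after an accepting transition w.p. ZETA
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1

def step(s, a):
    intended = max(s - 1, 0) if a == 0 else min(s + 1, N_STATES - 1)
    return intended if random.random() > 0.1 else s

Q = defaultdict(float)         # tabular Q-values, initialized to 0

def run_episode(max_steps=200):
    s = 0
    for _ in range(max_steps):
        # epsilon-greedy action selection
        if random.random() < EPS:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda b: Q[(s, b)])
        s2 = step(s, a)
        accepting = (s2 == N_STATES - 1)
        # Reduction to reachability: after an accepting transition,
        # move to the rewarded absorbing sink with probability 1 - ZETA.
        if accepting and random.random() > ZETA:
            Q[(s, a)] += ALPHA * (1.0 - Q[(s, a)])   # reward 1, episode ends
            return
        best_next = max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += ALPHA * (GAMMA * best_next - Q[(s, a)])
        s = s2

random.seed(0)
for _ in range(3000):
    run_episode()

# The greedy policy should move right everywhere, visiting state 3
# infinitely often and thus satisfying the Büchi objective.
policy = {s: max(ACTIONS, key=lambda b: Q[(s, b)]) for s in range(N_STATES)}
print(policy)
```

Because reward flows only through the sink, any learner that maximizes reachability probability in the augmented MDP also maximizes the chance of satisfying the original ω-regular objective, which is what lets standard model-free algorithms be reused unchanged.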
Community Contributions
Found the code? Know the venue? Think something is wrong? Let us know!
Similar Papers
In the same crypt · Logic in Computer Science
Safe Reinforcement Learning via Shielding
Formal Verification of Piece-Wise Linear Feed-Forward Neural Networks
Heterogeneous substitution systems revisited
Weakest Precondition Reasoning for Expected Run-Times of Probabilistic Programs