Online Observer-Based Inverse Reinforcement Learning

November 03, 2020 · Declared Dead · 🏛 IEEE Control Systems Letters

👻 CAUSE OF DEATH: Ghosted
No code link whatsoever

"No code URL or promise found in abstract"

Evidence collected by the PWNC Scanner

Authors: Ryan Self, Kevin Coleman, He Bai, Rushikesh Kamalapurkar
arXiv ID: 2011.02057
Category: eess.SY: Systems & Control (EE)
Cross-listed: cs.LG
Citations: 24
Venue: IEEE Control Systems Letters
Last Checked: 1 month ago
Abstract
In this paper, a novel approach to the output-feedback inverse reinforcement learning (IRL) problem is developed by casting the IRL problem, for linear systems with quadratic cost functions, as a state estimation problem. Two observer-based techniques for IRL are developed, including a novel observer method that re-uses previous state estimates via history stacks. Theoretical guarantees for convergence and robustness are established under appropriate excitation conditions. Simulations demonstrate the performance of the developed observers and filters under noisy and noise-free measurements.
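The paper ships no code, but the history-stack mechanism the abstract alludes to can be sketched generically: past regressor/measurement pairs are stored, and the unknown parameters are solved for once the stacked data satisfy an excitation (rank) condition. The snippet below is a hypothetical illustration of that general idea under made-up dimensions, noise levels, and thresholds; it is not the authors' observer.

```python
import numpy as np

# Hypothetical sketch of a history-stack estimator: store past
# regressor/measurement pairs and solve a least-squares problem once
# the stacked data are sufficiently "excited" (information matrix
# bounded away from singularity). All names and values are illustrative.

rng = np.random.default_rng(0)

theta_true = np.array([2.0, -1.0, 0.5])   # unknown parameters (e.g. cost weights)

stack_phi, stack_y = [], []
theta_hat = None
for t in range(50):
    phi = rng.normal(size=3)                     # regressor built from measurements
    y = phi @ theta_true + 1e-3 * rng.normal()   # noisy scalar measurement
    stack_phi.append(phi)                        # history stack re-uses old data
    stack_y.append(y)

    Phi = np.array(stack_phi)
    G = Phi.T @ Phi                              # information matrix of the stack
    # Excitation condition: smallest eigenvalue bounded away from zero.
    if np.linalg.eigvalsh(G)[0] > 1e-6:
        theta_hat = np.linalg.solve(G, Phi.T @ np.array(stack_y))

print(theta_hat)   # close to theta_true once the stack is rank-sufficient
```

The excitation check here plays the role the abstract's "appropriate excitation conditions" play in the convergence guarantees: until the stack spans the parameter space, no estimate is produced.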
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt — Systems & Control (EE)

Died the same way — 👻 Ghosted