Improving Input-Output Linearizing Controllers for Bipedal Robots via Reinforcement Learning

April 15, 2020 · Declared Dead · 🏛 Conference on Learning for Dynamics & Control

👻 CAUSE OF DEATH: Ghosted
No code link whatsoever

"No code URL or promise found in abstract"

Evidence collected by the PWNC Scanner

Authors: Fernando Castañeda, Mathias Wulfman, Ayush Agrawal, Tyler Westenbroek, Claire J. Tomlin, S. Shankar Sastry, Koushil Sreenath
arXiv ID: 2004.07276
Category: eess.SY — Systems & Control (EE)
Cross-listed: cs.LG, cs.RO
Citations: 5
Venue: Conference on Learning for Dynamics & Control
Last Checked: 2 months ago
Abstract
The main drawbacks of input-output linearizing controllers are the need for a precise dynamics model and the inability to account for input constraints. Model uncertainty is common in almost every robotic application, and input saturation is present in every real-world system. In this paper, we address both challenges for the specific case of bipedal robot control using reinforcement learning techniques. Keeping the structure of a standard input-output linearizing controller, we add a learned term that compensates for model uncertainty. Moreover, by adding constraints to the learning problem, we boost the performance of the final controller when input limits are present. We demonstrate the effectiveness of the designed framework for different levels of uncertainty on the five-link planar walking robot RABBIT.
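The abstract's core idea can be illustrated on a toy system: a nominal input-output linearizing controller built from an imperfect model, plus an additive term that compensates the model error. The sketch below uses a simple pendulum with a mismatched mass rather than the paper's RABBIT biped, and `oracle_residual` is a hypothetical stand-in for the learned term (in the paper this term would be trained with reinforcement learning, not computed in closed form).

```python
import numpy as np

G, L, M_TRUE = 9.81, 1.0, 1.0   # true pendulum parameters (assumed toy system)
M_NOM = 0.7                     # mismatched nominal mass: the model uncertainty
KP, KD = 10.0, 5.0              # gains for the linearized double integrator

def true_dynamics(theta, dtheta, u):
    # true plant: theta_dd = -(g/l) sin(theta) + u / (m l^2)
    return -(G / L) * np.sin(theta) + u / (M_TRUE * L**2)

def io_linearizing_u(theta, dtheta, m_hat, residual=None):
    """Nominal input-output linearizing control plus an additive term."""
    v = -KP * theta - KD * dtheta                    # virtual input: theta_dd = v
    u = m_hat * L**2 * (v + (G / L) * np.sin(theta)) # cancel nominal dynamics
    if residual is not None:                         # additive compensation term
        u += residual(theta, dtheta, v)
    return u

def oracle_residual(theta, dtheta, v):
    # Exact model-error correction; a learned regressor would approximate
    # something like this from data in the paper's framework.
    return (M_TRUE - M_NOM) * L**2 * (v + (G / L) * np.sin(theta))

def simulate(residual=None, theta0=0.5, dt=0.01, steps=500):
    # forward-Euler rollout of the closed loop on the true plant
    theta, dtheta = theta0, 0.0
    for _ in range(steps):
        u = io_linearizing_u(theta, dtheta, M_NOM, residual)
        dtheta += dt * true_dynamics(theta, dtheta, u)
        theta += dt * dtheta
    return theta

print(abs(simulate(None)), abs(simulate(oracle_residual)))
```

With the compensation term, the closed loop behaves like the intended linear double integrator; without it, the uncancelled `sin(theta)` term distorts the transient, though the toy regulation task still converges. The paper's input-constraint handling (constraining the learning problem under saturation) is not modeled here.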
