Using Soft Actor-Critic for Low-Level UAV Control

October 05, 2020 · Entered Twilight · arXiv.org

🌅 TWILIGHT: Old Age
Predates the code-sharing era — a pioneer of its time

"Last commit was 5.0 years ago (≥5 year threshold)"

Evidence collected by the PWNC Scanner

Repo contents: .gitignore, Dockerfile, LICENSE, LICENSE.md, Makefile, README.md, assets, common, evaluate.py, evaluate.sh, evaluate_container.py, evaluate_container.sh, main.py, networks, notebooks, requirements.txt, saved_policies, training.sh

Authors: Gabriel Moraes Barros, Esther Luna Colombini
arXiv ID: 2010.02293
Category: cs.RO (Robotics); cross-listed: cs.AI, cs.LG
Citations: 16
Venue: arXiv.org
Repository: https://github.com/larocs/SAC_uav (⭐ 30)
Last Checked: 2 months ago
Abstract
Unmanned Aerial Vehicles (UAVs), or drones, have recently been used in several civil application domains, from organ delivery to remote locations to wireless network coverage. These platforms, however, are naturally unstable systems for which many different control approaches have been proposed. Generally based on classic and modern control, these algorithms require knowledge of the robot's dynamics. Recently, however, model-free reinforcement learning has been successfully used for controlling drones without any prior knowledge of the robot model. In this work, we present a framework to train the Soft Actor-Critic (SAC) algorithm for low-level control of a quadrotor in a go-to-target task. All experiments were conducted in simulation. With these experiments, we show that SAC can not only learn a robust policy, but can also cope with unseen scenarios. Videos of the simulations are available at https://www.youtube.com/watch?v=9z8vGs0Ri5g and the code at https://github.com/larocs/SAC_uav.
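The two ingredients the abstract names can be sketched concretely: SAC's entropy-regularized ("soft") Bellman target for the critics, and a shaped reward for a go-to-target task. The snippet below is a minimal illustration, not the paper's implementation; the reward weights, the temperature `alpha`, and the helper names are assumptions (the authors' actual networks and hyperparameters live in the linked repository).

```python
import numpy as np

def soft_td_target(reward, done, q1_next, q2_next, log_prob_next,
                   gamma=0.99, alpha=0.2):
    """Soft Bellman backup used by SAC critics.

    Takes the minimum of two target Q-values (to curb overestimation)
    and adds an entropy bonus -alpha * log_prob, which is what makes
    the learned policy exploratory and, empirically, robust.
    """
    soft_value = np.minimum(q1_next, q2_next) - alpha * log_prob_next
    return reward + gamma * (1.0 - done) * soft_value

def go_to_target_reward(position, target, velocity,
                        w_dist=1.0, w_vel=0.1):
    """Illustrative shaped reward for a go-to-target task: penalize
    distance to the target and excessive velocity. The coefficients
    here are hypothetical, not taken from the paper."""
    return -(w_dist * np.linalg.norm(np.asarray(position) - np.asarray(target))
             + w_vel * np.linalg.norm(velocity))
```

For example, with `reward=1.0`, `done=0.0`, next-state Q-values `(2.0, 3.0)`, and `log_prob_next=-1.0`, the soft target is `1.0 + 0.99 * (min(2, 3) + 0.2 * 1.0) = 3.178`: the entropy term raises the target for low-probability (high-entropy) actions.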
Community shame:
Not yet rated
