Flappy Hummingbird: An Open Source Dynamic Simulation of Flapping Wing Robots and Animals

February 25, 2019 · Entered Twilight · 🏛 IEEE International Conference on Robotics and Automation

🌅 TWILIGHT: Old Age
Predates the code-sharing era — a pioneer of its time

"Last commit was 6.0 years ago (โ‰ฅ5 year threshold)"

Evidence collected by the PWNC Scanner

Repo contents: LICENSE, README.md, demo.gif, flappy, fwmav_sim_env.py, fwmav_sim_env_maneuver.py, setup.py, simulation.py, simulation_maneuver.py, test.py, test_simple.py, train.py, train_DDPG.py, train_maneuver_DDPG.py

Authors: Fan Fei, Zhan Tu, Yilun Yang, Jian Zhang, Xinyan Deng
arXiv ID: 1902.09628
Category: cs.RO: Robotics
Cross-listed: cs.AI, cs.LG
Citations: 34
Venue: IEEE International Conference on Robotics and Automation
Repository: https://github.com/purdue-biorobotics/flappy ⭐ 244
Last Checked: 1 month ago
Abstract
Insects and hummingbirds exhibit extraordinary flight capabilities and can simultaneously master seemingly conflicting goals: stable hovering and aggressive maneuvering, unmatched by small scale man-made vehicles. Flapping Wing Micro Air Vehicles (FWMAVs) hold great promise for closing this performance gap. However, design and control of such systems remain challenging due to various constraints. Here, we present an open source high fidelity dynamic simulation for FWMAVs to serve as a testbed for the design, optimization and flight control of FWMAVs. For simulation validation, we recreated the hummingbird-scale robot developed in our lab in the simulation. System identification was performed to obtain the model parameters. The force generation, open-loop and closed-loop dynamic response between simulated and experimental flights were compared and validated. The unsteady aerodynamics and the highly nonlinear flight dynamics present challenging control problems for conventional and learning control algorithms such as Reinforcement Learning. The interface of the simulation is fully compatible with OpenAI Gym environment. As a benchmark study, we present a linear controller for hovering stabilization and a Deep Reinforcement Learning control policy for goal-directed maneuvering. Finally, we demonstrate direct simulation-to-real transfer of both control policies onto the physical robot, further demonstrating the fidelity of the simulation.
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt — Robotics