Event-Triggered Model Predictive Control with Deep Reinforcement Learning for Autonomous Driving
August 22, 2022 · Entered Twilight · IEEE Transactions on Intelligent Vehicles
Repo contents: PPO, README.md, RL_lib.py, RLeMPC_LSTM.py, RLeMPC_PER.py, RLeMPC_PER_LSTM.py, agents, deepRLeMPCsup.pdf, requirements.txt, sacd, threshold-control.py, train_a2c.py, train_ppo.py, train_sac.py, veh_env2.py
Authors
Fengying Dang, Dong Chen, Jun Chen, Zhaojian Li
arXiv ID
2208.10302
Category
cs.RO: Robotics
Cross-listed
eess.SY
Citations
51
Venue
IEEE Transactions on Intelligent Vehicles
Repository
https://github.com/DangFengying/RL-based-event-triggered-MPC
⭐ 80
Last Checked
1 month ago
Abstract
Event-triggered model predictive control (eMPC) is a popular optimal control method that aims to alleviate the computation and/or communication burden of MPC. However, it generally requires a priori knowledge of the closed-loop system behavior and the communication characteristics to design the event-trigger policy. This paper addresses this challenge by proposing an efficient eMPC framework and demonstrates a successful implementation of this framework on autonomous vehicle path following. First, a model-free reinforcement learning (RL) agent is used to learn the optimal event-trigger policy without requiring complete knowledge of the system dynamics and communication characteristics. Furthermore, techniques including the prioritized experience replay (PER) buffer and long short-term memory (LSTM) are employed to foster exploration and improve training efficiency. In this paper, we use the proposed framework with three deep RL algorithms, i.e., Double Q-learning (DDQN), Proximal Policy Optimization (PPO), and Soft Actor-Critic (SAC), to solve this problem. Experimental results show that all three deep-RL-based eMPC (deep-RL-eMPC) variants achieve better evaluation performance than the conventional threshold-based and the previous linear-Q-based approach in autonomous path following. In particular, PPO-eMPC with LSTM and DDQN-eMPC with PER and LSTM obtain a superior balance between closed-loop control performance and event-trigger frequency. The associated code is open-sourced and available at: https://github.com/DangFengying/RL-based-event-triggered-MPC.
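The core idea of the abstract can be sketched as a control loop in which a learned trigger policy decides, at each time step, whether to re-solve the MPC problem or keep consuming the previously computed input sequence. The sketch below is illustrative only: `solve_mpc`, `trigger_policy`, the toy scalar dynamics, and the simple time-since-solve rule (standing in for the learned DDQN/PPO/SAC policy) are all assumptions, not the repository's actual API.

```python
def solve_mpc(state, horizon=10):
    """Placeholder MPC solver: returns an open-loop input sequence.

    A real implementation would optimize over the prediction horizon;
    here we just return a zero sequence of the right length.
    """
    return [0.0] * horizon


def trigger_policy(state, steps_since_solve):
    """Stand-in for the learned RL event-trigger policy (DDQN/PPO/SAC
    in the paper). Here: trigger after 3 steps of reusing the old plan.
    """
    return steps_since_solve >= 3


def run_episode(x0, n_steps=20):
    """Event-triggered MPC loop: re-solve only when the policy fires,
    otherwise apply the next input from the cached sequence."""
    state = x0
    plan, cursor, solves = solve_mpc(state), 0, 1
    for _ in range(n_steps):
        if cursor >= len(plan) or trigger_policy(state, cursor):
            plan, cursor = solve_mpc(state), 0  # event: re-solve MPC
            solves += 1
        u = plan[cursor]                        # otherwise reuse cached plan
        cursor += 1
        state = 0.9 * state + u                 # toy scalar dynamics
    return solves
```

With these placeholder choices, an episode of 20 steps re-solves the MPC only 7 times instead of 20, which is the computation saving that event triggering targets; the RL agent's job in the paper is to learn *when* those triggers should fire.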
Similar Papers
In the same crypt: Robotics
- ORB-SLAM2: an Open-Source SLAM System for Monocular, Stereo and RGB-D Cameras (R.I.P. · 👻 Ghosted)
- VINS-Mono: A Robust and Versatile Monocular Visual-Inertial State Estimator (R.I.P. · 👻 Ghosted)
- ORB-SLAM3: An Accurate Open-Source Library for Visual, Visual-Inertial and Multi-Map SLAM (R.I.P. · 👻 Ghosted)
- Domain Randomization for Transferring Deep Neural Networks from Simulation to the Real World (R.I.P. · 👻 Ghosted)