Event-Triggered Model Predictive Control with Deep Reinforcement Learning for Autonomous Driving

August 22, 2022 · Entered Twilight · 🏛 IEEE Transactions on Intelligent Vehicles

💤 TWILIGHT: Eternal Rest
Repo abandoned since publication

Repo contents: PPO, README.md, RL_lib.py, RLeMPC_LSTM.py, RLeMPC_PER.py, RLeMPC_PER_LSTM.py, agents, deepRLeMPCsup.pdf, requirements.txt, sacd, threshold-control.py, train_a2c.py, train_ppo.py, train_sac.py, veh_env2.py

Authors: Fengying Dang, Dong Chen, Jun Chen, Zhaojian Li
arXiv ID: 2208.10302
Category: cs.RO (Robotics), cross-listed eess.SY
Citations: 51
Venue: IEEE Transactions on Intelligent Vehicles
Repository: https://github.com/DangFengying/RL-based-event-triggered-MPC ⭐ 80
Last Checked: 1 month ago
Abstract
Event-triggered model predictive control (eMPC) is a popular optimal control method that aims to alleviate the computation and/or communication burden of MPC. However, it generally requires a priori knowledge of the closed-loop system behavior and the communication characteristics to design the event-trigger policy. This paper addresses that challenge by proposing an efficient eMPC framework and demonstrating its successful application to autonomous vehicle path following. First, a model-free reinforcement learning (RL) agent learns the optimal event-trigger policy without requiring complete knowledge of the system dynamics or communication characteristics. Furthermore, techniques including a prioritized experience replay (PER) buffer and long short-term memory (LSTM) are employed to foster exploration and improve training efficiency. The proposed framework is instantiated with three deep RL algorithms: Double Deep Q-Network (DDQN), Proximal Policy Optimization (PPO), and Soft Actor-Critic (SAC). Experimental results show that all three deep RL-based eMPC variants (deep-RL-eMPC) achieve better evaluation performance than the conventional threshold-based and previous linear Q-based approaches in autonomous path following. In particular, PPO-eMPC with LSTM and DDQN-eMPC with PER and LSTM obtain a superior balance between closed-loop control performance and event-trigger frequency. The associated code is open-sourced and available at: https://github.com/DangFengying/RL-based-event-triggered-MPC.
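The core loop the abstract describes can be sketched as follows: at each step a trigger policy decides whether to re-solve the MPC problem (an "event") or to keep applying controls buffered from the last solve. The sketch below is illustrative only, assuming toy scalar dynamics and a threshold trigger (the baseline the paper compares against); in the paper's framework a deep RL agent (DDQN/PPO/SAC) makes the trigger decision instead. All function names here are hypothetical and not taken from the repository.

```python
def solve_mpc(state, horizon=10):
    """Placeholder for the MPC solve: returns a control sequence that
    drives the toy scalar state toward zero. A real implementation would
    solve a constrained finite-horizon optimal control problem."""
    return [-0.5 * state] * horizon

def trigger_policy(state):
    """Stand-in for the event-trigger policy. Here it is a simple
    threshold rule on the state magnitude; the paper replaces this
    binary decision with a learned deep RL policy."""
    return abs(state) > 0.1

def run_episode(x0, steps=20):
    state = x0
    buffer = solve_mpc(state)          # initial MPC solve
    n_solves = 1
    for _ in range(steps):
        if trigger_policy(state):      # event: re-solve MPC
            buffer = solve_mpc(state)
            n_solves += 1
        u = buffer.pop(0)              # otherwise reuse buffered controls
        if not buffer:                 # refill if the buffer runs dry
            buffer = solve_mpc(state)
        state = state + u              # toy dynamics: x_{t+1} = x_t + u_t
    return state, n_solves
```

The event-trigger frequency vs. control performance trade-off the abstract mentions shows up directly here: a looser trigger means fewer calls to `solve_mpc` (less computation) at the cost of acting on stale control sequences.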
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt: Robotics