🤖 AI Summary
This paper addresses the challenge of deeply integrating model predictive control (MPC) and reinforcement learning (RL), which stems from their fundamentally different ways of using models. To resolve this, we propose the first unified taxonomy for MPC–RL fusion, centered on *how models are used*, categorizing approaches into three paradigms: MPC-augmented RL, RL-augmented MPC, and co-designed architectures. Leveraging a unified actor–critic modeling framework, we systematically analyze how MPC's online optimization enhances RL's closed-loop performance and establish a performance-gain-oriented evaluation perspective grounded in closed-loop metrics. The survey covers six application domains, including robotics, energy systems, and autonomous driving, and synthesizes cross-cutting modeling techniques that bridge control theory and RL. Our work provides a scalable methodology and principled design guidelines for hybrid intelligent control systems.
📝 Abstract
Model predictive control (MPC) and reinforcement learning (RL) are two successful control techniques for Markov decision processes. Both approaches derive from similar fundamental principles, and both are widely used in practical applications, including robotics, process control, energy systems, and autonomous driving. Despite these similarities, MPC and RL follow distinct paradigms that emerged from different communities with different requirements. Several technical discrepancies, particularly in the role the environment model plays within the algorithm, lead to methodologies with nearly complementary advantages. Owing to these orthogonal benefits, research interest in combining the two has recently increased significantly, producing a large and growing set of complex ideas that leverage both MPC and RL. This work illuminates the differences, similarities, and fundamentals that enable different combination algorithms and categorizes existing work accordingly. In particular, we build our categorization on the versatile actor–critic RL framework and examine how the online optimization of MPC can be used to improve the overall closed-loop performance of a policy.
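To make the actor–critic view of MPC concrete, here is a minimal sketch (not the paper's algorithm; the toy dynamics, cost weights, and grid search are all illustrative assumptions): a short-horizon MPC acts as the actor, and a quadratic critic supplies the terminal value that closes the truncated horizon.

```python
import numpy as np

# Hypothetical toy setup: a discrete-time double integrator with quadratic
# stage cost. In a learned scheme the critic weights W would be trained;
# here W is fixed so the sketch is self-contained.
A = np.array([[1.0, 0.1], [0.0, 1.0]])   # state transition (position, velocity)
B = np.array([[0.0], [0.1]])             # input matrix
Q = np.eye(2)                            # stage cost on the state
R = np.array([[0.1]])                    # stage cost on the input
W = np.eye(2)                            # critic's quadratic value weights (stand-in for a learned critic)

def mpc_actor(x, horizon=5, candidates=np.linspace(-1.0, 1.0, 101)):
    """Crude MPC-as-actor: grid-search the first input, roll the model
    forward over a short horizon, and add the critic's terminal value."""
    best_u, best_cost = 0.0, np.inf
    for u0 in candidates:
        x_t = x.copy()
        u = np.array([[u0]])
        cost = 0.0
        for _ in range(horizon):
            cost += float(x_t.T @ Q @ x_t + u.T @ R @ u)
            x_t = A @ x_t + B @ u
            u = np.array([[0.0]])        # only the first input is optimized in this sketch
        cost += float(x_t.T @ W @ x_t)   # critic-supplied terminal value closes the horizon
        if cost < best_cost:
            best_u, best_cost = float(u0), cost
    return best_u
```

In this framing, improving the critic `W` improves the MPC actor's closed-loop behavior without changing the optimizer, which is the kind of interaction the actor–critic categorization in the survey is meant to expose.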