Actor-Critic Model Predictive Control

📅 2023-06-16
🏛️ IEEE International Conference on Robotics and Automation
📈 Citations: 26
Influential: 0
🤖 AI Summary
Integrating the task-optimization capability of model-free reinforcement learning (RL) with the robustness and online replanning ability of model predictive control (MPC) remains a key challenge in real-time robotic control. This paper proposes an end-to-end actor-critic framework in which the actor is implemented as a differentiable MPC module, coupling short-horizon trajectory optimization with long-horizon value estimation; to the authors' knowledge, it is the first to unify trial-and-error learning and online replanning within a single actor-critic architecture. The method combines deep RL, differentiable dynamics modeling, and real-time trajectory optimization, achieving millisecond-level closed-loop control both in simulation and on a physical quadrotor platform. Experiments demonstrate substantial improvements in high-level task learning performance and out-of-distribution robustness, establishing a new paradigm for learning-based MPC.
📝 Abstract
An open research question in robotics is how to combine the benefits of model-free reinforcement learning (RL)—known for its strong task performance and flexibility in optimizing general reward formulations—with the robustness and online replanning capabilities of model predictive control (MPC). This paper provides an answer by introducing a new framework called Actor-Critic Model Predictive Control. The key idea is to embed a differentiable MPC within an actor-critic RL framework. The proposed approach combines the short-term predictive optimization capabilities of MPC with the exploratory and end-to-end training properties of RL. The resulting policy effectively manages both short-term decisions through the MPC-based actor and long-term prediction via the critic network, unifying the benefits of both model-based control and end-to-end learning. We validate our method in both simulation and the real world with a quadcopter platform across various high-level tasks. We show that the proposed architecture can achieve real-time control performance, learn complex behaviors via trial and error, and retain the predictive properties of the MPC to better handle out-of-distribution behavior.
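The embedding described in the abstract can be illustrated with a toy sketch: the actor is a short-horizon trajectory optimizer whose terminal cost is supplied by the critic, and the closed loop replans at every step as MPC does online. Everything below (1-D linear dynamics, quadratic costs, a fixed quadratic critic, finite-difference optimization) is an illustrative assumption, not the paper's actual models or solver.

```python
import numpy as np

# Toy 1-D linear system: x_{t+1} = a*x_t + b*u_t (illustrative, not the quadrotor model).
a, b = 1.0, 0.5
q, r = 1.0, 0.1   # stage-cost weights (assumed)
H = 5             # short MPC horizon

def critic(x, w=2.0):
    """Quadratic stand-in for the learned value network: V(x) ~ w*x^2."""
    return w * x**2

def rollout_cost(x0, u):
    """Stage costs over the horizon plus the critic's terminal value."""
    x, cost = x0, 0.0
    for u_t in u:
        cost += q * x**2 + r * u_t**2
        x = a * x + b * u_t
    return cost + critic(x)  # critic supplies the long-horizon estimate

def mpc_actor(x0, iters=300, lr=0.1):
    """Gradient-descent trajectory optimizer standing in for the
    differentiable MPC actor; returns the first control of the plan."""
    u = np.zeros(H)
    eps = 1e-5
    for _ in range(iters):
        g = np.zeros(H)
        for i in range(H):  # finite-difference gradient of the rollout cost
            up = u.copy()
            up[i] += eps
            g[i] = (rollout_cost(x0, up) - rollout_cost(x0, u)) / eps
        u -= lr * g
    return u[0]             # receding horizon: apply only the first action

# Closed loop: replan at every step, as MPC does online.
x = 3.0
for _ in range(10):
    x = a * x + b * mpc_actor(x)
print(round(abs(x), 3))  # state is driven toward the origin
```

In the actual paper the critic is trained jointly with the actor and gradients flow through the MPC solution itself; this sketch only shows the structural idea of an MPC actor with a critic-provided terminal value.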
Problem

Research questions and friction points this paper is trying to address.

How to combine the task performance of model-free RL with the robustness and online replanning of MPC
How to embed a differentiable MPC as the actor within an actor-critic RL framework
How to achieve real-time control while learning complex behaviors via trial and error
Innovation

Methods, ideas, or system contributions that make the work stand out.

Embeds a differentiable MPC as the actor in an actor-critic framework
Unifies short-horizon MPC trajectory optimization with long-horizon critic value estimation
Demonstrates real-time control performance in simulation and on a physical quadrotor
Angel Romero
Robotics and Perception Group, Department of Informatics, University of Zurich, and Department of Neuroinformatics, University of Zurich and ETH Zurich, Switzerland
Yunlong Song
Genesis AI
Robotics · Learning · Control · Vision
D. Scaramuzza
Robotics and Perception Group, Department of Informatics, University of Zurich, and Department of Neuroinformatics, University of Zurich and ETH Zurich, Switzerland