🤖 AI Summary
To address the high computational cost and poor real-time deployability of nonlinear Model Predictive Control (MPC), this paper proposes the Latent Linear Quadratic Regulator (LaLQR), a framework that learns a latent-space transformation mapping nonlinear dynamics and non-quadratic costs into linear dynamics and quadratic costs, enabling efficient LQR-based control synthesis in the latent space. LaLQR jointly optimizes the latent mapping and the controller via end-to-end differentiable imitation learning of the original MPC, balancing accuracy, real-time performance, and generalization. The method combines deep neural networks for representation learning, classical LQR theory, and MPC-driven supervised learning. Evaluated across diverse robotic control tasks, LaLQR achieves over 10x faster inference than standard nonlinear MPC while outperforming both traditional MPC and learned neural controllers in generalization, without sacrificing control accuracy.
📝 Abstract
Model predictive control (MPC) plays an increasingly crucial role in various robotic control tasks, but its high computational requirements are a concern, especially for nonlinear dynamical models. This paper presents a $\textbf{la}$tent $\textbf{l}$inear $\textbf{q}$uadratic $\textbf{r}$egulator (LaLQR) that maps the state space into a latent space in which the dynamical model is linear and the cost function is quadratic, allowing the efficient application of LQR. We jointly learn this alternative system by imitating the original MPC. Experiments show LaLQR's superior efficiency and generalization compared to other baselines.
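Once the latent linear dynamics and quadratic cost have been learned, the control-synthesis step reduces to standard discrete-time infinite-horizon LQR, which is what makes inference cheap. The sketch below illustrates that step only; the matrices `A`, `B`, `Q`, `R` and the `encoder` placeholder are illustrative stand-ins, not the paper's learned models (which use neural networks trained by imitating MPC):

```python
import numpy as np

def lqr_gain(A, B, Q, R, iters=500):
    """Discrete-time infinite-horizon LQR gain via fixed-point Riccati iteration."""
    P = Q.copy()
    for _ in range(iters):
        # K = (R + B' P B)^{-1} B' P A
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        # Riccati update for the cost-to-go matrix
        P = Q + A.T @ P @ (A - B @ K)
    return K

# Hypothetical learned latent dynamics z' = A z + B u (double-integrator-like)
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])
Q = np.eye(2)          # quadratic cost on the latent state
R = np.array([[0.1]])  # quadratic cost on the control

K = lqr_gain(A, B, Q, R)

def encoder(x):
    # Placeholder for the learned nonlinear map phi: state -> latent;
    # identity here, a neural network in LaLQR.
    return x

def policy(x):
    # LQR acts in the latent space: u = -K phi(x)
    return -K @ encoder(x)
```

A single policy evaluation is one encoder forward pass plus a matrix-vector product, which is why this route avoids the online nonlinear optimization that makes MPC expensive.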