🤖 AI Summary
Traditional online model predictive control (MPC) suffers from high computational complexity, while explicit MPC often lacks accuracy for complex dynamical systems. To address these limitations, this paper proposes an end-to-end real-time MPC framework based on an encoder-only Transformer architecture. The method employs bidirectional self-attention to model temporal control policies, supporting variable prediction horizons and emitting the full control sequence in a single forward pass. It integrates variable-horizon sampling with a replay buffer and directly optimizes the finite-horizon cost function via automatic differentiation, eliminating reliance on expert demonstrations. In both simulation and real-world vehicle experiments, the proposed approach significantly reduces inference latency while maintaining high solution accuracy and strong generalization across diverse prediction horizons. The results demonstrate substantial improvements in real-time control efficiency and robustness for complex, dynamic systems.
📝 Abstract
Traditional online Model Predictive Control (MPC) methods often suffer from excessive computational complexity, limiting their practical deployment. Explicit MPC mitigates online computational load by pre-computing control policies offline; however, existing explicit MPC methods typically rely on simplified system dynamics and cost functions, restricting their accuracy for complex systems. This paper proposes TransMPC, a novel Transformer-based explicit MPC algorithm capable of generating highly accurate control sequences in real time for complex dynamic systems. Specifically, we formulate the MPC policy as an encoder-only Transformer leveraging bidirectional self-attention, enabling simultaneous inference of entire control sequences in a single forward pass. This design inherently accommodates variable prediction horizons while ensuring low inference latency. Furthermore, we introduce a direct policy optimization framework that alternates between sampling and learning phases. Unlike imitation-based approaches dependent on precomputed optimal trajectories, TransMPC directly optimizes the true finite-horizon cost via automatic differentiation. Random horizon sampling combined with a replay buffer provides independent and identically distributed (i.i.d.) training samples, ensuring robust generalization across varying states and horizon lengths. Extensive simulations and real-world vehicle control experiments validate the effectiveness of TransMPC in terms of solution accuracy, adaptability to varying horizons, and computational efficiency.
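The alternating sampling/learning loop described in the abstract can be sketched as follows. This is a minimal illustrative toy, not the paper's implementation: a linear map stands in for the encoder-only Transformer policy, scalar double-integrator dynamics stand in for the vehicle model, and finite differences stand in for automatic differentiation. All names and constants here are assumptions for illustration. The key ideas it does reproduce are (a) one "forward pass" emitting the whole control sequence, (b) random states and random horizons pushed into a replay buffer, and (c) directly minimizing the true finite-horizon cost rather than imitating precomputed optimal trajectories.

```python
import numpy as np

# Toy double-integrator dynamics and quadratic cost (stand-ins for the paper's
# vehicle model and MPC cost; values are illustrative assumptions).
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q, R = np.eye(2), 0.1 * np.eye(1)
H_MAX = 5  # maximum prediction horizon

def rollout_cost(x0, U):
    """True finite-horizon cost of applying control sequence U from state x0."""
    x, cost = x0, 0.0
    for u in U:
        cost += x @ Q @ x + u @ R @ u
        x = A @ x + B @ u
    return cost + x @ Q @ x  # terminal state cost

def policy(W, x, h):
    """Linear stand-in for the Transformer policy: a single 'forward pass'
    emits all H_MAX controls; only the first h are used for horizon h."""
    return (W @ x).reshape(H_MAX, 1)[:h]

rng = np.random.default_rng(0)
W = 0.01 * rng.standard_normal((H_MAX, 2))
W0 = W.copy()  # keep the untrained policy for comparison
buffer = []

for it in range(200):
    # Sampling phase: a random state and a random horizon go into the buffer.
    buffer.append((rng.uniform(-1, 1, 2), int(rng.integers(1, H_MAX + 1))))

    # Learning phase: draw a sample and descend on the true finite-horizon
    # cost (finite differences here replace automatic differentiation).
    x0, h = buffer[rng.integers(len(buffer))]
    base = rollout_cost(x0, policy(W, x0, h))
    grad, eps = np.zeros_like(W), 1e-4
    for i in range(W.shape[0]):
        for j in range(W.shape[1]):
            Wp = W.copy()
            Wp[i, j] += eps
            grad[i, j] = (rollout_cost(x0, policy(Wp, x0, h)) - base) / eps
    W -= 0.02 * grad

# Evaluate: average full-horizon cost over held-out states.
eval_states = [rng.uniform(-1, 1, 2) for _ in range(20)]
def avg_cost(W):
    return float(np.mean([rollout_cost(x, policy(W, x, H_MAX)) for x in eval_states]))
```

Because the toy cost is convex quadratic in the control sequence and the stand-in policy is linear in its weights, this stochastic descent reliably lowers the average cost relative to the untrained policy; the paper's framework applies the same direct-optimization principle to a nonconvex Transformer policy via backpropagation.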