🤖 AI Summary
This paper investigates the suboptimality of nominal model-predictive linear-quadratic (LQ) control for unknown linear systems, characterizing a fundamental trade-off among the modeling error, the terminal value function error, and the prediction horizon length. A novel perturbation analysis of the Riccati difference equation yields a quantitative relationship between the horizon length and the system's controllability index: a finite horizon exceeding the controllability index suffices to approach infinite-horizon optimal performance, and in many cases a horizon of either one or infinity is preferable, depending on the relative sizes of the two error sources. Building on the resulting suboptimality bound, the paper derives sample complexity and regret guarantees for nominal receding-horizon LQ control in a learning-based setting, showing that an adaptive prediction horizon growing logarithmically in time yields an $O(\log T)$ regret guarantee.
📝 Abstract
This work analyzes how the trade-off between the modeling error, the terminal value function error, and the prediction horizon affects the performance of a nominal receding-horizon linear quadratic (LQ) controller. By developing a novel perturbation result for the Riccati difference equation, we obtain a new performance upper bound suggesting that in many cases the prediction horizon should be set to either one or infinity, depending on the relative magnitudes of the modeling error and the terminal value function error. The result also shows that when an infinite horizon is desired, a finite prediction horizon larger than the controllability index can suffice for near-optimal performance, revealing a close relation between the prediction horizon and controllability. The obtained suboptimality bound is then applied to provide novel sample complexity and regret guarantees for nominal receding-horizon LQ controllers in a learning-based setting. We show that an adaptive prediction horizon that increases as a logarithmic function of time is beneficial for regret minimization.
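For concreteness, the nominal receding-horizon LQ setup referenced above can be sketched as follows. The notation here ($\hat A, \hat B$ for the estimated dynamics, $\hat P$ for the approximate terminal cost, $H$ for the prediction horizon) is standard for this setting but assumed, not taken from the abstract.

```latex
% Riccati difference equation under the nominal model (\hat A, \hat B),
% initialized at the approximate terminal value function \hat P:
P_0 = \hat P, \qquad
P_{k+1} = Q + \hat A^{\top} P_k \hat A
        - \hat A^{\top} P_k \hat B \,\bigl(R + \hat B^{\top} P_k \hat B\bigr)^{-1} \hat B^{\top} P_k \hat A ,
\quad k = 0, \dots, H-1 .

% The receding-horizon controller applies only the first input of the H-step plan:
u_t = -\bigl(R + \hat B^{\top} P_{H-1} \hat B\bigr)^{-1} \hat B^{\top} P_{H-1} \hat A \, x_t .
```

The paper's trade-off then concerns how the mismatch $(\hat A, \hat B)$ vs. the true dynamics and the error $\hat P$ vs. the true infinite-horizon value function propagate through this recursion as $H$ varies.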