AI Summary
This paper addresses the linear quadratic regulator (LQR) control problem for unknown discrete-time linear systems, overcoming the classical limitation that dynamic output feedback relies on asymptotically convergent state observers. We propose a model-free learning control method that is observer-independent. Methodologically, we first establish an intrinsic equivalence between dynamic output-feedback controllers and state-feedback policies; second, we introduce a nonsingular parameterization matrix to achieve performance equivalence; third, we develop an off-policy adaptive dynamic programming framework integrating value iteration and policy iteration, augmented with a model-free stability criterion and a switched iteration scheme. Theoretically, we prove closed-loop stability, algorithmic convergence, and exact LQR optimality. Numerical experiments demonstrate the method's effectiveness and its robustness to system uncertainty and initialization variations.
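The value iteration scheme mentioned above has a classical model-based counterpart: the Riccati difference recursion, iterated to a fixed point. The sketch below shows that counterpart for intuition only; the matrices `A`, `B`, `Q`, `R` are illustrative placeholders, not from the paper, and the paper's actual approach is model-free (it never uses `A` and `B` directly).

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Illustrative system and weights (placeholders, not from the paper).
A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([[0.0], [0.1]])
Q = np.eye(2)          # state weight
R = np.array([[1.0]])  # input weight

def lqr_value_iteration(A, B, Q, R, tol=1e-12, max_iter=10_000):
    """Riccati value iteration:
    P_{k+1} = Q + A'P_k A - A'P_k B (R + B'P_k B)^{-1} B'P_k A."""
    P = np.zeros_like(Q)
    for _ in range(max_iter):
        G = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P_next = Q + A.T @ P @ A - A.T @ P @ B @ G
        if np.linalg.norm(P_next - P) < tol:
            return P_next
        P = P_next
    return P

P = lqr_value_iteration(A, B, Q, R)
# Optimal state-feedback gain, u_k = -K x_k.
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
```

The recursion needs no stabilizing initial policy (it starts from `P = 0`), which is the usual trade-off of value iteration against policy iteration; the fixed point coincides with the discrete algebraic Riccati equation solution.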
Abstract
This paper studies the linear quadratic regulation (LQR) problem of unknown discrete-time linear systems via dynamic output feedback learning control. In contrast to state feedback, the optimality of dynamic output feedback control for solving the LQR problem requires an implicit condition: convergence of the state observer. Moreover, owing to the unknown system matrices and the presence of observer error, it is difficult to analyze the convergence and stability of most existing output feedback learning-based control methods. To tackle these issues, we propose a generalized dynamic output feedback learning control approach with guaranteed convergence, stability, and optimality for solving the LQR problem of unknown discrete-time linear systems. In particular, a dynamic output feedback controller is designed to be equivalent to a state feedback controller. This equivalence is an inherent property that does not require convergence of the state estimate produced by the observer, and it plays a key role in establishing the off-policy learning control approaches. Using value iteration and policy iteration schemes, adaptive dynamic programming based learning control approaches are developed to estimate the optimal feedback control gain. In addition, a model-free stability criterion is provided by finding a nonsingular parameterization matrix, which contributes to establishing a switched iteration scheme. Furthermore, convergence, stability, and optimality analyses of the proposed output feedback learning control approaches are given. Finally, the theoretical results are validated by two numerical examples.
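The policy iteration scheme named in the abstract likewise has a well-known model-based counterpart (Hewer's algorithm): evaluate the current stabilizing gain by solving a discrete Lyapunov equation, then improve the gain, and repeat. A minimal sketch under assumed illustrative matrices is given below; `A` is chosen Schur-stable so that the zero gain is an admissible initial policy. The paper's contribution is to carry out such an iteration off-policy from input-output data, without access to `A` and `B`.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov, solve_discrete_are

# Illustrative system (placeholder, not from the paper); A is Schur-stable,
# so the zero gain K = 0 is an admissible initial stabilizing policy.
A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([[0.0], [0.1]])
Q = np.eye(2)
R = np.array([[1.0]])

K = np.zeros((1, 2))  # initial stabilizing policy
for _ in range(50):
    Acl = A - B @ K
    # Policy evaluation: solve P = Acl' P Acl + Q + K' R K
    # (a discrete Lyapunov equation in P).
    P = solve_discrete_lyapunov(Acl.T, Q + K.T @ R @ K)
    # Policy improvement: greedy gain with respect to the evaluated cost P.
    K_new = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    if np.linalg.norm(K_new - K) < 1e-12:
        break
    K = K_new
```

Policy iteration converges quadratically but requires a stabilizing initial gain, which is exactly why a model-free stability criterion and a switched iteration scheme, as proposed in the paper, are useful for deciding when such a gain is available.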