Optimal Output Feedback Learning Control for Discrete-Time Linear Quadratic Regulation

๐Ÿ“… 2025-03-08
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
This paper addresses the linear quadratic regulator (LQR) control problem for unknown discrete-time linear systems, overcoming the classical limitation that dynamic output feedback relies on asymptotically convergent state observers. We propose a model-free, observer-independent learning control method. Methodologically, we first establish an intrinsic equivalence between dynamic output-feedback controllers and optimal state-feedback policies; second, we introduce a nonsingular parameterization matrix to achieve performance equivalence; third, we develop an off-policy adaptive dynamic programming framework integrating value iteration and policy iteration, augmented with a model-free stability criterion and a switched iteration scheme. Theoretically, we prove closed-loop stability, algorithmic convergence, and exact LQR optimality. Numerical experiments demonstrate the method’s effectiveness and robustness against system uncertainty and initialization variations.

๐Ÿ“ Abstract
This paper studies the linear quadratic regulation (LQR) problem of unknown discrete-time systems via dynamic output feedback learning control. In contrast to state feedback, the optimality of dynamic output feedback control for solving the LQR problem requires an implicit condition on the convergence of the state observer. Moreover, due to unknown system matrices and the existence of observer error, it is difficult to analyze the convergence and stability of most existing output feedback learning-based control methods. To tackle these issues, we propose a generalized dynamic output feedback learning control approach with guaranteed convergence, stability, and optimality performance for solving the LQR problem of unknown discrete-time linear systems. In particular, a dynamic output feedback controller is designed to be equivalent to a state feedback controller. This equivalence relationship is an inherent property that does not require convergence of the state estimate produced by the state observer, and it plays a key role in establishing the off-policy learning control approaches. Using value iteration and policy iteration schemes, adaptive dynamic programming (ADP) based learning control approaches are developed to estimate the optimal feedback control gain. In addition, a model-free stability criterion is provided by finding a nonsingular parameterization matrix, which contributes to establishing a switched iteration scheme. Furthermore, convergence, stability, and optimality analyses of the proposed output feedback learning control approaches are given. Finally, the theoretical results are validated by two numerical examples.
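As background for the value iteration scheme mentioned in the abstract, the classical model-based version iterates the discrete-time Riccati recursion. The sketch below is illustrative only: the paper's method is model-free and output-feedback, whereas this assumes known system matrices (the `A`, `B`, `Q`, `R` values here are made up) and full state access.

```python
import numpy as np

# Hypothetical double-integrator-like system (illustrative values only).
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])
Q = np.eye(2)          # state weighting matrix
R = np.array([[1.0]])  # input weighting matrix

# Value iteration on the discrete-time Riccati equation:
#   P_{k+1} = Q + A'P_k A - A'P_k B (R + B'P_k B)^{-1} B'P_k A
P = np.zeros_like(Q)
for _ in range(2000):
    G = np.linalg.inv(R + B.T @ P @ B)
    P_next = Q + A.T @ P @ A - A.T @ P @ B @ G @ B.T @ P @ A
    if np.max(np.abs(P_next - P)) < 1e-12:
        P = P_next
        break
    P = P_next

# Optimal state-feedback gain for the control law u = -K x.
K = np.linalg.inv(R + B.T @ P @ B) @ B.T @ P @ A
```

Starting from `P = 0`, the iterates converge to the stabilizing solution of the Riccati equation whenever the system is stabilizable and detectable; the paper replaces this model-based recursion with a data-driven estimate of the same quantities.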
Problem

Research questions and friction points this paper is trying to address.

Solves LQR problem for unknown discrete-time systems
Ensures convergence, stability, and optimality in output feedback
Develops model-free stability criterion and learning control approaches
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic output feedback learning control approach
Equivalence between output and state feedback controllers
Model-free stability criterion via nonsingular parameterization
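The policy iteration half of the switched scheme can likewise be sketched in its classical model-based form (a Hewer-style iteration): evaluate the current stabilizing gain via a Lyapunov equation, then improve it. Again, this is purely illustrative; the system matrices and the stabilizing initial gain are assumptions, while the paper estimates the gain model-free from input-output data.

```python
import numpy as np

# Hypothetical system (same illustrative values as above).
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])
Q = np.eye(2)
R = np.array([[1.0]])

def dlyap(Acl, W):
    """Solve the discrete Lyapunov equation X = Acl' X Acl + W by vectorization."""
    n = Acl.shape[0]
    x = np.linalg.solve(np.eye(n * n) - np.kron(Acl.T, Acl.T), W.flatten())
    return x.reshape(n, n)

# Policy iteration requires a stabilizing initial gain (assumed here).
K = np.array([[10.0, 15.0]])
for _ in range(50):
    Acl = A - B @ K
    P = dlyap(Acl, Q + K.T @ R @ K)                        # policy evaluation
    K_next = np.linalg.inv(R + B.T @ P @ B) @ B.T @ P @ A  # policy improvement
    if np.max(np.abs(K_next - K)) < 1e-12:
        K = K_next
        break
    K = K_next
```

Unlike value iteration, this scheme needs a stabilizing initial policy but converges in far fewer iterations, which is one motivation for switching between the two schemes as the paper's iteration mechanism does.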
๐Ÿ”Ž Similar Papers
No similar papers found.
Kedi Xie
School of Automation, Beijing Institute of Technology, Beijing 100081, China; Beijing Institute of Technology Chongqing Innovation Center, Chongqing 401120, China
Martin Guay
Department of Chemical Engineering, Queenโ€™s University, Kingston, ON K7L 3N6, Canada
Shimin Wang
Massachusetts Institute of Technology
Fang Deng
Beijing Institute of Technology
New Energy; Intelligent Information Processing; Intelligent Wearable Systems
Maobin Lu
School of Automation, Beijing Institute of Technology, Beijing 100081, China; Beijing Institute of Technology Chongqing Innovation Center, Chongqing 401120, China