Fast Policy Learning for Linear Quadratic Control with Entropy Regularization

📅 2023-11-23
📈 Citations: 5
Influential: 1
🤖 AI Summary
This work studies optimal policy learning for the infinite-horizon discounted linear quadratic control (LQC) problem with entropy regularization. It proposes two algorithms: Regularized Policy Gradient (RPG) and Iterative Policy Optimization (IPO). Both achieve global linear convergence under exact policy evaluation. Moreover, IPO attains super-linear convergence within a local neighborhood of the optimal policy and in transfer scenarios from a known to a sufficiently close unknown environment, establishing the first super-linear convergence guarantee for LQC policy learning. To the authors' knowledge, this is the first work to incorporate entropy regularization into the LQC policy learning framework, connecting policy gradient methods, iterative optimization, and classical linear control theory. Theoretical analysis and numerical experiments jointly support the algorithms' efficiency, robustness, and transferability.
📝 Abstract
This paper proposes and analyzes two new policy learning methods, regularized policy gradient (RPG) and iterative policy optimization (IPO), for a class of discounted linear-quadratic control (LQC) problems over an infinite time horizon with entropy regularization. Assuming access to exact policy evaluation, both proposed approaches are proven to converge linearly to optimal policies of the regularized LQC. Moreover, the IPO method achieves a super-linear convergence rate once it enters a local region around the optimal policy. Finally, when the optimal policy for an RL problem with a known environment is appropriately transferred as the initial policy to an RL problem with an unknown environment, the IPO method retains a super-linear convergence rate provided the two environments are sufficiently close. The performance of the proposed algorithms is supported by numerical examples.
Problem

Research questions and friction points this paper is trying to address.

Developing fast policy learning methods for entropy-regularized linear quadratic control
Proving linear and super-linear convergence rates for optimal policy discovery
Enhancing policy transfer between known and unknown environment settings
Innovation

Methods, ideas, or system contributions that make the work stand out.

Regularized policy gradient for entropy-controlled LQC
Iterative policy optimization with super-linear convergence
Policy transfer between known and unknown environments
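The entropy-regularized objective described above can be made concrete on a one-dimensional toy instance. The sketch below is an illustrative approximation, not the paper's RPG or IPO method: it runs plain finite-difference gradient descent on a scalar discounted LQC with a Gaussian policy u = -k·x + √s·ε and an entropy bonus, using a closed-form value function in place of the paper's exact policy evaluation. All constants (a, b, q, r, gamma, tau, x0) are made up for illustration.

```python
import math

# Toy scalar entropy-regularized discounted LQC (illustrative constants).
a, b = 0.9, 0.5        # dynamics: x' = a*x + b*u
q, r = 1.0, 0.5        # stage cost: q*x^2 + r*u^2
gamma = 0.9            # discount factor
tau = 0.1              # entropy-regularization weight
x0 = 1.0               # initial state

def P_of(k):
    """Quadratic value coefficient for gain k; needs gamma*(a - b*k)^2 < 1."""
    d = 1.0 - gamma * (a - b * k) ** 2
    assert d > 0, "policy is not stabilizing under the discount factor"
    return (q + r * k * k) / d

def J(k, s):
    """Regularized cost V(x0) = P*x0^2 + c, where c collects the
    exploration-noise cost and the entropy bonus -tau*H(N(0, s))."""
    P = P_of(k)
    entropy = 0.5 * math.log(2.0 * math.pi * math.e * s)
    c = (r * s + gamma * P * b * b * s - tau * entropy) / (1.0 - gamma)
    return P * x0 * x0 + c

# Finite-difference gradient descent on (k, log s); the log-parametrization
# keeps the policy variance s positive.
k, theta = 0.0, 0.0    # theta = log of the policy variance s
lr, eps = 0.02, 1e-5
for _ in range(4000):
    s = math.exp(theta)
    gk = (J(k + eps, s) - J(k - eps, s)) / (2 * eps)
    gt = (J(k, math.exp(theta + eps)) - J(k, math.exp(theta - eps))) / (2 * eps)
    k, theta = k - lr * gk, theta - lr * gt
```

In this scalar setting the learned gain k should match the discounted Riccati solution, and the learned variance exp(theta) the closed-form optimum s* = tau / (2·(r + gamma·b²·P*)), which is one way the paper's broader claim — that entropy regularization preserves the classical LQC structure while shaping the exploration noise — can be checked numerically.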
Xin Guo
Department of Industrial Engineering & Operations Research, University of California, Berkeley, Berkeley, CA 94720, USA
Xinyu Li
Department of Industrial Engineering & Operations Research, University of California, Berkeley, Berkeley, CA 94720, USA
Renyuan Xu
Stanford University
Mathematical Finance, Stochastic Analysis, Generative AI, Reinforcement Learning, Game Theory