🤖 AI Summary
This work studies optimal policy learning for the infinite-horizon discounted linear-quadratic control (LQC) problem with entropy regularization. To address it, we propose two algorithms: Regularized Policy Gradient (RPG) and Iterative Policy Optimization (IPO). Both achieve global linear convergence under exact policy evaluation. Moreover, IPO attains super-linear convergence in a local neighborhood of the optimal policy, and also in transfer settings where the optimal policy of a known environment initializes learning in a nearby unknown one, yielding the first super-linear convergence guarantee for LQC. To our knowledge, this is the first work to incorporate entropy regularization into policy learning for LQC, unifying policy gradient methods, iterative optimization, and classical linear control theory. Theoretical analysis and numerical experiments jointly demonstrate the algorithms' efficiency, robustness, and transferability.
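To make the RPG idea concrete, here is a minimal sketch under assumptions that are not taken from the paper: a Gaussian policy u ~ N(-Kx, Σ) whose entropy term affects only the covariance, exact policy evaluation via a discounted Lyapunov solve, and the standard discounted-LQR gradient formula for the mean gain K (not necessarily the paper's exact RPG update). All problem data (A, B, Q, R, gamma, tau) are hypothetical.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# Hypothetical problem data (not from the paper):
# dynamics x_{t+1} = A x_t + B u_t + noise, stage cost x'Qx + u'Ru,
# discount gamma, entropy-regularization weight tau.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q, R = np.eye(2), np.eye(1)
gamma, tau = 0.9, 0.1
Sigma0 = np.eye(2)  # initial-state covariance

def evaluate(K):
    """Exact policy evaluation of the linear feedback u = -K x: solve the
    discounted Lyapunov equation P = Q + K'RK + gamma (A-BK)' P (A-BK)."""
    Acl = np.sqrt(gamma) * (A - B @ K)
    return solve_discrete_lyapunov(Acl.T, Q + K.T @ R @ K)

def rpg_step(K, lr=1e-3):
    """One policy-gradient step on the mean gain K. With a Gaussian policy,
    the entropy term shifts only the optimal covariance, so the K-gradient
    reduces to the standard discounted-LQR formula."""
    P = evaluate(K)
    Acl = np.sqrt(gamma) * (A - B @ K)
    SigmaK = solve_discrete_lyapunov(Acl, Sigma0)  # discounted state covariance
    grad = 2 * ((R + gamma * B.T @ P @ B) @ K - gamma * B.T @ P @ A) @ SigmaK
    return K - lr * grad

K = np.zeros((1, 2))  # gamma-stabilizing initial gain
for _ in range(500):
    K = rpg_step(K)

# Entropy-optimal Gaussian covariance, up to normalization conventions
# (an assumption here): Sigma* proportional to tau (R + gamma B'PB)^{-1}.
Sigma_star = 0.5 * tau * np.linalg.inv(R + gamma * B.T @ evaluate(K) @ B)
```

The separation exploited here, that the entropy weight tau determines the exploration covariance while the mean gain K follows an unregularized-LQR landscape, is what makes exact policy evaluation reduce to a single Lyapunov solve per step.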
📝 Abstract
This paper proposes and analyzes two new policy learning methods, regularized policy gradient (RPG) and iterative policy optimization (IPO), for a class of discounted linear-quadratic control (LQC) problems over an infinite time horizon with entropy regularization. Assuming access to exact policy evaluation, both approaches are proven to converge linearly to the optimal policy of the regularized LQC problem. Moreover, the IPO method achieves a super-linear convergence rate once it enters a local region around the optimal policy. Finally, when the optimal policy of an RL problem with a known environment is appropriately transferred as the initial policy for an RL problem with an unknown environment, the IPO method retains this super-linear convergence rate provided the two environments are sufficiently close. The performance of the proposed algorithms is illustrated by numerical examples.
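The transfer mechanism can be pictured with a short sketch, again under assumptions that are not the paper's: IPO is rendered as exact policy evaluation followed by a Riccati-style greedy improvement (a Hewer/Kleinman-type update), and the source/target environments, gains, and perturbation size are hypothetical.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

def ipo(A, B, Q, R, gamma, K, iters=20):
    """IPO sketch: exact policy evaluation (discounted Lyapunov solve for P_K),
    then a Riccati-style greedy update of the feedback gain u = -K x."""
    for _ in range(iters):
        Acl = np.sqrt(gamma) * (A - B @ K)
        P = solve_discrete_lyapunov(Acl.T, Q + K.T @ R @ K)
        K = gamma * np.linalg.solve(R + gamma * B.T @ P @ B, B.T @ P @ A)
    return K

# Known "source" environment and a nearby unknown "target" (hypothetical data).
A_src = np.array([[1.0, 0.1], [0.0, 1.0]])
B_src = np.array([[0.0], [0.1]])
A_tgt, B_tgt = A_src + 0.01, B_src  # small model mismatch
Q, R, gamma = np.eye(2), np.eye(1), 0.9

K_src = ipo(A_src, B_src, Q, R, gamma, K=np.zeros((1, 2)))
# Warm-starting with the source-optimal gain places IPO inside the local
# super-linear regime on the target problem when the environments are close.
K_tgt = ipo(A_tgt, B_tgt, Q, R, gamma, K=K_src, iters=5)
```

In this rendering, the warm start matters because the greedy update converges super-linearly only once the iterate is near the target's optimal gain; a sufficiently small model mismatch keeps the transferred gain inside that basin.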