Regret Analysis of Policy Optimization over Submanifolds for Linearly Constrained Online LQG

📅 2024-03-13
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
This paper addresses the online linear-quadratic-Gaussian (LQG) control problem with time-varying, unknown cost matrices under linear structural constraints—such as sparsity—imposed on the controller. Conventional methods fail when such constraints restrict the controller to a submanifold of the parameter space. To overcome this, the authors propose online optimistic Newton on manifold (OONM), the first algorithm integrating Riemannian manifold optimization with an online prediction-correction mechanism, thereby embedding the linear constraints directly into the underlying geometric structure. They develop a dynamic regret analysis framework based on the path length of the minimizer sequence and establish a dynamic regret bound of $O(\sqrt{T})$. Numerical experiments show that OONM outperforms projection-based gradient methods in both convergence speed and closed-loop control performance.
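The summary's geometric idea—optimize over controllers confined to a linear (e.g. sparsity) submanifold, where the retraction back onto the constraint set is just a projection—can be illustrated with a toy sketch. This is not the paper's actual OONM algorithm (which uses a control-theoretic Riemannian metric and a prediction-correction step); here a drifting quadratic cost stands in for the time-varying LQG costs, a diagonal Hessian stands in for second-order information, and all names (`mask`, `newton_step_on_submanifold`, etc.) are illustrative assumptions.

```python
import numpy as np

def masked(K, mask):
    """Project a gain matrix onto the sparsity pattern, i.e. the
    linear constraint defining the submanifold of admissible controllers."""
    return K * mask

def newton_step_on_submanifold(K, grad, hess_diag, mask, lr=1.0):
    """One diagonally-preconditioned Newton-like update restricted to the
    sparsity submanifold: precondition the gradient, step, then retract.
    For a linear constraint, the retraction is simply re-applying the mask."""
    direction = grad / (hess_diag + 1e-8)  # crude Newton direction
    K_next = K - lr * masked(direction, mask)
    return masked(K_next, mask)

# Toy online problem: stage cost f_t(K) = ||K - K_star_t||_F^2 with a
# slowly drifting minimizer, mimicking adversarially varying cost matrices.
rng = np.random.default_rng(0)
mask = (rng.random((2, 3)) < 0.5).astype(float)  # fixed sparsity pattern
K = np.zeros((2, 3))
for t in range(50):
    K_star = masked(0.1 * t * np.ones((2, 3)), mask)  # moving minimizer
    grad = 2.0 * (K - K_star)                         # exact gradient of f_t
    hess_diag = 2.0 * np.ones_like(K)                 # exact diagonal Hessian
    K = newton_step_on_submanifold(K, grad, hess_diag, mask)
```

Because the cost is exactly quadratic here, each full Newton step lands (up to the `1e-8` regularizer) on the current minimizer, so the iterate tracks the drifting target; the paper's regret analysis quantifies exactly this tracking error via the path length of the minimizer sequence.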

📝 Abstract
Recent advances in online optimization and control have provided novel tools to study online linear quadratic regulator (LQR) problems, where cost matrices vary adversarially over time. However, the controller parameterizations of existing works may not satisfy practical conditions, such as sparsity arising from physical connections. In this work, we study online linear quadratic Gaussian problems with a given linear constraint imposed on the controller. Inspired by the recent work of [1], which proposed, for linearly constrained policy optimization of an offline LQR, a second-order method equipped with a Riemannian metric that emerges naturally in the context of optimal control problems, we propose online optimistic Newton on manifold (OONM), which produces an online controller based on predictions of the first- and second-order information of the function sequence. To quantify the proposed algorithm, we leverage the notion of regret, defined as the sub-optimality of its cumulative cost relative to that of a (locally) minimizing controller sequence, and provide a regret bound in terms of the path length of the minimizer sequence. Simulation results are also provided to verify the properties of OONM.
Problem

Research questions and friction points this paper is trying to address.

Online LQG problem with linear constraints on controllers
Regret analysis for policy optimization over submanifolds
Performance quantification using path-length of minimizer sequence
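The path-length-based regret notion referenced above can be written out as follows (notation assumed here for illustration, not taken verbatim from the paper):

```latex
\[
\mathrm{Regret}_T \;=\; \sum_{t=1}^{T} \bigl( f_t(K_t) - f_t(K_t^\star) \bigr),
\qquad
P_T \;=\; \sum_{t=2}^{T} \bigl\| K_t^\star - K_{t-1}^\star \bigr\|,
\]
```

where $K_t^\star$ is a (locally) minimizing controller on the constraint submanifold at time $t$, and the regret bound is expressed in terms of the path length $P_T$ of this minimizer sequence.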
Innovation

Methods, ideas, or system contributions that make the work stand out.

Online Newton on manifold algorithm
Riemannian perspective for optimization
Regret bound with path-length
Ting-Jui Chang
Department of Mechanical and Industrial Engineering, Northeastern University, Boston, MA 02115, USA
Shahin Shahrampour
Assistant Professor, Northeastern University
Optimization and Control · Multi-Agent Systems · Machine Learning · Reinforcement Learning