Identification and Adaptive Control of Markov Jump Systems: Sample Complexity and Regret Bounds

📅 2021-11-13
🏛️ arXiv.org
📈 Citations: 22
Influential: 2
🤖 AI Summary
This paper addresses online adaptive control of unknown Markov jump linear systems (MJS) with quadratic cost minimization. To tackle the challenge of jointly identifying both the mode-dependent system dynamics and the mode transition matrix under time-varying dynamics, the authors establish a joint identification theory from a single trajectory, achieving a sample complexity of $\mathcal{O}(1/\sqrt{T})$. They propose a certainty-equivalent adaptive control framework built on mixing-time arguments, which dispenses with conventional stability assumptions and attains an $\mathcal{O}(\sqrt{T})$ regret bound; under partial prior knowledge of the system, this bound is further improved to $\mathcal{O}(\mathrm{polylog}(T))$. The theoretical results are validated via numerical experiments, offering a paradigm for robust adaptive control of nonstationary dynamical systems.
📝 Abstract
Learning how to effectively control unknown dynamical systems is crucial for intelligent autonomous systems. This task becomes a significant challenge when the underlying dynamics are changing with time. Motivated by this challenge, this paper considers the problem of controlling an unknown Markov jump linear system (MJS) to optimize a quadratic objective. By taking a model-based perspective, we consider identification-based adaptive control for MJSs. We first provide a system identification algorithm for MJS to learn the dynamics in each mode as well as the Markov transition matrix, underlying the evolution of the mode switches, from a single trajectory of the system states, inputs, and modes. Through mixing-time arguments, sample complexity of this algorithm is shown to be $\mathcal{O}(1/\sqrt{T})$. We then propose an adaptive control scheme that performs system identification together with certainty equivalent control to adapt the controllers in an episodic fashion. Combining our sample complexity results with recent perturbation results for certainty equivalent control, we prove that when the episode lengths are appropriately chosen, the proposed adaptive control scheme achieves $\mathcal{O}(\sqrt{T})$ regret, which can be improved to $\mathcal{O}(\mathrm{polylog}(T))$ with partial knowledge of the system. Our proof strategy introduces innovations to handle Markovian jumps and a weaker notion of stability common in MJSs. Our analysis provides insights into system theoretic quantities that affect learning accuracy and control performance. Numerical simulations are presented to further reinforce these insights.
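The identification step the abstract describes (learning each mode's dynamics plus the Markov transition matrix from one trajectory of states, inputs, and modes) can be sketched as follows. This is a minimal illustration, not the paper's algorithm or experimental setup: the 2-mode system, matrices, noise levels, and trajectory length are all hypothetical, and the estimators are plain per-mode least squares and empirical transition counts.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-mode MJS: x_{t+1} = A[m] x_t + B[m] u_t + w_t,
# where the mode m follows a Markov chain with transition matrix T_true.
A = [np.array([[0.6, 0.1], [0.0, 0.5]]), np.array([[0.4, -0.2], [0.1, 0.7]])]
B = [np.array([[1.0], [0.5]]), np.array([[0.8], [1.0]])]
T_true = np.array([[0.9, 0.1], [0.2, 0.8]])

# Roll out a single trajectory with exploratory (random) inputs.
Tlen, n, p = 5000, 2, 1
x, mode = np.zeros(n), 0
X, U, M, Xn = [], [], [], []
for t in range(Tlen):
    u = rng.normal(size=p)
    x_next = A[mode] @ x + B[mode] @ u + 0.01 * rng.normal(size=n)
    X.append(x); U.append(u); M.append(mode); Xn.append(x_next)
    x = x_next
    mode = rng.choice(2, p=T_true[mode])
X, U, M, Xn = map(np.array, (X, U, M, Xn))

# Per-mode least squares: regress x_{t+1} on [x_t; u_t] for samples in mode i.
A_hat, B_hat = [], []
for i in range(2):
    idx = M == i
    Z = np.hstack([X[idx], U[idx]])
    Theta, *_ = np.linalg.lstsq(Z, Xn[idx], rcond=None)
    A_hat.append(Theta[:n].T); B_hat.append(Theta[n:].T)

# Transition matrix from empirical mode-pair counts.
T_hat = np.zeros((2, 2))
for a, b in zip(M[:-1], M[1:]):
    T_hat[a, b] += 1
T_hat /= T_hat.sum(axis=1, keepdims=True)

print(np.max(np.abs(A_hat[0] - A[0])))  # estimation errors shrink with T
print(np.round(T_hat, 2))
```

The mixing-time arguments in the paper make this intuition precise: once the mode chain mixes, each mode collects a constant fraction of the samples, so all mode-wise estimates concentrate at the $\mathcal{O}(1/\sqrt{T})$ rate.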
Problem

Research questions and friction points this paper is trying to address.

Controls unknown Markov jump linear systems to optimize quadratic objectives
Identifies system dynamics and transition matrices from single trajectories
Achieves sublinear regret bounds through adaptive episodic control schemes
Innovation

Methods, ideas, or system contributions that make the work stand out.

Model-based identification of Markov jump system dynamics
Episodic adaptive control using certainty equivalent method
Martingale-based analysis for sample complexity and regret bounds
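The episodic certainty-equivalent scheme listed above can be sketched in a simplified scalar setting. This is a hedged illustration under assumptions not in the paper: the controller below designs a mode-wise LQR gain by plain Riccati iteration, ignoring the mode coupling that the paper's coupled-Riccati certainty-equivalent controller handles, and all numerical values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Scalar 2-mode MJS (hypothetical numbers); true dynamics are unknown to the learner.
a_true, b_true = np.array([0.8, 0.3]), np.array([1.0, 0.7])
T_true = np.array([[0.9, 0.1], [0.3, 0.7]])
q, r = 1.0, 1.0

def lqr_gain(a, b):
    """Scalar infinite-horizon LQR gain via Riccati iteration
    (mode-wise, a simplification of the paper's coupled design)."""
    p = q
    for _ in range(200):
        k = (b * p * a) / (r + b * p * b)
        p = q + a * p * (a - b * k)
    return k

# Episodic certainty-equivalent loop: play the gain designed from current
# estimates, then refit the estimates and double the episode length.
a_hat, b_hat = np.ones(2), np.ones(2)   # crude initial estimates
X, U, M, Xn = [], [], [], []
x, mode, ep_len = 0.0, 0, 64
for episode in range(6):
    K = [lqr_gain(a_hat[i], b_hat[i]) for i in range(2)]
    for t in range(ep_len):
        u = -K[mode] * x + 0.1 * rng.normal()   # small exploration noise
        x_next = a_true[mode] * x + b_true[mode] * u + 0.05 * rng.normal()
        X.append(x); U.append(u); M.append(mode); Xn.append(x_next)
        x, mode = x_next, rng.choice(2, p=T_true[mode])
    Xa, Ua, Ma, Xna = map(np.array, (X, U, M, Xn))
    for i in range(2):                          # refit per-mode dynamics
        idx = Ma == i
        Z = np.column_stack([Xa[idx], Ua[idx]])
        (a_hat[i], b_hat[i]), *_ = np.linalg.lstsq(Z, Xna[idx], rcond=None)
    ep_len *= 2
```

The doubling episode schedule mirrors the paper's choice of episode lengths: longer episodes exploit increasingly accurate estimates, and balancing estimation error against exploitation time yields the $\mathcal{O}(\sqrt{T})$ regret bound.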