🤖 AI Summary
This paper addresses adaptive control of discrete-time nonlinear stochastic systems subject to linearly parameterized uncertainties. Methodologically, it proposes an adaptive control framework based on certainty equivalence learning, combining a family of parameterized feedback controllers with online parameter estimation. The closed-loop analysis is carried out over an informative (information-rich) region of the state space, where a probabilistic stability analysis establishes rigorous, state-dependent stability bounds. The key contribution is the integration of the certainty equivalence principle with parameterized controller design for nonlinear discrete-time stochastic systems, yielding almost sure stability within the informative region. Moreover, when the entire state space is informative, high-probability global stability is guaranteed. The theoretical results provide verifiable stability conditions and explicit convergence-rate bounds, substantially extending the applicability of classical adaptive control to stochastic nonlinear settings.
📝 Abstract
We consider the adaptive control problem for discrete-time, nonlinear stochastic systems with linearly parameterised uncertainty. Assuming access to a parameterised family of controllers that, when the parameter is well-chosen, can stabilise the system in a bounded set within an informative region of the state space, we propose a certainty equivalence learning-based adaptive control strategy, and subsequently derive stability bounds on the closed-loop system that hold with a specified probability. We then show that if the entire state space is informative, and the family of controllers is globally stabilising with appropriately chosen parameters, high-probability stability guarantees can be derived.
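To illustrate the certainty-equivalence idea described above, here is a minimal numerical sketch for a hypothetical scalar system, not the paper's general model: the unknown dynamics enter linearly through a known regressor, the parameter is estimated online by recursive least squares, and the controller is applied as if the current estimate were the true parameter. The system, noise level, and controller gain below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative scalar system (an assumption for this sketch):
#   x_{k+1} = theta * phi(x_k) + u_k + w_k,   w_k ~ N(0, sigma^2)
theta_true = 1.5      # unknown parameter (linearly parameterised uncertainty)
sigma = 0.01          # noise standard deviation
phi = np.sin          # known regressor

def controller(x, theta_hat, a=0.5):
    """Certainty-equivalence feedback: cancel the *estimated*
    uncertainty and place the nominal closed loop at x_{k+1} = a x_k."""
    return -theta_hat * phi(x) + a * x

# Recursive least squares (RLS) for the scalar parameter
theta_hat, P = 0.0, 100.0   # initial estimate and covariance
x = 2.0                      # initial state
for k in range(500):
    u = controller(x, theta_hat)
    w = sigma * rng.standard_normal()
    x_next = theta_true * phi(x) + u + w

    # RLS update: the "measurement" of theta*phi(x) is x_next - u
    r = phi(x)
    y = x_next - u
    K = P * r / (1.0 + r * P * r)
    theta_hat += K * (y - r * theta_hat)
    P -= K * r * P

    x = x_next

print(theta_hat)   # close to theta_true
print(abs(x))      # state settles near the origin (noise floor)
```

Early in the run the state is large, so the regressor phi(x) is informative and the estimate converges quickly; once the uncertainty is well cancelled, the closed loop behaves like the nominal contraction x_{k+1} = 0.5 x_k plus noise. This loosely mirrors the role the informative region plays in the stability analysis, though the paper's guarantees are of course far more general.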