🤖 AI Summary
This work addresses the challenge of modeling unknown, marginally stable nonlinear dynamical systems. Methodologically, it introduces a general learning framework based on spectral filtering, the first to extend spectral filtering to asymmetric, noisy, nonlinear systems with marginally stable modes. The approach combines control-theoretic learnability measures, online convex optimization, and spectral analysis of dynamical systems to design a novel noise-robust filtering algorithm. Theoretically, it establishes asymptotic convergence of the prediction error to zero, with the convergence rate precisely characterized by the proposed learnability metric. Experiments demonstrate that the framework significantly improves modeling accuracy and generalization on complex marginally stable nonlinear systems. By unifying data-driven identification and stability-aware learning, it provides a rigorous, analyzable paradigm for model-based control design.
📝 Abstract
We study the fundamental problem of learning a marginally stable unknown nonlinear dynamical system. We describe an algorithm for this problem, based on the technique of spectral filtering, which learns a mapping from past observations to the next observation using a spectral representation of the system. Using techniques from online convex optimization, we prove vanishing prediction error for any nonlinear dynamical system with finitely many marginally stable modes, at rates governed by a novel quantitative, control-theoretic notion of learnability. The main technical component of our method is a new spectral filtering algorithm for linear dynamical systems, which incorporates past observations and applies to general noisy, marginally stable systems. This significantly generalizes the original spectral filtering algorithm, handling asymmetric dynamics and incorporating noise correction, and is of independent interest.
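To make the spectral filtering primitive concrete, here is a minimal sketch of its core step: the fixed filters are the top eigenvectors of a Hankel matrix, and past inputs are projected onto them to form features for an online predictor. This follows the standard construction from the earlier spectral filtering literature, not the paper's noise-corrected, asymmetric variant; the window length `T`, filter count `k`, and input dimension `d` below are illustrative choices.

```python
# Minimal sketch of the spectral filtering primitive (illustrative only;
# the paper's noise-corrected, asymmetric algorithm is more involved).
import numpy as np

def spectral_filters(T, k):
    """Return the top-k eigenvalues/eigenvectors of the T x T Hankel
    matrix with entries Z[i, j] = 2 / ((i + j)^3 - (i + j)), 1-indexed.
    The eigenvectors act as convolutional filters over the past T inputs."""
    idx = np.arange(1, T + 1)
    s = idx[:, None] + idx[None, :]
    Z = 2.0 / (s**3 - s)
    eigvals, eigvecs = np.linalg.eigh(Z)      # ascending order
    return eigvals[::-1][:k], eigvecs[:, ::-1][:, :k]

def featurize(past_inputs, filters):
    """Project a window of the last T inputs (shape (T, d), most recent
    last) onto each spectral filter, giving a (k, d) feature matrix."""
    return filters.T @ past_inputs

# Example: spectral features for a random input window.
T, k, d = 32, 8, 2
sigma, phi = spectral_filters(T, k)
rng = np.random.default_rng(0)
window = rng.standard_normal((T, d))
features = featurize(window, phi)             # shape (k, d)
```

In the full method, these features would feed an online convex optimization step (e.g., online gradient descent on a regression loss) that learns the map from spectrally filtered history to the next observation.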