🤖 AI Summary
This paper addresses the fundamental question of finite-sample learnability for dynamical systems using only output sequences: *When is learning possible?* We introduce the paradigm of *dynamic learnability*, formally characterizing learnability as a finite-sample prediction problem governed by intrinsic dynamical properties—such as stability, observability, spectral radius, and Lyapunov exponents—rather than statistical assumptions (e.g., i.i.d. or stationarity). Methodologically, we integrate spectral filtering, stochastic process modeling, and stability analysis to avoid explicit system identification; for linear systems, this yields model-free prediction that is uniformly accurate at every time step. Our key contribution is establishing quantitative relationships between learnability and structural system parameters, thereby going beyond classical PAC and online learning assumptions. This provides a rigorous theoretical foundation for learning non-stationary and latent-state dynamical systems.
📝 Abstract
Modern learning systems increasingly interact with data that evolve over time and depend on hidden internal state. We ask a basic question: when is such a dynamical system learnable from observations alone? This paper proposes a research program for understanding learnability in dynamical systems through the lens of next-token prediction. We argue that learnability in dynamical systems should be studied as a finite-sample question, grounded in the properties of the underlying dynamics rather than the statistical properties of the resulting sequence. To this end, we give a formulation of learnability for stochastic processes induced by dynamical systems, focusing on guarantees that hold uniformly at every time step after a finite burn-in period. This leads to a notion of dynamic learnability that captures how the structure of a system, such as stability, mixing, observability, and spectral properties, governs the number of observations required before reliable prediction becomes possible. We illustrate the framework in the case of linear dynamical systems, showing that accurate prediction can be achieved after finite observation without system identification, by leveraging improper methods based on spectral filtering. We survey the relationship between learning in dynamical systems and classical PAC, online, and universal prediction theories, and suggest directions for studying nonlinear and controlled systems.
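To make the "prediction without identification" idea concrete, here is a minimal sketch of improper prediction via spectral filtering: past observations are convolved with the top eigenvectors of a fixed Hankel matrix, and a single linear regression over these filtered features predicts the next output, with no attempt to recover the system matrices. The window length, filter count, and the stable two-state example system below are illustrative assumptions, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)

T, k = 32, 8  # window length and number of filters (illustrative choices)

# Fixed Hankel matrix Z_{ij} = 2 / ((i+j)^3 - (i+j)); its top eigenvectors
# serve as convolutional filters over the recent observation window.
i = np.arange(1, T + 1)
S = i[:, None] + i[None, :]
Z = 2.0 / (S**3 - S)
eigvals, eigvecs = np.linalg.eigh(Z)              # ascending eigenvalues
filters = eigvecs[:, -k:] * eigvals[-k:] ** 0.25  # eigenvalue-scaled filters

# Simulate a stable latent linear system observed only through scalar y_t.
A = np.array([[0.9, 0.2], [0.0, 0.8]])            # spectral radius < 1
C = np.array([1.0, -0.5])
x = np.zeros(2)
ys = []
for _ in range(600):
    x = A @ x + 0.1 * rng.standard_normal(2)      # process noise
    ys.append(C @ x)
ys = np.array(ys)

def feats(t):
    """Filtered past window (most recent first) plus the last observation."""
    window = ys[t - T:t][::-1]
    return np.concatenate([filters.T @ window, [ys[t - 1]]])

train, test = range(T, 400), range(400, 600)
Phi = np.array([feats(t) for t in train])
w, *_ = np.linalg.lstsq(Phi, ys[list(train)], rcond=None)

pred = np.array([feats(t) @ w for t in test])
mse = np.mean((pred - ys[list(test)]) ** 2)
baseline = np.mean(ys[list(test)] ** 2)           # predict-zero baseline
print(mse < baseline)
```

The predictor is "improper" in that its hypothesis class (linear functions of fixed spectral features) need not contain the true system, yet it tracks the output after a finite burn-in of T observations.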