🤖 AI Summary
This work addresses the lack of theoretical foundations for purely data-driven approaches to smoothing and forecasting in dynamical systems. It establishes, for the first time, a universal approximation theorem tailored to such tasks, rigorously proving the existence and approximability of the underlying operator mappings learned from data. By introducing a continuous-time neural operator architecture and combining an analysis of the existence of the target mapping with an analysis of the operator model's approximation properties, the study constructs a comprehensive theoretical framework. Experimental validation on canonical dynamical systems, including Lorenz '63, Lorenz '96, and Kuramoto–Sivashinsky, illustrates the theoretical results, thereby providing a solid foundation for data-driven modeling of dynamical systems.
📝 Abstract
Machine learning has opened new frontiers in purely data-driven algorithms for data assimilation in, and forecasting of, dynamical systems; the resulting methods show some promise. However, in contrast to model-driven algorithms, the analysis of these data-driven methods is poorly developed. In this paper we address this issue, developing a theory to underpin data-driven methods for the smoothing problems arising in data assimilation and for forecasting. The theoretical framework relies on two key components: (i) establishing the existence of the mapping to be learned; (ii) the properties of the operator-learning architecture used to approximate this mapping. By studying these two components in conjunction, we establish the first universal approximation theorem for purely data-driven algorithms for both smoothing and forecasting of dynamical systems. We work in the continuous-time setting, hence deploying neural operator architectures. The theoretical results are illustrated with experiments on the Lorenz '63, Lorenz '96 and Kuramoto–Sivashinsky dynamical systems.
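To make the experimental setting concrete, here is a minimal sketch of simulating the Lorenz '63 system, the smallest of the three test problems. This uses the standard parameter values (σ = 10, ρ = 28, β = 8/3); the initial condition, time span, and integrator tolerances are illustrative assumptions, not the paper's actual experimental configuration. Trajectories such as this one would supply the ground-truth states from which noisy observations for smoothing and forecasting are generated.

```python
import numpy as np
from scipy.integrate import solve_ivp

def lorenz63(t, u, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz '63 ODE with standard parameters."""
    x, y, z = u
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

# Integrate one trajectory (initial condition and horizon are illustrative).
t_span = (0.0, 10.0)
t_eval = np.linspace(*t_span, 1001)
sol = solve_ivp(lorenz63, t_span, [1.0, 1.0, 1.0],
                t_eval=t_eval, rtol=1e-8, atol=1e-10)
trajectory = sol.y.T  # array of shape (1001, 3): states x, y, z over time
```

In a continuous-time operator-learning setup, such trajectories (and observations derived from them) are treated as functions of time rather than discrete sequences, which is what motivates the neural operator architectures used in the paper.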