🤖 AI Summary
This work addresses the dual challenges of dynamical fidelity and cross-domain interpretability in nonlinear time-series forecasting. We propose the Universal Delay Embedding (UDE) framework, which unifies Takens' embedding theorem with Koopman operator theory: raw time series are first mapped into Hankel-matrix subspaces, treated as structured 2D inputs, and then processed by a self-attention encoder that learns dynamics-invariant representations. By coupling delay embedding with deep image modeling, UDE enables end-to-end linearized prediction while preserving the underlying system's topological and dynamical structure, enhancing both interpretability and generalization. Evaluated on multiple benchmarks and real-world climate datasets, UDE achieves an average reduction of over 20% in mean squared error and strong fine-tuning performance. These results support its viability and practicality as a universal foundation model for time-series analysis.
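The delay-embedding step described above can be sketched in a few lines: a 1D series is stacked into a Hankel matrix whose columns are delay vectors, which UDE then treats as a 2D input. This is a minimal illustration of the standard Takens construction, not the UDE implementation; the function and parameter names (`hankel_embedding`, `window`, `delay`) are illustrative.

```python
import numpy as np

def hankel_embedding(x, window, delay=1):
    """Delay-embed a 1D time series into a Hankel matrix.

    Column t is the delay vector
    [x[t], x[t+delay], ..., x[t+(window-1)*delay]],
    as in Takens' embedding theorem. Names are illustrative,
    not taken from the UDE codebase.
    """
    x = np.asarray(x)
    n_cols = len(x) - (window - 1) * delay
    if n_cols <= 0:
        raise ValueError("series too short for this window/delay")
    # Row i holds the series shifted by i*delay steps, giving the
    # constant-anti-diagonal Hankel structure.
    return np.stack(
        [x[i * delay : i * delay + n_cols] for i in range(window)], axis=0
    )

H = hankel_embedding(np.arange(6.0), window=3)
print(H)
# [[0. 1. 2. 3.]
#  [1. 2. 3. 4.]
#  [2. 3. 4. 5.]]
```

Each column of `H` is one point of the reconstructed state-space trajectory; the full matrix is the 2D "image" from which patches are taken.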
📝 Abstract
This study introduces Universal Delay Embedding (UDE), a pretrained foundation model for time-series forecasting that integrates delay-embedding representation with Koopman operator prediction in a principled way. Leveraging Takens' embedding theorem, UDE constructs two-dimensional subspace patches from Hankel matrices of the observed data, theoretically preserving the dynamical and topological properties of the underlying system. These patches are treated as images, so they can be processed efficiently with advanced deep learning techniques; computationally, they serve as tokens for a self-attention encoder, enabling accurate prediction of nonlinear time series by a finite-dimensional Koopman operator acting linearly in a latent space. Extensive evaluations across diverse benchmarks and real-world climate datasets demonstrate an average reduction of over 20% in mean squared error versus state-of-the-art foundation models, alongside superior generalization in fine-tuning scenarios. In particular, the learned patch representations and Koopman prediction exhibit exceptional interpretability, consistently identifying topologically informative subspaces and robustly encoding domain-invariant dynamics. These results establish UDE as a scalable, interpretable framework for universal time-series modeling and forecasting with broad scientific and industrial applicability.
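The "finite-dimensional Koopman operator acting linearly in a latent space" can be illustrated with the standard DMD-style least-squares estimator: given latent snapshots z_t (here generated by a toy linear map standing in for encoder outputs), fit K so that z_{t+1} ≈ K z_t, then forecast by repeated application of K. This is a hedged sketch of the general technique, not the UDE training procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy latent trajectory driven by a known linear map (a planar rotation),
# standing in for latent states produced by the self-attention encoder.
theta = 0.1
K_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
Z = np.empty((2, 200))
Z[:, 0] = rng.standard_normal(2)
for t in range(199):
    Z[:, t + 1] = K_true @ Z[:, t]

# Fit the finite-dimensional Koopman operator from snapshot pairs via
# least squares: K_hat = Z_future @ pinv(Z_past) (the DMD-style estimator).
K_hat = Z[:, 1:] @ np.linalg.pinv(Z[:, :-1])

# Multi-step forecasting is then purely linear: repeated application of K_hat.
z = Z[:, 100].copy()
for _ in range(10):
    z = K_hat @ z
print(np.allclose(z, Z[:, 110], atol=1e-8))
```

Because the toy latent dynamics are exactly linear, the estimator recovers the true operator and the 10-step linear rollout matches the trajectory; in UDE the encoder is trained so that real nonlinear dynamics become approximately linear in this sense.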