🤖 AI Summary
Data-driven models of dynamical systems often generalize poorly, particularly in the absence of structural priors: extrapolation to unexplored regions of state space (e.g., unseen basins of attraction) remains challenging.
Method: We propose a multi-trajectory joint training paradigm for reservoir computing (RC) that enables, for the first time, strong generalization across distinct basins of attraction without explicit dynamical assumptions or prior knowledge; training requires trajectories from only a single basin.
Contribution/Results: Evaluated on multistable systems, our approach achieves high-accuracy prediction of dynamics in entirely unseen basins. It fundamentally extends RC's generalization capability beyond its traditional limits, achieving prior-free, cross-basin, and data-efficient extrapolation, and establishes a new paradigm for interpretable modeling and long-term forecasting of complex nonlinear systems.
📝 Abstract
Machine learning techniques offer an effective approach to modeling dynamical systems solely from observed data. However, without explicit structural priors -- built-in assumptions about the underlying dynamics -- these techniques typically struggle to generalize to aspects of the dynamics that are poorly represented in the training data. Here, we demonstrate that reservoir computing (RC) -- a simple, efficient, and versatile machine learning framework often used for data-driven modeling of dynamical systems -- can generalize to unexplored regions of state space without explicit structural priors. First, we describe a multiple-trajectory training scheme for reservoir computers that supports training across a collection of disjoint time series, enabling effective use of available training data. Then, applying this training scheme to multistable dynamical systems, we show that RCs trained on trajectories from a single basin of attraction can achieve out-of-domain generalization by capturing system behavior in entirely unobserved basins.
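To make the multiple-trajectory training scheme concrete, here is a minimal sketch of how such joint training could look for a standard echo state network. All dimensions, hyperparameters, and function names below are illustrative assumptions, not details from the paper: the reservoir state is reset at the start of each disjoint trajectory, a washout transient is discarded per trajectory, and the pooled (state, next-step target) pairs from all trajectories enter a single ridge-regression readout fit.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical echo-state-network hyperparameters (not from the paper).
N_RES, N_IN = 200, 3          # reservoir size, input/output dimensionality
SPECTRAL_RADIUS, LEAK = 0.9, 0.3
WASHOUT = 100                 # transient steps discarded per trajectory
RIDGE = 1e-6                  # Tikhonov regularization strength

# Random reservoir: input weights, and recurrent weights rescaled to the
# desired spectral radius (a standard echo state network construction).
W_in = rng.uniform(-0.5, 0.5, (N_RES, N_IN))
W = rng.normal(size=(N_RES, N_RES))
W *= SPECTRAL_RADIUS / np.max(np.abs(np.linalg.eigvals(W)))

def run_reservoir(u_seq):
    """Drive the reservoir from rest with an input sequence; return all states."""
    r = np.zeros(N_RES)
    states = []
    for u in u_seq:
        r = (1 - LEAK) * r + LEAK * np.tanh(W_in @ u + W @ r)
        states.append(r.copy())
    return np.array(states)

def train_multi_trajectory(trajectories):
    """Fit one readout by ridge regression pooled over disjoint trajectories.

    Each trajectory restarts the reservoir from rest and contributes its
    post-washout (state, next-step target) pairs to a single joint fit.
    """
    R, Y = [], []
    for traj in trajectories:
        states = run_reservoir(traj[:-1])   # state r_t after seeing input traj[t]
        R.append(states[WASHOUT:])          # drop the per-trajectory transient
        Y.append(traj[1 + WASHOUT:])        # one-step-ahead targets
    R, Y = np.vstack(R), np.vstack(Y)
    # Ridge regression: W_out = Y^T R (R^T R + beta I)^{-1}
    return Y.T @ R @ np.linalg.inv(R.T @ R + RIDGE * np.eye(N_RES))

# Toy usage: two disjoint trajectories (here just noise, as a placeholder
# for time series sampled from a single basin of attraction).
trajs = [rng.normal(size=(300, N_IN)) for _ in range(2)]
W_out = train_multi_trajectory(trajs)
print(W_out.shape)  # (3, 200)
```

In closed-loop forecasting, the trained readout `W_out @ r` would be fed back as the next input; the point of the pooled fit above is simply that many short, disjoint series can train one readout without stitching them into a fictitious continuous trajectory.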