🤖 AI Summary
Environmental distribution shifts degrade the generalization performance of data-driven dynamical system models, yet existing methods address this only through explicit environment annotations, which are often unavailable in practice. This work proposes DynaInfer, the first stable learning framework for dynamical systems that operates without environment labels. DynaInfer implicitly infers each data point's environment from the prediction errors of a fixed neural network, then formulates an alternating optimization objective that jointly solves unsupervised environment assignment and model parameter estimation, with theoretical convergence guarantees. Extensive experiments on canonical dynamical systems, including the Lorenz and Van der Pol systems as well as neural-ODE models, show that DynaInfer rapidly and accurately recovers the ground-truth environment structure and achieves significantly better out-of-distribution generalization than state-of-the-art methods. Notably, it retains its advantage even when environment labels are available.
📝 Abstract
Data-driven methods offer efficient and robust solutions for analyzing complex dynamical systems but rely on the assumption of i.i.d. data, driving the development of generalization techniques for handling environmental differences. These techniques, however, are limited by their dependence on environment labels, which are often unavailable during training due to data acquisition challenges, privacy concerns, and environmental variability, particularly in large public datasets and privacy-sensitive domains. In response, we propose DynaInfer, a novel method that infers environment assignments directly from data by analyzing the prediction errors of a fixed neural network within each training round. We prove that our algorithm effectively solves the resulting alternating optimization problem in unlabeled scenarios and validate it through extensive experiments across diverse dynamical systems. Results show that DynaInfer outperforms existing environment assignment techniques, converges rapidly to the true labels, and achieves superior performance even when environment labels are available.
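The alternating scheme described above, assigning each sample to the environment whose current model predicts it with the lowest error and then re-estimating each environment's parameters on its assigned samples, can be sketched in a few lines. This is a minimal illustrative sketch, not the paper's implementation: the name `dynainfer_step`, the least-squares `fit` routine, and the per-sample error measure are all assumptions for the example.

```python
import numpy as np

def dynainfer_step(X, Y, models, fit):
    """One hypothetical alternating-optimization round:
    1) assign each (x, y) sample to the environment whose current
       (fixed) model predicts it with the lowest error;
    2) refit each environment's model on its assigned samples."""
    # Prediction error of every fixed model on every sample: shape (E, n)
    errors = np.stack([np.linalg.norm(m(X) - Y, axis=1) for m in models])
    assign = errors.argmin(axis=0)  # unsupervised environment assignment
    # Re-estimate parameters per inferred environment (keep old model if empty)
    new_models = [
        fit(X[assign == e], Y[assign == e]) if np.any(assign == e) else models[e]
        for e in range(len(models))
    ]
    return assign, new_models
```

On a noiseless toy problem with two linear environments (e.g. y = 2x versus y = -2x) and roughly initialized models, a single round of this step already recovers the true assignment, which mirrors the rapid label convergence reported in the experiments.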