🤖 AI Summary
Modeling chaotic dynamical systems from noisy time-series data poses a fundamental challenge: balancing short-term prediction accuracy with long-term preservation of invariant structures. To address this, we propose the Weak-Penalty NODE (WP-NODE), a framework that incorporates the weak-form integral residual as a regularizer, jointly optimized with the strong-form differential-equation loss via end-to-end training. This dual-objective formulation enhances both numerical robustness and physical consistency. Evaluated on high-dimensional chaotic benchmarks, including the Lorenz-96 and Kuramoto–Sivashinsky systems, WP-NODE significantly outperforms existing neural ODE and deep learning approaches. Under strong noise (SNR ≤ 10 dB), it maintains high short-term forecasting accuracy while faithfully recovering attractor geometry and statistical invariants (e.g., power spectra, Lyapunov exponents). These results demonstrate WP-NODE's noise resilience and long-term dynamic fidelity, establishing a new state of the art for physics-informed learning of chaotic systems from corrupted observations.
📝 Abstract
Accurate forecasting of complex high-dimensional dynamical systems from observational data is essential for many applications across science and engineering. A key challenge, however, is that real-world measurements are often corrupted by noise, which severely degrades the performance of data-driven models. In particular, for chaotic dynamical systems, where small errors amplify rapidly, it is difficult to identify a data-driven model from noisy data that achieves short-term accuracy while preserving long-term invariant properties. In this paper, we propose the use of the weak formulation as a complementary approach to the classical strong formulation of data-driven time-series forecasting models. Specifically, we focus on the neural ordinary differential equation (NODE) architecture. Unlike the standard strong formulation, which relies on the discretization of the NODE followed by optimization, the weak formulation constrains the model using a set of integrated residuals over temporal subdomains. While such a formulation alone yields an effective NODE model, we discover that the performance of a NODE can be further enhanced by employing this weak formulation as a penalty alongside classical strong-formulation-based learning. Through numerical demonstrations, we illustrate that our proposed training strategy, which we term the Weak-Penalty NODE (WP-NODE), achieves state-of-the-art forecasting accuracy and exceptional robustness across benchmark chaotic dynamical systems.
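The contrast between the two formulations can be sketched on a toy problem. The key property of the weak form is that, after integration by parts against a test function that vanishes on the subdomain boundary, the (possibly noisy) data is never differentiated. The scalar ODE, the polynomial bump test function, and all function names below are illustrative assumptions for exposition, not the authors' implementation:

```python
import numpy as np

def trapezoid(f, t):
    """Composite trapezoidal rule for samples f on grid t."""
    return np.sum((f[1:] + f[:-1]) * np.diff(t)) / 2.0

def strong_loss(a, t, x):
    """Strong-form loss: finite-difference dx/dt compared against the
    model vector field a*x. Differentiating the data is noise-sensitive."""
    dxdt = np.gradient(x, t)
    return np.mean((dxdt - a * x) ** 2)

def weak_residual(a, t, x):
    """Weak-form residual of dx/dt = a*x on the subdomain [t0, t1]:
    integral of phi*(x' - a*x). Because phi vanishes at both endpoints,
    integration by parts turns this into
        -integral(phi' * x) - a * integral(phi * x),
    so x is integrated, never differentiated."""
    t0, t1 = t[0], t[-1]
    phi = (t - t0) ** 2 * (t1 - t) ** 2      # bump test function, phi(t0)=phi(t1)=0
    dphi = 2 * (t - t0) * (t1 - t) ** 2 - 2 * (t - t0) ** 2 * (t1 - t)
    return -trapezoid(dphi * x, t) - a * trapezoid(phi * x, t)

def wp_loss(a, t, x, lam=1.0):
    """Dual objective in the spirit of WP-NODE: strong-form loss plus a
    weak-form penalty weighted by lam (lam is an assumed hyperparameter)."""
    return strong_loss(a, t, x) + lam * weak_residual(a, t, x) ** 2

# Exact trajectory of dx/dt = a*x for the true parameter a = -0.5.
t = np.linspace(0.0, 2.0, 201)
x = np.exp(-0.5 * t)

# The combined loss is near zero at the true parameter and larger otherwise.
loss_true, loss_wrong = wp_loss(-0.5, t, x), wp_loss(0.0, t, x)
```

In a full NODE setting, `a*x` would be replaced by a neural vector field and both terms would be minimized jointly by gradient descent; the sketch only isolates how the two residuals are formed.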