CODE: A global approach to ODE dynamics learning

📅 2025-11-19
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the limited generalization and long-term prediction capability of ordinary differential equation (ODE) dynamics modeling under sparse and noisy time-series data, this paper proposes CODE: a method that employs arbitrary polynomial chaos expansion (aPCE) to globally model the ODE right-hand side with orthogonal polynomials, in place of unstructured fits based on neural networks or kernel functions. This design endows the model with strong regularization and physical consistency, significantly enhancing extrapolation to unseen initial conditions and robustness in long-horizon forecasting under data scarcity and noise corruption. CODE integrates the aPCE representation, ODE-constrained learning, and optimization strategies tailored to sparse time-series observations, and provides a reproducible, robust training protocol. Experiments on the Lotka–Volterra system demonstrate that CODE consistently outperforms state-of-the-art methods, including NeuralODE and KernelODE, across diverse noise levels, broad initial-condition domains, and extended prediction horizons.

📝 Abstract
Ordinary differential equations (ODEs) are a conventional way to describe the observed dynamics of physical systems. Scientists typically hypothesize about dynamical behavior, propose a mathematical model, and compare its predictions to data. However, modern computing and algorithmic advances now enable purely data-driven learning of governing dynamics directly from observations. In data-driven settings, one learns the ODE's right-hand side (RHS). Dense measurements are often assumed, yet high temporal resolution is typically both cumbersome and expensive. Consequently, one usually has only sparsely sampled data. In this work we introduce ChaosODE (CODE), a Polynomial Chaos ODE Expansion in which we use an arbitrary Polynomial Chaos Expansion (aPCE) for the ODE's right-hand side, resulting in a global orthonormal polynomial representation of dynamics. We evaluate the performance of CODE in several experiments on the Lotka-Volterra system, across varying noise levels, initial conditions, and predictions far into the future, even on previously unseen initial conditions. CODE exhibits remarkable extrapolation capabilities even when evaluated under novel initial conditions and shows advantages compared to well-examined methods using neural networks (NeuralODE) or kernel approximators (KernelODE) as the RHS representer. We observe that the high flexibility of NeuralODE and KernelODE degrades extrapolation capabilities under scarce data and measurement noise. Finally, we provide practical guidelines for robust optimization of dynamics-learning problems and illustrate them in the accompanying code.
Problem

Research questions and friction points this paper is trying to address.

Learning ODE dynamics from sparsely sampled data with noise
Improving extrapolation capabilities for unseen initial conditions
Overcoming limitations of neural networks in dynamics learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Polynomial Chaos Expansion for ODE right-hand side
Global orthonormal polynomial representation of dynamics
Robust optimization guidelines for dynamics learning
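The core idea above, representing the ODE right-hand side as a linear combination of polynomial basis functions and fitting the coefficients from trajectory data, can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: it uses plain degree-2 monomials as a stand-in for the orthonormal aPCE basis, estimates derivatives by finite differences instead of ODE-constrained learning, and assumes illustrative Lotka–Volterra parameters.

```python
import numpy as np

# Ground-truth Lotka-Volterra system (parameter values are illustrative)
a, b, c, d = 1.0, 0.5, 1.0, 0.5

def lv(z):
    x, y = z
    return np.array([a * x - b * x * y, -c * y + d * x * y])

# Simulate a trajectory with classical RK4
def rk4(f, z0, dt, n):
    zs = [np.asarray(z0, float)]
    for _ in range(n):
        z = zs[-1]
        k1 = f(z)
        k2 = f(z + dt / 2 * k1)
        k3 = f(z + dt / 2 * k2)
        k4 = f(z + dt * k3)
        zs.append(z + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4))
    return np.array(zs)

dt, n = 0.01, 2000
Z = rk4(lv, [2.0, 1.0], dt, n)

# Central-difference estimates of dZ/dt at interior points
dZ = (Z[2:] - Z[:-2]) / (2 * dt)
Zm = Z[1:-1]

# Polynomial features up to degree 2: a monomial stand-in for the
# orthonormal aPCE basis used in the paper
def features(z):
    x, y = z[:, 0], z[:, 1]
    return np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])

# Least-squares fit of RHS coefficients, one state dimension per column
Phi = features(Zm)
coef, *_ = np.linalg.lstsq(Phi, dZ, rcond=None)
```

With clean, densely sampled data the fit recovers the true dynamics almost exactly (e.g. the coefficient of `x` in the first equation is close to `a = 1.0` and the `x*y` coefficient close to `-b = -0.5`); the paper's contribution is making such a global polynomial representation work under the much harder regime of sparse, noisy observations.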
Nils Wildt
Institute for Modelling Hydraulic and Environmental Systems, University of Stuttgart
D. Tartakovsky
Department of Energy Science and Engineering, Stanford University
S. Oladyshkin
Institute for Modelling Hydraulic and Environmental Systems, University of Stuttgart
Wolfgang Nowak
Stochastic Modelling of Hydrosystems, Universität Stuttgart
simulation science, hydrogeology, geostatistics, data integration, modeling & simulation