🤖 AI Summary
This work addresses the limited robustness of neural ordinary differential equation (ODE) models under perturbations of their control parameters. We propose an iterative training framework based on min-max optimization, operating within an infinite-dimensional Banach control subspace. The method combines a flat-minima-inspired robustness objective with projected gradient descent, introducing training points sequentially while restricting parameter update directions to preserve previously learned knowledge. Crucially, it embeds a nonconvex-nonconcave functional optimization problem into the “Tuning without Forgetting” paradigm, enabling explicit robust modeling against parameter disturbances. Simulations demonstrate that the approach substantially improves generalization and stability under unseen control perturbations.
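To make the min-max structure concrete, here is a minimal PyTorch sketch of one robust training step: an inner loop ascends the loss over a perturbation of the control parameters within a small ball (a finite-dimensional stand-in for the paper's infinite-dimensional Banach control subspace), and the outer step descends on the resulting worst-case loss. The names `loss_fn`, `u`, `eps`, and the choice of an ℓ∞ ball are illustrative assumptions, not the paper's exact formulation.

```python
import torch

def worst_case_perturbation(loss_fn, u, eps=0.1, steps=5, lr=0.05):
    # Inner maximization: gradient ascent on the loss over a
    # perturbation `delta` of the control parameters, projected back
    # onto an l-infinity ball of radius `eps` after every step.
    delta = torch.zeros_like(u, requires_grad=True)
    for _ in range(steps):
        loss = loss_fn(u + delta)
        (grad,) = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta += lr * grad          # ascend: increase the loss
            delta.clamp_(-eps, eps)     # project onto the eps-ball
    return delta.detach()

def robust_training_step(loss_fn, u, optimizer, eps=0.1):
    # Outer minimization: one descent step on the loss evaluated at
    # the worst-case perturbed control found by the inner loop.
    delta = worst_case_perturbation(loss_fn, u, eps)
    optimizer.zero_grad()
    loss_fn(u + delta).backward()
    optimizer.step()
```

In a full training loop, `u` would hold the control parameters of the Neural ODE and `loss_fn` the trajectory-matching loss obtained by integrating the dynamics; the perturbation radius plays the role of the flat-minima neighborhood.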
📝 Abstract
In this paper, we propose an iterative training algorithm for Neural ODEs that yields models resilient to control (parameter) disturbances. The method builds on our earlier work, Tuning without Forgetting: like it, the algorithm introduces training points sequentially and updates the parameters on new data within the set of parameters that do not decrease performance on the previously learned training points. The key difference is that, inspired by the concept of flat minima, we solve a minimax problem for a non-convex non-concave functional over an infinite-dimensional control space. We develop a projected gradient descent algorithm on the space of parameters, which admits the structure of an infinite-dimensional Banach subspace. We show through simulations that this formulation enables the model to learn new data points effectively while gaining robustness against control disturbances.
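The "tuning without forgetting" constraint can likewise be illustrated with a hedged finite-dimensional sketch: before taking a step on a new training point, the candidate update is projected onto the orthogonal complement of the loss gradients stored for previously learned points, so that, to first order, the step leaves their performance unchanged. The name `prior_grads` and the QR-based projection are assumptions for illustration; the paper carries out the analogous projection over an infinite-dimensional Banach subspace.

```python
import torch

def project_out_prior_directions(g_new, prior_grads):
    # Orthonormalize the stored gradients of previously learned points
    # and remove g_new's component in their span: a first-order proxy
    # for updating only along directions that preserve past performance.
    if not prior_grads:
        return g_new
    G = torch.stack(prior_grads, dim=1)   # shape (dim, k)
    Q, _ = torch.linalg.qr(G)             # orthonormal basis of span(G)
    return g_new - Q @ (Q.T @ g_new)

# Example: the projected direction is orthogonal to every prior gradient.
g_new = torch.randn(10)
prior = [torch.randn(10), torch.randn(10)]
step = project_out_prior_directions(g_new, prior)
print(torch.stack(prior) @ step)          # ~ zeros, up to float error
```

Combined with the worst-case perturbation above, one would project the robust gradient before applying it, which is the finite-dimensional analogue of restricting updates to the non-forgetting subspace.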