Control Disturbance Rejection in Neural ODEs

📅 2025-09-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the insufficient robustness of neural ordinary differential equation (ODE) models under control parameter perturbations. We propose an iterative training framework based on min-max optimization, operating within an infinite-dimensional Banach control subspace. The method integrates a flat-minima-inspired constraint mechanism with projected gradient descent, progressively incorporating perturbed data while restricting parameter update directions to preserve previously learned knowledge. Crucially, we embed nonconvex–nonconcave functional optimization into the “tuning without forgetting” paradigm, enabling explicit robust modeling against parameter disturbances. Experiments demonstrate that our approach significantly improves model generalization and stability under unseen control perturbations, outperforming existing robust differential equation learning methods.

📝 Abstract
In this paper, we propose an iterative training algorithm for Neural ODEs that yields models resilient to control (parameter) disturbances. The method builds on our earlier work, Tuning without Forgetting: as in that work, training points are introduced sequentially, and the parameters are updated on new data within the space of parameters that does not decrease performance on previously learned training points. The key difference is that, inspired by the concept of flat minima, we solve a minimax problem for a non-convex non-concave functional over an infinite-dimensional control space. We develop a projected gradient descent algorithm on the space of parameters, which admits the structure of an infinite-dimensional Banach subspace. We show through simulations that this formulation enables the model to learn new data points effectively while gaining robustness against control disturbances.
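Schematically, and in our own notation rather than the paper's, the training step for each newly introduced point can be read as a robust inner-outer optimization with a constrained update direction:

```latex
\min_{u \in \mathcal{U}} \; \max_{\|\delta\| \le \varepsilon} \; J_{\text{new}}(u + \delta),
\qquad
u \leftarrow u - \eta \, \Pi_{\mathcal{K}}\!\left( \nabla_u J_{\text{new}}(u + \delta^\star) \right),
```

where $\delta^\star$ is the worst-case disturbance found by the inner maximization and $\Pi_{\mathcal{K}}$ projects the gradient onto directions that, to first order, leave the loss on previously learned points unchanged. The symbols $\mathcal{U}$, $J_{\text{new}}$, $\Pi_{\mathcal{K}}$, and $\eta$ are our labels for illustration, not the paper's notation.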
Problem

Research questions and friction points this paper is trying to address.

Developing robust Neural ODEs resilient to control parameter disturbances
Solving minimax problem for non-convex functional over infinite-dimensional space
Enabling models to learn new data while maintaining previous performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Iterative training algorithm for Neural ODEs
Solves a flat-minima-inspired minimax problem for robustness
Uses projected gradient descent on Banach subspace
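To make the structure of the algorithm concrete, here is a toy sketch of sequential min-max training with a projected update direction. It replaces the neural ODE with a linear surrogate model (the loss for each point is a squared residual in the parameters), so the function names, the NumPy implementation, and all hyperparameters are our assumptions for illustration, not the paper's method:

```python
import numpy as np

def worst_case_perturbation(theta, a, y, eps, steps=20, lr=0.5):
    """Inner maximization: PGD ascent on the loss over the ball ||delta|| <= eps."""
    delta = np.zeros_like(theta)
    for _ in range(steps):
        r = a @ (theta + delta) - y        # residual of the linear surrogate
        delta += lr * 2 * r * a            # ascent on (a.(theta+delta) - y)^2
        n = np.linalg.norm(delta)
        if n > eps:
            delta *= eps / n               # project back onto the eps-ball
    return delta

def project_out(g, kept):
    """Remove components of g along the gradients of already-learned points,
    so the update does not change their loss to first order."""
    for a in kept:
        a = a / np.linalg.norm(a)
        g = g - (g @ a) * a
    return g

def minimax_sequential_fit(points, eps=0.1, outer_steps=200, lr=0.05, dim=4):
    """Introduce points one at a time; minimize the worst-case loss on the
    new point while restricting updates to preserve earlier points."""
    theta = np.zeros(dim)
    kept = []                              # constraint gradients of learned points
    for a, y in points:
        for _ in range(outer_steps):
            delta = worst_case_perturbation(theta, a, y, eps)
            r = a @ (theta + delta) - y
            g = 2 * r * a                  # outer gradient at the worst case
            theta -= lr * project_out(g, kept)
        kept.append(a)
    return theta
```

With two points whose gradient directions are orthogonal, fitting the second point leaves the fit of the first intact, which is the "tuning without forgetting" behavior the projection is meant to capture; the real algorithm works over an infinite-dimensional Banach control subspace rather than a finite parameter vector.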
Erkan Bayram
Department of Electrical and Computer Engineering and Coordinated Science Laboratory, University of Illinois Urbana-Champaign, Urbana, IL 61801
Mohamed-Ali Belabbas
Professor
Control Theory, Geometric Control, Learning, Dynamical Systems
Tamer Başar
Swanlund Endowed Chair Emeritus & CAS Professor Emeritus of ECE, University of Illinois
Control, Communications, Game Theory, Networks, Optimization