An Invariant Compiler for Neural ODEs in AI-Accelerated Scientific Simulation

📅 2026-03-24
📈 Citations: 0
Influential citations: 0
🤖 AI Summary
This work addresses the tendency of unconstrained neural ordinary differential equations (Neural ODEs) to violate domain-specific invariants, such as physical conservation laws, in scientific simulations, which distorts long-term predictions. To resolve this, the authors propose the invariant compiler, a framework that treats scientific invariants as first-class constructs in Neural ODE architecture design. Leveraging LLM-driven program synthesis, the framework automatically transforms generic Neural ODE specifications into structure-preserving models whose trajectories remain confined to the valid manifold. By construction, the resulting models enforce prescribed invariants exactly in continuous time (and up to numerical integration error in practice), without requiring post-hoc regularization. This approach enhances the credibility and physical plausibility of long-term simulations and establishes a systematic, cross-domain design paradigm for invariant-aware scientific machine learning.
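The "structure-preserving by construction" idea can be illustrated with a minimal sketch. One standard way to keep trajectories on the level set of a scalar invariant I is to project the learned vector field onto the tangent space of {x : I(x) = c}, so that dI/dt = ⟨∇I, f⟩ = 0 along the flow. The sketch below assumes this projection construction; all names (InvariantProjectedODE, energy) are illustrative, not the paper's actual API, and the paper's compiler would synthesize such an architecture from a specification rather than writing it by hand.

```python
import torch
import torch.nn as nn

class InvariantProjectedODE(nn.Module):
    """Neural ODE right-hand side whose flow preserves a scalar invariant I(x)."""

    def __init__(self, dim, invariant):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 64), nn.Tanh(), nn.Linear(64, dim))
        self.invariant = invariant  # scalar function I(x) to preserve

    def forward(self, t, x):
        f = self.net(x)  # unconstrained learned dynamics
        with torch.enable_grad():
            # gradient of I at x; detached for brevity (a trainable variant
            # would differentiate through this projection as well)
            x_ = x.detach().requires_grad_(True)
            (g,) = torch.autograd.grad(self.invariant(x_), x_)
        # remove the component of f along grad I: the projected field is
        # tangent to {x : I(x) = const}, so dI/dt = <grad I, f_proj> = 0
        return f - (f @ g) / (g @ g).clamp_min(1e-12) * g

# Example: conserve the "energy" I(x) = ||x||^2 / 2 of a 2-D state x.
energy = lambda x: 0.5 * (x ** 2).sum()
ode = InvariantProjectedODE(dim=2, invariant=energy)
```

This projection is one of several constructions compatible with the summary's claims; Hamiltonian or symplectic parameterizations would serve the same role for specific invariant families.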

📝 Abstract
Neural ODEs are increasingly used as continuous-time models for scientific and sensor data, but unconstrained neural ODEs can drift and violate domain invariants (e.g., conservation laws), yielding physically implausible solutions. In turn, this can compound error in long-horizon prediction and surrogate simulation. Existing solutions typically aim to enforce invariance by soft penalties or other forms of regularization, which can reduce overall error but do not guarantee that trajectories will not leave the constraint manifold. We introduce the invariant compiler, a framework that enforces invariants by construction: it treats invariants as first-class types and uses an LLM-driven compilation workflow to translate a generic neural ODE specification into a structure-preserving architecture whose trajectories remain on the admissible manifold in continuous time (and up to numerical integration error in practice). This compiler view cleanly separates what must be preserved (scientific structure) from what is learned from data (dynamics within that structure). It provides a systematic design pattern for invariant-respecting neural surrogates across scientific domains.
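Continuing the hypothetical sketch above, the abstract's caveat "up to numerical integration error in practice" can be made concrete: integrating the projected field with a fixed-step classical RK4 solver, the invariant drifts only by the integrator's discretization error, not because the learned dynamics leave the manifold. The solver choice here is illustrative; the paper does not prescribe one.

```python
def rk4_step(f, t, x, h):
    # one classical Runge-Kutta 4 step for x' = f(t, x)
    k1 = f(t, x)
    k2 = f(t + h / 2, x + h / 2 * k1)
    k3 = f(t + h / 2, x + h / 2 * k2)
    k4 = f(t + h, x + h * k3)
    return x + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

with torch.no_grad():
    x = torch.tensor([1.0, 0.0])
    I0 = energy(x)                    # invariant at t = 0
    for step in range(1000):          # integrate to T = 10 with h = 0.01
        x = rk4_step(ode, step * 1e-2, x, 1e-2)
    drift = (energy(x) - I0).abs().item()
    print(f"|I(x_T) - I(x_0)| = {drift:.2e}")  # bounded by integration error
```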
Problem

Research questions and friction points this paper is trying to address.

Neural ODEs
domain invariants
conservation laws
physically implausible solutions
long-horizon prediction
Innovation

Methods, ideas, or system contributions that make the work stand out.

Invariant Compiler
Neural ODEs
Structure-Preserving Architecture
Scientific Simulation
LLM-Driven Compilation
Fangzhou Yu
Virginia Tech, VA, USA
Yiqi Su
Virginia Tech, VA, USA
Ray Lee
Virginia Tech, VA, USA
Shenfeng Cheng
Virginia Tech, VA, USA
Naren Ramakrishnan
Thomas L. Phillips Professor, Virginia Tech
Forecasting · Machine Learning · Computational epidemiology · Recommender systems · Visual analytics