CFO: Learning Continuous-Time PDE Dynamics via Flow-Matched Neural Operators

๐Ÿ“… 2025-12-04
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
Traditional autoregressive neural operators for time-dependent PDEs suffer from accumulated long-horizon errors and rely on uniform temporal discretization. To address this, we propose the Continuous Flow Operator (CFO), the first framework to use flow matching for learning the PDE right-hand side, enabling direct modeling of continuous-time dynamics without backpropagation through ODE solvers. CFO combines neural operators with trajectory spline fitting to estimate instantaneous time derivatives and construct a probability path approximating the true evolution, thereby training a robust velocity field. The method supports arbitrary-time queries, backward-in-time inference, and temporal resolution invariance. Evaluated on four benchmarks, CFO achieves up to 87% lower prediction error using only 25% of irregularly sampled time points, matches autoregressive baselines with half of their function evaluations at inference, and significantly improves long-term forecasting accuracy and robustness.
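The spline-based derivative estimation described above can be sketched with SciPy. This is a toy 0-D illustration, not the authors' code: a cubic spline is fit to an irregularly sampled trajectory, and its derivative at the knots provides the velocity targets that approximate the PDE right-hand side.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical illustration: fit a cubic spline to an irregularly sampled
# trajectory u(t) and read off instantaneous time derivatives at the knots.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 40)
t[1:-1] += rng.uniform(-0.005, 0.005, 38)  # jitter -> non-uniform time grid
u = np.sin(2 * np.pi * t)                  # toy scalar "trajectory"

spline = CubicSpline(t, u)
du_dt = spline.derivative()(t)  # velocity estimates at the sampled knots

# Sanity check against the analytic derivative 2*pi*cos(2*pi*t)
err = np.max(np.abs(du_dt - 2 * np.pi * np.cos(2 * np.pi * t)))
```

In the paper's setting the trajectory is a spatial field rather than a scalar, but the per-point spline construction is the same idea.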

๐Ÿ“ Abstract
Neural operator surrogates for time-dependent partial differential equations (PDEs) conventionally employ autoregressive prediction schemes, which accumulate error over long rollouts and require uniform temporal discretization. We introduce the Continuous Flow Operator (CFO), a framework that learns continuous-time PDE dynamics without the computational burden of standard continuous approaches, e.g., neural ODE. The key insight is repurposing flow matching to directly learn the right-hand side of PDEs without backpropagating through ODE solvers. CFO fits temporal splines to trajectory data, using finite-difference estimates of time derivatives at knots to construct probability paths whose velocities closely approximate the true PDE dynamics. A neural operator is then trained via flow matching to predict these analytic velocity fields. This approach is inherently time-resolution invariant: training accepts trajectories sampled on arbitrary, non-uniform time grids while inference queries solutions at any temporal resolution through ODE integration. Across four benchmarks (Lorenz, 1D Burgers, 2D diffusion-reaction, 2D shallow water), CFO demonstrates superior long-horizon stability and remarkable data efficiency. CFO trained on only 25% of irregularly subsampled time points outperforms autoregressive baselines trained on complete data, with relative error reductions up to 87%. Despite requiring numerical integration at inference, CFO achieves competitive efficiency, outperforming autoregressive baselines using only 50% of their function evaluations, while uniquely enabling reverse-time inference and arbitrary temporal querying.
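As a toy illustration of the training step the abstract describes, one can regress a model's velocity prediction onto spline-estimated derivatives. Here a least-squares linear fit stands in for the paper's neural operator, and the autonomous dynamics du/dt = -u are our own assumption for illustration:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Generate an irregularly sampled trajectory of du/dt = -u, i.e. u = exp(-t)
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, 3.0, 200))   # arbitrary non-uniform time grid
u = np.exp(-t)

# Spline-estimated velocity targets (stand-ins for the true PDE RHS)
targets = CubicSpline(t, u).derivative()(t)

# "Train" a linear velocity model v(u) = a*u + b by least squares;
# this replaces next-step prediction with direct velocity regression.
A = np.stack([u, np.ones_like(u)], axis=1)
coef, *_ = np.linalg.lstsq(A, targets, rcond=None)
# coef[0] should recover the true RHS slope (-1), coef[1] should be near 0
```

The point of the sketch is the supervision signal: the model never sees a next-step target, only derivative estimates, which is why training tolerates arbitrary, non-uniform time grids.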
Problem

Research questions and friction points this paper is trying to address.

Learning continuous-time PDE dynamics without autoregressive error accumulation
Achieving time-resolution invariance for arbitrary non-uniform temporal sampling
Enhancing long-horizon stability and data efficiency in PDE surrogate modeling
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses flow matching to learn PDE dynamics directly
Fits temporal splines to handle arbitrary time grids
Enables resolution-invariant training and inference via ODE integration
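The inference mode listed above can be sketched with a standard ODE solver. The analytic field of du/dt = -u stands in for a learned velocity model; query times and tolerances are ours, not the paper's:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Stand-in for a learned velocity field: the analytic RHS of du/dt = -u
def velocity(t, u):
    return -u

u0 = np.array([1.0])
t_query = np.array([0.3, 0.7, 1.5, 2.4])  # arbitrary, non-uniform query times
fwd = solve_ivp(velocity, (0.0, 2.4), u0, t_eval=t_query,
                rtol=1e-8, atol=1e-10)

# Reverse-time inference: integrate backward from the final state to t = 0;
# the result should recover the initial condition u0.
bwd = solve_ivp(velocity, (2.4, 0.0), fwd.y[:, -1], t_eval=[0.0],
                rtol=1e-8, atol=1e-10)
```

Because the solution comes from integrating a velocity field rather than chaining fixed-step predictions, any temporal resolution (and either time direction) is available at inference.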
๐Ÿ”Ž Similar Papers
No similar papers found.
Xianglong Hou
Graduate Group in Applied Mathematics and Computational Science, University of Pennsylvania
Xinquan Huang
Department of Mechanical Engineering and Applied Mechanics, University of Pennsylvania
Paris Perdikaris
University of Pennsylvania
Machine learning · AI for Science · Computational Science and Engineering · Uncertainty Quantification