Fluids You Can Trust: Property-Preserving Operator Learning for Incompressible Flows

📅 2026-02-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work proposes a physics-preserving operator learning framework based on kernel expansions to address the high computational cost of traditional numerical methods for incompressible flow and the inability of existing neural operators to simultaneously enforce incompressibility, periodicity, and turbulent dynamics. By mapping input functions to expansion coefficients defined on bases that inherently satisfy multiple physical constraints, the method enables analytically consistent modeling of velocity fields. For the first time, it rigorously and jointly preserves the essential physical properties of incompressible flows at the operator level. In both 2D and 3D laminar and turbulent regimes, the approach reduces relative ℓ² generalization error by up to six orders of magnitude and accelerates training by up to five orders of magnitude compared to state-of-the-art neural operators, achieving an unprecedented balance among accuracy, efficiency, and physical fidelity.

📝 Abstract
We present a novel property-preserving kernel-based operator learning method for incompressible flows governed by the incompressible Navier-Stokes equations. Traditional numerical solvers incur significant computational costs to respect incompressibility. Operator learning offers efficient surrogate models, but current neural operators fail to exactly enforce physical properties such as incompressibility, periodicity, and turbulence. Our method maps input functions to expansion coefficients of output functions in a property-preserving kernel basis, ensuring that predicted velocity fields analytically and simultaneously preserve the aforementioned physical properties. We evaluate the method on challenging 2D and 3D, laminar and turbulent, incompressible flow problems. Our method achieves up to six orders of magnitude lower relative $\ell_2$ errors upon generalization and trains up to five orders of magnitude faster compared to neural operators. Moreover, while our method enforces incompressibility analytically, neural operators exhibit very large deviations. Our results show that our method provides an accurate and efficient surrogate for incompressible flows.
Problem

Research questions and friction points this paper is trying to address.

incompressible flows
property-preserving
neural operators
Navier-Stokes equations
physical constraints
Innovation

Methods, ideas, or system contributions that make the work stand out.

property-preserving
operator learning
incompressible flows
kernel-based method
Navier-Stokes equations