Scale-Consistent Learning for Partial Differential Equations

📅 2025-07-24
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing PDE-informed machine learning models exhibit limited generalization, particularly across varying physical parameters such as Reynolds number and domain scale. To address this, we propose a scale-consistent learning framework that enforces consistency between full-domain solutions and analytically scaled subdomain solutions—leveraging the inherent scale invariance of PDEs. Our method integrates scale-aware data augmentation, a dedicated scale-consistency loss, and multi-scale boundary/parameter reconstruction into a neural operator architecture. A multi-scale matching loss further refines solution fidelity across scales. We evaluate the framework on Burgers, Darcy flow, Helmholtz, and Navier–Stokes equations. Trained solely at Re = 1000, the model generalizes robustly across Re = 250–10,000, achieving an average 34% error reduction over baseline methods. This demonstrates substantial improvement in cross-scale generalization—marking the first approach to explicitly embed scale consistency as a training constraint in PDE learning.

Technology Category

Application Category

📝 Abstract
Machine learning (ML) models have emerged as a promising approach for solving partial differential equations (PDEs) in science and engineering. Previous ML models typically cannot generalize outside the training data; for example, a trained ML model for the Navier-Stokes equations only works for a fixed Reynolds number ($Re$) on a pre-defined domain. To overcome these limitations, we propose a data augmentation scheme based on scale-consistency properties of PDEs and design a scale-informed neural operator that can model a wide range of scales. Our formulation leverages the facts: (i) PDEs can be rescaled, or more concretely, a given domain can be re-scaled to unit size, and the parameters and the boundary conditions of the PDE can be appropriately adjusted to represent the original solution, and (ii) the solution operators on a given domain are consistent on the sub-domains. We leverage these facts to create a scale-consistency loss that encourages matching the solutions evaluated on a given domain and the solution obtained on its sub-domain from the rescaled PDE. Since neural operators can fit to multiple scales and resolutions, they are the natural choice for incorporating scale-consistency loss during training of neural PDE solvers. We experiment with scale-consistency loss and the scale-informed neural operator model on the Burgers' equation, Darcy Flow, Helmholtz equation, and Navier-Stokes equations. With scale-consistency, the model trained on $Re$ of 1000 can generalize to $Re$ ranging from 250 to 10000, and reduces the error by 34% on average of all datasets compared to baselines.
Problem

Research questions and friction points this paper is trying to address.

Generalizing ML models for PDEs beyond training data scales
Solving PDEs across varied domains and parameters consistently
Enhancing neural operators with scale-consistency for multi-scale accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Scale-consistent data augmentation for PDEs
Scale-informed neural operator design
Multi-scale generalization with consistency loss
🔎 Similar Papers
No similar papers found.