ReBaNO: Reduced Basis Neural Operator Mitigating Generalization Gaps and Achieving Discretization Invariance

📅 2025-09-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses key challenges in operator learning for multi-input partial differential equations (PDEs): generalization gaps, lack of discretization invariance, and unnecessarily large models. The authors propose ReBaNO, a data-lean, adaptive operator learning framework inspired by the Reduced Basis Method and Generative Pre-Trained Physics-Informed Neural Networks. Methodologically, ReBaNO uses a mathematically rigorous greedy algorithm to build its network architecture offline, adaptively and from the ground up. Knowledge distillation via task-specific activation functions yields a compact, physics-embedded architecture with minimal online computational cost. Experiments show that ReBaNO eliminates or shrinks the generalization gap relative to state-of-the-art baselines, including PCA-Net, DeepONet, FNO, and CNO, on both in-distribution and out-of-distribution benchmarks, and that it is the only operator learning algorithm tested that achieves strict discretization invariance, i.e., exact independence from the discretization resolution.

📝 Abstract
We propose a novel data-lean operator learning algorithm, the Reduced Basis Neural Operator (ReBaNO), to solve a group of PDEs with multiple distinct inputs. Inspired by the Reduced Basis Method and the recently introduced Generative Pre-Trained Physics-Informed Neural Networks, ReBaNO relies on a mathematically rigorous greedy algorithm to build its network structure offline, adaptively, from the ground up. Knowledge distillation via a task-specific activation function allows ReBaNO to have a compact architecture requiring minimal computational cost online while embedding physics. In comparison to state-of-the-art operator learning algorithms such as PCA-Net, DeepONet, FNO, and CNO, numerical results demonstrate that ReBaNO significantly outperforms them: it eliminates or shrinks the generalization gap on both in- and out-of-distribution tests, and it is the only operator learning algorithm achieving strict discretization invariance.
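To make the greedy offline stage concrete, here is a minimal, self-contained sketch. It is not the authors' implementation: ReBaNO trains a full PINN for each selected input and drives the selection with a PDE-residual error indicator, whereas this toy (the function `greedy_basis` and its least-squares error measure are stand-ins) selects from precomputed snapshot vectors using a plain l2 projection error, purely to illustrate the adaptive selection loop.

```python
import numpy as np

def greedy_basis(snapshots, tol=1e-6, n_max=10):
    """Greedy selection: each step adds the snapshot worst-approximated
    (in l2 projection error) by the span of the current selection."""
    idx = [0]                                    # arbitrary first pick
    while len(idx) < n_max:
        B = snapshots[idx].T                     # columns = current basis
        # Least-squares projection of every snapshot onto span(B)
        coef, *_ = np.linalg.lstsq(B, snapshots.T, rcond=None)
        errs = np.linalg.norm(snapshots.T - B @ coef, axis=0)
        worst = int(np.argmax(errs))
        if errs[worst] < tol:                    # pool is covered; stop
            break
        idx.append(worst)                        # enrich basis with worst case
    return idx

# Toy usage: snapshots of sin(k*x) for several frequencies k.
x = np.linspace(0.0, np.pi, 200)
snaps = np.stack([np.sin(k * x) for k in (1, 2, 3, 4)])
print(greedy_basis(snaps))                       # selected snapshot indices
```

The loop mirrors how ReBaNO sizes its network offline: scan the pool, add the worst-approximated case to the basis, and stop once the largest error falls below the tolerance.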
Problem

Research questions and friction points this paper is trying to address.

Solving PDEs with multiple distinct inputs efficiently
Mitigating generalization gaps in operator learning algorithms
Achieving strict discretization invariance in neural operators
Innovation

Methods, ideas, or system contributions that make the work stand out.

A greedy algorithm adaptively builds the network structure offline
Knowledge distillation via task-specific activation functions yields a compact, physics-embedded architecture (see the sketch after this list)
Strict discretization invariance with minimal online computational cost
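Below is a correspondingly minimal sketch of the online stage, again with hypothetical stand-ins (`phis`, `rebano_online`; simple sine functions replace the pre-trained PINNs that ReBaNO employs as task-specific activation functions). The point it illustrates is discretization invariance: the learned object is a linear combination of continuous basis networks, so the same coefficients can be evaluated on any grid.

```python
import numpy as np

# Stand-ins for the pre-trained basis networks (in ReBaNO: full PINNs
# used as the task-specific activation functions of a tiny network).
phis = [lambda x, k=k: np.sin(k * x) for k in (1, 2, 3)]

def rebano_online(coeffs, query_points):
    """Online surrogate: a linear combination of the fixed basis functions.
    For a new PDE input, only `coeffs` would be computed (in ReBaNO, by
    minimizing the PDE residual); here they are simply given."""
    Phi = np.stack([phi(query_points) for phi in phis])  # (n_basis, n_points)
    return coeffs @ Phi

# The basis functions are continuous maps, so one set of coefficients
# yields the solution on any grid, coarse or fine: this is the sense in
# which the architecture is strictly discretization invariant.
c = np.array([1.0, -0.5, 0.25])
coarse = np.linspace(0.0, np.pi, 11)
fine = np.linspace(0.0, np.pi, 1001)
print(rebano_online(c, coarse).shape, rebano_online(c, fine).shape)  # (11,) (1001,)
```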
Haolan Zheng
Department of Mathematics, University of Massachusetts Dartmouth, North Dartmouth, MA 02747, USA
Yanlai Chen
Professor of Mathematics, University of Massachusetts Dartmouth
Scientific Computing, Numerical Analysis, Reduced Basis Method, Discontinuous Galerkin Finite Element Method, Adaptivity
Jiequn Han
Flatiron Institute, Simons Foundation
Applied Mathematics, Machine Learning
Yue Yu
Department of Mathematics, Lehigh University, Bethlehem, PA 18015, USA