A Physics-informed Multi-resolution Neural Operator

📅 2025-10-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
Operator learning on multi-resolution, irregular meshes faces two challenges: scarce high-quality labeled data and inconsistent resolutions across samples. Method: the paper proposes a physics-informed, data-free neural operator framework that extends the RINO architecture to a fully unsupervised setting. Pre-trained basis functions encode arbitrarily discretized input functions into a latent space; an MLP maps the latent code, together with spatiotemporal coordinates, to the solution field; and PDE constraints, formulated via finite differences, are embedded directly into the loss function as physics-based priors. Contribution/Results: the framework requires no input-output paired data and improves generalization and robustness across mixed coarse and fine meshes. In multi-resolution numerical experiments on Poisson, Darcy flow, and Navier–Stokes problems, the model stably predicts physical fields with accuracy comparable to traditional PDE solvers while drastically reducing dependence on high-fidelity labeled datasets.
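
As a concrete illustration of the encode-then-map pipeline described in the summary, the sketch below projects an arbitrarily discretized input function onto fixed basis functions by least squares to obtain a resolution-independent latent code, and defines an MLP that maps that code plus a spatial coordinate to a solution value. This is a minimal sketch under assumptions: the Gaussian basis, the `encode_latent` helper, and the `LatentOperatorMLP` class are illustrative stand-ins, not the paper's exact RINO components.

```python
import numpy as np
import torch
import torch.nn as nn

def encode_latent(x_pts, f_vals, basis_centers, length_scale=0.1):
    """Project a function sampled at arbitrary locations x_pts (values f_vals)
    onto fixed, pre-trained basis functions via least squares, returning a
    latent code. Gaussian bumps stand in for the learned basis (an assumption)."""
    # Design matrix: each basis function evaluated at the sample locations.
    Phi = np.exp(-((x_pts[:, None] - basis_centers[None, :]) ** 2)
                 / (2.0 * length_scale ** 2))
    z, *_ = np.linalg.lstsq(Phi, f_vals, rcond=None)
    return z  # latent code, independent of the input resolution

class LatentOperatorMLP(nn.Module):
    """MLP mapping (latent code, spatial coordinate) -> solution value u(x)."""
    def __init__(self, latent_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + 1, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def forward(self, z, x):
        # z: (latent_dim,), x: (n_points, 1) -> u: (n_points, 1)
        zx = torch.cat([z.expand(x.shape[0], -1), x], dim=-1)
        return self.net(zx)
```

Because the projection only needs the basis evaluated at whatever sample locations a given realization provides, coarse and fine discretizations of the same input function yield comparable latent codes.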

📝 Abstract
The predictive accuracy of operator learning frameworks depends on the quality and quantity of available training data (input-output function pairs), often requiring substantial amounts of high-fidelity data, which can be challenging to obtain in some real-world engineering applications. These datasets may be unevenly discretized from one realization to another, with the grid resolution varying across samples. In this study, we introduce a physics-informed operator learning approach by extending the Resolution Independent Neural Operator (RINO) framework to a fully data-free setup, addressing both challenges simultaneously. Here, the arbitrarily (but sufficiently finely) discretized input functions are projected onto a latent embedding space (i.e., a vector space of finite dimensions), using pre-trained basis functions. The operator associated with the underlying partial differential equations (PDEs) is then approximated by a simple multi-layer perceptron (MLP), which takes as input a latent code along with spatiotemporal coordinates to produce the solution in the physical space. The PDEs are enforced via a finite difference solver in the physical space. The validation and performance of the proposed method are benchmarked on several numerical examples with multi-resolution data, where input functions are sampled at varying resolutions, including both coarse and fine discretizations.
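
To illustrate how a PDE can be enforced without labeled solution data, the sketch below assembles a finite-difference residual for a 1D Poisson problem, -u''(x) = f(x) with homogeneous Dirichlet boundaries, and uses it as a training loss. The problem choice, grid size, boundary-penalty weight, and the name `poisson_fd_residual_loss` are assumptions for illustration; the paper's benchmarks and finite-difference solver may differ in form.

```python
import torch

def poisson_fd_residual_loss(model, z, f_grid, n_grid=64, bc_weight=10.0):
    """Data-free loss: finite-difference residual of -u''(x) = f(x) on a
    uniform physical grid, plus a penalty enforcing u(0) = u(1) = 0.
    f_grid is the forcing term evaluated on the same n_grid-point grid."""
    x = torch.linspace(0.0, 1.0, n_grid).unsqueeze(-1)   # physical grid
    h = 1.0 / (n_grid - 1)
    u = model(z, x).squeeze(-1)                           # predicted field
    # Central second difference on interior nodes.
    u_xx = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / h ** 2
    residual = -u_xx - f_grid[1:-1]                       # PDE residual
    bc = u[0] ** 2 + u[-1] ** 2                          # boundary penalty
    return (residual ** 2).mean() + bc_weight * bc
```

In training, each forcing sample (at whatever resolution it arrives) would be encoded into a latent code, this loss evaluated on the fixed physical grid, and the MLP parameters updated with a standard gradient-based optimizer, so no input-output solution pairs are required.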
Problem

Research questions and friction points this paper is trying to address.

Addresses operator learning with limited high-fidelity data
Handles uneven and multi-resolution discretized training datasets
Develops physics-informed neural operator for PDE solutions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Physics-informed operator learning without training data
Latent embedding projection for arbitrary discretizations
Multi-layer perceptron approximates PDE operator
Sumanta Roy
Dept. of Civil and Systems Engineering, Johns Hopkins University, Baltimore, MD, USA
Bahador Bahmani
Dept. of Mechanical Engineering, Northwestern University, Evanston, Illinois
Ioannis G. Kevrekidis
Dept. of Chemical and Biomolecular Engineering, Dept. of Applied Mathematics and Statistics, Johns Hopkins University, Baltimore, MD, USA
Michael D. Shields
Johns Hopkins University
uncertainty quantification · computational mechanics · stochastic processes