From Basis to Basis: Gaussian Particle Representation for Interpretable PDE Operators

📅 2026-02-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work proposes an interpretable neural operator framework for learning the dynamics of partial differential equations (PDEs), addressing key limitations of existing neural operators and Transformers: poor interpretability, difficulty capturing local high-frequency structures, and high computational complexity. The approach represents the field in a Gaussian basis, where Gaussian particles explicitly encode geometric information to yield a compact, mesh-independent, and directly visualizable state representation. By introducing a Petrov-Galerkin projection and a novel PG Gaussian attention mechanism in modal space, the method enables efficient cross-scale coupling and computation. It achieves near-linear complexity in the number of spatial samples for a fixed modal budget, naturally supports irregular geometries, and extends seamlessly from 2D to 3D. Experiments on standard PDE benchmarks and real-world datasets demonstrate accuracy competitive with the state of the art while offering intrinsic interpretability.
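To make the "compact, mesh-agnostic, directly visualizable" representation concrete, here is a minimal NumPy sketch of evaluating a field stored as a sum of Gaussian atoms with explicit geometry (centers, anisotropic scales, weights). The function name and the diagonal (per-axis) anisotropy are assumptions for illustration; the paper's exact parameterization is not reproduced here.

```python
import numpy as np

def eval_gaussian_field(query, centers, scales, weights):
    """Evaluate a field represented by anisotropic Gaussian atoms.

    query   : (Q, d) arbitrary evaluation points (mesh-agnostic)
    centers : (N, d) atom centers mu_i
    scales  : (N, d) per-axis standard deviations (diagonal anisotropy; assumed)
    weights : (N,)   atom weights w_i
    """
    # (Q, N, d) normalized offsets from every query point to every atom
    z = (query[:, None, :] - centers[None, :, :]) / scales[None, :, :]
    # u(x) = sum_i w_i * exp(-0.5 * ||z_i||^2)
    return np.exp(-0.5 * np.sum(z**2, axis=-1)) @ weights

# Toy 2D field from three atoms; the same state can be queried at any points.
rng = np.random.default_rng(0)
centers = np.array([[0.2, 0.2], [0.5, 0.8], [0.8, 0.4]])
scales  = np.array([[0.1, 0.3], [0.2, 0.05], [0.15, 0.15]])  # anisotropic
weights = np.array([1.0, -0.5, 0.8])
u = eval_gaussian_field(rng.uniform(size=(100, 2)), centers, scales, weights)
print(u.shape)  # (100,)
```

Because the state is just these N atoms rather than grid values, the same representation serves any resolution, and plotting the atoms directly gives the visual interpretability the summary describes.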

📝 Abstract
Learning PDE dynamics for fluids increasingly relies on neural operators and Transformer-based models, yet these approaches often lack interpretability and struggle with localized, high-frequency structures while incurring quadratic cost in the number of spatial samples. We propose representing fields with a Gaussian basis, where learned atoms carry explicit geometry (centers, anisotropic scales, weights) and form a compact, mesh-agnostic, directly visualizable state. Building on this representation, we introduce a Gaussian Particle Operator that acts in modal space: learned Gaussian modal windows perform a Petrov-Galerkin measurement, and PG Gaussian Attention enables global cross-scale coupling. This basis-to-basis design is resolution-agnostic and achieves near-linear complexity in the number of spatial samples N for a fixed modal budget, supporting irregular geometries and seamless 2D-to-3D extension. On standard PDE benchmarks and real datasets, our method attains accuracy competitive with the state of the art while providing intrinsic interpretability.
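The complexity claim can be illustrated with a hedged sketch of the modal-space pattern the abstract describes: project N particle features onto M learned Gaussian windows (a Petrov-Galerkin-style measurement, where the test windows differ from the Gaussian trial basis), attend among the M modes, and lift back. All names and the normalization choices below are illustrative assumptions, not the paper's implementation; the point is the O(N·M + M²) cost, near-linear in N when M is a fixed modal budget.

```python
import numpy as np

def pg_modal_attention(feats, centers, mode_centers, mode_scales):
    """Illustrative sketch: Gaussian-window measurement + modal self-attention.

    feats        : (N, d) per-particle features
    centers      : (N, p) particle centers
    mode_centers : (M, p) Gaussian modal-window centers, M << N (assumed learned)
    mode_scales  : (M, p) modal-window widths
    """
    # Gaussian test-window responses: (M, N) projection matrix, O(N*M)
    z = (centers[None, :, :] - mode_centers[:, None, :]) / mode_scales[:, None, :]
    P = np.exp(-0.5 * np.sum(z**2, axis=-1))
    P = P / (P.sum(axis=1, keepdims=True) + 1e-8)  # normalize each window
    modes = P @ feats                              # PG measurement: (M, d)
    # Plain softmax self-attention among the M modes: O(M^2 * d)
    A = modes @ modes.T / np.sqrt(feats.shape[1])
    A = np.exp(A - A.max(axis=1, keepdims=True))
    A = A / A.sum(axis=1, keepdims=True)
    mixed = A @ modes
    # Lift back to particles with the transposed windows: O(N*M)
    return P.T @ mixed                             # (N, d)

rng = np.random.default_rng(1)
out = pg_modal_attention(rng.normal(size=(50, 8)), rng.uniform(size=(50, 2)),
                         rng.uniform(size=(12, 2)), np.full((12, 2), 0.3))
print(out.shape)  # (50, 8)
```

With M fixed, doubling the number of particles N only doubles the projection and lifting cost, while the quadratic attention stays confined to the M modes.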
Problem

Research questions and friction points this paper is trying to address.

interpretable PDE operators
neural operators
high-frequency structures
quadratic complexity
fluid dynamics
Innovation

Methods, ideas, or system contributions that make the work stand out.

Gaussian Particle Representation
Neural Operators
Petrov-Galerkin Attention
Basis-to-Basis Learning
Interpretable PDE Solvers