Discontinuous Galerkin finite element operator network for solving non-smooth PDEs

📅 2026-01-07
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of learning solution operators for parametric partial differential equations (PDEs) with discontinuous coefficients. Such problems yield non-smooth solutions that conventional operator learning methods capture poorly, both because those methods rely on large datasets and because they fail to resolve discontinuities. The authors propose a data-free operator learning framework that integrates the Symmetric Interior Penalty Galerkin (SIPG) weak formulation with neural networks: the networks parameterize the local solution coefficients over mesh elements, and training minimizes the SIPG residual. By explicitly modeling solution singularities and discontinuities, the method accurately recovers jump structures in one- and two-dimensional non-smooth PDE problems, and it demonstrates strong parametric generalization, theoretically consistent convergence rates, and robustness.

📝 Abstract
We introduce the Discontinuous Galerkin Finite Element Operator Network (DG-FEONet), a data-free operator learning framework that combines the strengths of the discontinuous Galerkin (DG) method with neural networks to solve parametric partial differential equations (PDEs) with discontinuous coefficients and non-smooth solutions. Unlike traditional operator learning models such as DeepONet and the Fourier Neural Operator, which require large paired datasets and often struggle near sharp features, our approach minimizes the residual of a DG-based weak formulation using the Symmetric Interior Penalty Galerkin (SIPG) scheme. DG-FEONet predicts element-wise solution coefficients via a neural network, enabling data-free training without the need for precomputed input-output pairs. We provide theoretical justification through convergence analysis and validate the model's performance on a series of one- and two-dimensional PDE problems, demonstrating accurate recovery of discontinuities, strong generalization across parameter space, and reliable convergence rates. Our results highlight the potential of combining local discretization schemes with machine learning to achieve robust, singularity-aware operator approximation in challenging PDE settings.
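The residual-minimization idea behind the abstract can be sketched on a 1D model problem: assemble the SIPG system A c = b for -u'' = f with piecewise-linear discontinuous elements; the training loss described in the paper is the norm of this residual, with a neural network mapping the PDE parameter to the coefficient vector c. The sketch below (not the authors' code; parameters such as the element count and penalty value are illustrative assumptions) assembles the SIPG matrix with numpy and, in place of the network, solves the system directly to show what residual the network would drive to zero.

```python
import numpy as np

# Hypothetical 1D SIPG sketch: -u'' = f on (0,1), u(0) = u(1) = 0,
# P1 discontinuous elements. DG-FEONet would train a network to output
# the coefficient vector c minimizing the SIPG residual ||A c - b||;
# here we solve A c = b directly to illustrate that residual.
N = 32                      # number of elements (illustrative choice)
h = 1.0 / N
sigma = 10.0                # interior-penalty parameter (must be large enough)
ndof = 2 * N                # two nodal dofs per element (discontinuous)
A = np.zeros((ndof, ndof))
b = np.zeros(ndof)

# Volume terms: integral of u' v' over each element.
Kloc = (1.0 / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])
for k in range(N):
    A[2*k:2*k+2, 2*k:2*k+2] += Kloc

def face_terms(jump, avg):
    """SIPG face contribution: -{u'}[v] - {v'}[u] + (sigma/h)[u][v]."""
    return (-np.outer(jump, avg) - np.outer(avg, jump)
            + (sigma / h) * np.outer(jump, jump))

# Interior faces between elements k and k+1.
for k in range(N - 1):
    jump = np.zeros(ndof); avg = np.zeros(ndof)
    jump[2*k+1] = 1.0; jump[2*k+2] = -1.0           # [v] = v_left - v_right
    avg[2*k] = -0.5/h; avg[2*k+1] = 0.5/h           # {v'} = mean of slopes
    avg[2*k+2] = -0.5/h; avg[2*k+3] = 0.5/h
    A += face_terms(jump, avg)

# Boundary faces (homogeneous Dirichlet data enforced weakly).
jump = np.zeros(ndof); avg = np.zeros(ndof)
jump[0] = -1.0; avg[0] = -1.0/h; avg[1] = 1.0/h     # face at x = 0
A += face_terms(jump, avg)
jump = np.zeros(ndof); avg = np.zeros(ndof)
jump[-1] = 1.0; avg[-2] = -1.0/h; avg[-1] = 1.0/h   # face at x = 1
A += face_terms(jump, avg)

# Load vector via 2-point Gauss quadrature, f = pi^2 sin(pi x),
# so the exact solution is u = sin(pi x).
f = lambda x: np.pi**2 * np.sin(np.pi * x)
for k in range(N):
    xm = (k + 0.5) * h
    for g in (-1.0 / np.sqrt(3.0), 1.0 / np.sqrt(3.0)):
        x, w = xm + 0.5 * h * g, 0.5 * h
        b[2*k]   += w * f(x) * (1.0 - g) / 2.0      # left shape function
        b[2*k+1] += w * f(x) * (1.0 + g) / 2.0      # right shape function

# Solve the system the network's residual loss targets, then check
# the nodal values against the exact solution.
c = np.linalg.solve(A, b)
xs = np.repeat(np.arange(N) * h, 2) + np.tile([0.0, h], N)
err = np.max(np.abs(c - np.sin(np.pi * xs)))
print(f"max nodal error: {err:.2e}")
```

In the paper's setting, `c` would instead be the output of a neural network evaluated at a PDE parameter (e.g., a coefficient jump location), and `||A c - b||^2` would serve as the data-free training loss over sampled parameters.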
Problem

Research questions and friction points this paper is trying to address.

non-smooth PDEs
discontinuous coefficients
operator learning
discontinuities
parametric PDEs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Discontinuous Galerkin
Operator learning
Data-free training
Non-smooth PDEs
Neural networks