Sparse FEONet: A Low-Cost, Memory-Efficient Operator Network via Finite-Element Local Sparsity for Parametric PDEs

📅 2026-01-02
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the escalating computational cost and degraded accuracy that FEONet suffers on large-scale problems as the number of finite elements grows. Inspired by the local sparsity inherent in finite element methods, the authors propose a sparse neural network architecture that establishes a direct mapping from parameters to solutions, enabling efficient solution of parametric partial differential equations without requiring training data. By incorporating locally sparse connectivity and a data-free training mechanism, the method substantially reduces both computational and memory overhead while offering theoretical guarantees on approximation accuracy and stability. Numerical experiments demonstrate that the proposed approach achieves significantly higher computational efficiency than the original FEONet while maintaining comparable accuracy and robustness.
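The "local sparsity inherent in finite element methods" refers to the fact that a finite element basis function overlaps only its immediate neighbors, so the assembled system matrices are banded rather than dense. A minimal sketch of how such a pattern could constrain a network layer is below; this is an illustration of the general idea, not the paper's actual architecture, and the names (`fem_sparsity_mask`, `MaskedLinear`) and the 1D piecewise-linear mesh are assumptions chosen for simplicity.

```python
import numpy as np

def fem_sparsity_mask(n_nodes: int, bandwidth: int = 1) -> np.ndarray:
    """Boolean mask that is True where two P1 basis functions on a 1D mesh
    share support. For bandwidth=1 this is the tridiagonal pattern of the
    standard stiffness matrix (hypothetical helper for illustration)."""
    idx = np.arange(n_nodes)
    return np.abs(idx[:, None] - idx[None, :]) <= bandwidth

class MaskedLinear:
    """Linear map whose weights are zeroed outside the FEM sparsity pattern,
    mimicking a locally sparse layer (a sketch, not the paper's layer)."""
    def __init__(self, mask: np.ndarray, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.mask = mask
        # Entries outside the local-support pattern are forced to zero.
        self.weight = rng.standard_normal(mask.shape) * mask

    def __call__(self, x: np.ndarray) -> np.ndarray:
        return self.weight @ x

n = 8
mask = fem_sparsity_mask(n)
layer = MaskedLinear(mask)
y = layer(np.ones(n))
# A tridiagonal pattern stores 3n - 2 = 22 weights instead of n^2 = 64,
# which is the kind of memory saving the sparse architecture targets.
print(int(mask.sum()), n * n)
```

The saving grows with mesh size: on a mesh with $N$ nodes a banded pattern needs $O(N)$ weights where a dense layer needs $O(N^2)$, which is why the dense original FEONet degrades as the element count increases.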

📝 Abstract
In this paper, we study the finite element operator network (FEONet), an operator-learning method for parametric problems, originally introduced in J. Y. Lee, S. Ko, and Y. Hong, Finite Element Operator Network for Solving Elliptic-Type Parametric PDEs, SIAM J. Sci. Comput., 47(2), C501-C528, 2025. FEONet realizes the parameter-to-solution map on a finite element space and admits a training procedure that does not require training data, while exhibiting high accuracy and robustness across a broad class of problems. However, its computational cost increases and its accuracy may deteriorate as the number of elements grows, posing notable challenges for large-scale problems. In this paper, we propose a new sparse network architecture motivated by the structure of finite elements to address this issue. Through extensive numerical experiments, we show that the proposed sparse network achieves substantial improvements in computational cost and efficiency while maintaining comparable accuracy. We also establish theoretical results demonstrating that the sparse architecture can approximate the target operator effectively and provide a stability analysis ensuring reliable training and prediction.
Problem

Research questions and friction points this paper is trying to address.

parametric PDEs
finite element method
operator learning
computational cost
scalability
Innovation

Methods, ideas, or system contributions that make the work stand out.

Sparse FEONet
finite element sparsity
operator learning
parametric PDEs
memory-efficient architecture