A Multiple Transferable Neural Network Method with Domain Decomposition for Elliptic Interface Problems

📅 2025-02-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses elliptic interface problems whose solutions and derivatives exhibit discontinuities across interfaces. We propose Multi-TransNet, a physics-informed neural network framework based on non-overlapping domain decomposition, integrating both strong and weak coupling formulations of interface conditions with transferable neural networks (TransNets). Key innovations include: (i) a subdomain-adaptive neuron allocation strategy; (ii) a global uniform distribution preservation mechanism; (iii) an empirical formula linking neuron shape, coverage radius, and count; and (iv) adaptive normalization of loss-weighting parameters. Comprehensive evaluation on 2D and 3D multi-interface problems—including cases with large diffusion coefficient contrasts—demonstrates that Multi-TransNet achieves superior accuracy, computational efficiency, and robustness compared to state-of-the-art methods, while substantially reducing hyperparameter tuning effort.
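The fixed-hidden-layer idea behind TransNet can be sketched as follows. This is a hypothetical re-implementation, not the authors' code: the hidden-layer parameters (random unit directions `W`, offsets `b`, and the steepness `shape`) are chosen up front so neurons cover a ball of radius `radius`, and only the linear output layer is fitted — here by least squares on plain function values rather than on PDE residuals, purely for illustration.

```python
import numpy as np

# Minimal sketch of the TransNet building block (hypothetical names and
# parameter values): the hidden layer is fixed in advance -- random unit
# directions w_i and offsets b_i spread neurons roughly uniformly over a
# ball of radius R -- and only the linear output layer is solved for.

rng = np.random.default_rng(0)
dim, n_neurons, radius, shape = 2, 200, 1.0, 2.0  # "shape" = assumed neuron steepness

# Pre-determined hidden layer: unit directions and offsets covering the ball
W = rng.standard_normal((n_neurons, dim))
W /= np.linalg.norm(W, axis=1, keepdims=True)
b = rng.uniform(-radius, radius, n_neurons)

def hidden(x):
    """Fixed tanh ridge features; x has shape (n_points, dim)."""
    return np.tanh(shape * (x @ W.T + b))

# Sample points in the unit disk and a smooth target to approximate
pts = rng.uniform(-radius, radius, (2000, dim))
pts = pts[np.linalg.norm(pts, axis=1) <= radius]
target = np.sin(np.pi * pts[:, 0]) * np.cos(np.pi * pts[:, 1])

# Output layer via a single linear least-squares solve
A = hidden(pts)
coef, *_ = np.linalg.lstsq(A, target, rcond=None)
err = np.max(np.abs(A @ coef - target))
print(f"max fit error: {err:.2e}")
```

Because the hidden layer never changes, the only trainable object is `coef`, which is why a direct least-squares solver can replace gradient descent entirely.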

📝 Abstract
The transferable neural network (TransNet) is a two-layer shallow neural network with pre-determined and uniformly distributed neurons in the hidden layer, so that least-squares solvers can be used to compute its output-layer parameters when applied to the solution of partial differential equations. In this paper, we integrate the TransNet technique with the nonoverlapping domain decomposition and the interface conditions to develop a novel multiple transferable neural network (Multi-TransNet) method for solving elliptic interface problems, which typically contain discontinuities in both solutions and their derivatives across interfaces. We first propose an empirical formula for the TransNet to characterize the relationship between the radius of the domain-covering ball, the number of hidden-layer neurons, and the optimal neuron shape. In the Multi-TransNet method, we assign each subdomain one distinct TransNet with an adaptively determined number of hidden-layer neurons to maintain the globally uniform neuron distribution across the entire computational domain, and then unite all the subdomain TransNets together by incorporating the interface condition terms into the loss function. The empirical formula is also extended to the Multi-TransNet and further employed to estimate appropriate neuron shapes for the subdomain TransNets, greatly reducing the parameter tuning cost. Additionally, we propose a normalization approach to adaptively select the weighting parameters for the terms in the loss function. Ablation studies and extensive experiments with comparison tests on different types of elliptic interface problems with low to high contrast diffusion coefficients in two and three dimensions are carried out to numerically demonstrate the superior accuracy, efficiency, and robustness of the proposed Multi-TransNet method.
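The coupling of subdomain networks through interface terms can be illustrated on a toy 1D elliptic interface problem. Everything below is an illustrative assumption, not the paper's setup: a manufactured problem -(βᵢ uᵢ′)′ = f with a coefficient jump at x = 0.5, one fixed-feature "TransNet" per subdomain (tanh bumps with assumed centers and shape), and both output layers obtained from one linear least-squares system whose rows encode the PDE residuals, the boundary conditions, the solution jump, and flux continuity.

```python
import numpy as np

# Hedged 1D sketch of the Multi-TransNet coupling (illustrative problem):
#   -(beta_i u_i')' = f on (0, 0.5) and (0.5, 1), interface at alpha = 0.5,
# with prescribed solution jump [u] = gD and flux continuity [beta u'] = 0.
# Exact solution: u_i(x) = sin(pi x) / beta_i on each side.

def feats(x, c, s):
    """Fixed tanh features with centers c and shape s, plus derivatives."""
    t = np.tanh(s * (x[:, None] - c[None, :]))
    d1 = s * (1 - t**2)              # first derivative
    d2 = -2 * s**2 * t * (1 - t**2)  # second derivative
    return t, d1, d2

beta1, beta2, alpha, s, n = 1.0, 10.0, 0.5, 3.0, 60
f = lambda x: np.pi**2 * np.sin(np.pi * x)   # same source on both sides
gD = 1.0 / beta2 - 1.0 / beta1               # solution jump [u] at alpha

c1, c2 = np.linspace(0.0, alpha, n), np.linspace(alpha, 1.0, n)  # neuron centers
x1, x2 = np.linspace(0.0, alpha, 120), np.linspace(alpha, 1.0, 120)

t1, _, dd1 = feats(x1, c1, s)
t2, _, dd2 = feats(x2, c2, s)
ta1, da1, _ = feats(np.array([alpha]), c1, s)
ta2, da2, _ = feats(np.array([alpha]), c2, s)
tb1, _, _ = feats(np.array([0.0]), c1, s)
tb2, _, _ = feats(np.array([1.0]), c2, s)

Z = np.zeros
rows = [
    np.hstack([-beta1 * dd1, Z((len(x1), n))]),  # PDE residual, subdomain 1
    np.hstack([Z((len(x2), n)), -beta2 * dd2]),  # PDE residual, subdomain 2
    np.hstack([tb1, Z((1, n))]),                 # boundary: u(0) = 0
    np.hstack([Z((1, n)), tb2]),                 # boundary: u(1) = 0
    np.hstack([-ta1, ta2]),                      # interface: u2 - u1 = gD
    np.hstack([-beta1 * da1, beta2 * da2]),      # interface: flux continuity
]
rhs = np.concatenate([f(x1), f(x2), [0.0, 0.0, gD, 0.0]])
coef, *_ = np.linalg.lstsq(np.vstack(rows), rhs, rcond=None)

u1, u2 = t1 @ coef[:n], t2 @ coef[n:]
err = max(np.max(np.abs(u1 - np.sin(np.pi * x1) / beta1)),
          np.max(np.abs(u2 - np.sin(np.pi * x2) / beta2)))
print(f"max error vs exact: {err:.2e}")
```

In this sketch all rows enter the least-squares system with equal weight; the paper's normalization approach would instead adaptively rescale the PDE, boundary, and interface terms against one another.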
Problem

Research questions and friction points this paper is trying to address.

Solving elliptic interface problems with discontinuous solutions and derivatives across interfaces
Integrating neural networks with nonoverlapping domain decomposition
Reducing the cost of hyperparameter tuning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Two-layer shallow networks with pre-determined hidden neurons
Nonoverlapping domain decomposition with interface coupling in the loss
Empirical formula for adaptive neuron shape estimation
Tianzheng Lu
School of Mathematical Sciences, Beihang University, Beijing 100191, China
Lili Ju
Professor of Mathematics, University of South Carolina
Numerical PDEs · Scientific Machine Learning · Computational Geoscience · Computer Vision
Liyong Zhu
School of Mathematical Sciences, Beihang University, Beijing 100191, China