A Learning-based Domain Decomposition Method

📅 2025-07-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
Efficiently solving partial differential equations (PDEs) on complex geometries remains challenging for existing neural network solvers, which suffer from poor generalization, an inability to handle discontinuous media, and limited scalability. Method: We propose the first learning-based PDE solver that integrates a pre-trained neural operator into a domain decomposition framework: a physics-pretrained neural operator (PPNO) is pre-trained on simple geometries and deployed as a surrogate model for each subdomain, and a theoretical result establishes the existence of such neural operator approximations within the domain decomposition solution of PDEs. Contribution/Results: The method generalizes strongly to unseen microstructures and is resolution-invariant without retraining, adapting seamlessly to complex geometries and multiscale discontinuous media. On elliptic PDE benchmarks, it significantly outperforms state-of-the-art methods in accuracy, computational efficiency, and cross-domain generalization.
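The core idea above — decompose the domain, then solve each subdomain with a learned surrogate inside a classical iteration — can be sketched with a minimal 1D example. This is not the paper's PPNO: here the "surrogate" is an exact finite-difference solve standing in for neural operator inference, and the two-subdomain overlapping alternating Schwarz loop, grid sizes, and overlap width are illustrative assumptions.

```python
import numpy as np

def local_solve(f_local, left_bc, right_bc, h):
    """Dirichlet solve of -u'' = f on one subdomain.

    Stand-in for the pre-trained surrogate: in the L-DDM setting this call
    would be replaced by inference with the subdomain neural operator.
    """
    n = len(f_local)  # interior points of the subdomain
    # Standard second-order tridiagonal discretization of -u''
    A = (np.diag(2.0 * np.ones(n))
         - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    rhs = f_local.copy()
    rhs[0] += left_bc / h**2    # fold boundary data into the RHS
    rhs[-1] += right_bc / h**2
    return np.linalg.solve(A, rhs)

def schwarz_poisson(f, n=99, overlap=10, iters=50):
    """Overlapping alternating Schwarz iteration for -u'' = f on [0, 1]
    with u(0) = u(1) = 0, using two overlapping subdomains."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)  # interior grid points
    u = np.zeros(n)
    mid = n // 2
    lo, hi = mid - overlap, mid + overlap  # overlapping split
    fv = f(x)
    for _ in range(iters):
        # Subdomain 1: points [0, hi), right boundary value read from u
        u[:hi] = local_solve(fv[:hi], 0.0, u[hi], h)
        # Subdomain 2: points [lo, n), left boundary from the updated u
        u[lo:] = local_solve(fv[lo:], u[lo - 1], 0.0, h)
    return x, u

# Example: -u'' = pi^2 sin(pi x) has exact solution u = sin(pi x)
x, u = schwarz_poisson(lambda x: np.pi**2 * np.sin(np.pi * x))
err = np.max(np.abs(u - np.sin(np.pi * x)))
```

The design point the paper exploits is that each `local_solve` sees only a simple subdomain problem with boundary data, which is exactly the setting a neural operator can be pre-trained on; the outer Schwarz-style iteration then composes those local solves into a solution on the large, complex domain.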

📝 Abstract
Recent developments in mechanical, aerospace, and structural engineering have driven a growing need for efficient ways to model and analyse structures at much larger and more complex scales than before. While established numerical methods like the Finite Element Method remain reliable, they often struggle with computational cost and scalability when dealing with large and geometrically intricate problems. In recent years, neural network-based methods have shown promise because of their ability to efficiently approximate nonlinear mappings. However, most existing neural approaches are still largely limited to simple domains, which makes them difficult to apply to real-world PDEs involving complex geometries. In this paper, we propose a learning-based domain decomposition method (L-DDM) that addresses this gap. Our approach uses a single pre-trained neural operator, originally trained on simple domains, as a surrogate model within a domain decomposition scheme, allowing us to tackle large and complicated domains efficiently. We provide a general theoretical result on the existence of neural operator approximations in the context of domain decomposition solution of abstract PDEs. We then demonstrate our method by accurately approximating solutions to elliptic PDEs with discontinuous microstructures in complex geometries, using a physics-pretrained neural operator (PPNO). Our results show that this approach not only outperforms current state-of-the-art methods on these challenging problems, but also offers resolution-invariance and strong generalization to microstructural patterns unseen during training.
Problem

Research questions and friction points this paper is trying to address.

Efficient modeling of large-scale complex structures
Overcoming computational limits of traditional numerical methods
Extending neural approaches to real-world complex geometries
Innovation

Methods, ideas, or system contributions that make the work stand out.

Learning-based domain decomposition method (L-DDM)
Pre-trained neural operator for complex domains
Physics-pretrained neural operator (PPNO) for PDEs