Interpolation Neural Network-Tensor Decomposition (INN-TD): a scalable and interpretable approach for large-scale physics-based problems

📅 2025-03-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the high computational cost, limited accuracy, and poor interpretability of deep learning models in large-scale, high-dimensional partial differential equation (PDE) modeling, this paper proposes a sparse, interpretable neural network architecture that integrates finite element interpolation functions with tensor decomposition (CP/Tucker). The method embeds locally supported finite element basis functions into the network structure, tightly unifying machine learning with classical numerical methods. Tensor decomposition enables parameter-efficient sparsification, significantly reducing memory footprint and training overhead. The resulting framework delivers high accuracy, strong scalability, and physical interpretability for both forward and inverse problems involving high-dimensional parametric PDEs, and is particularly suited to industrial-grade physics-informed simulation and optimization tasks.
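The core idea described above — a CP-style separable expansion whose one-dimensional factors are finite element interpolants — can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation; the function names, the hat-basis construction, and the rank-1 example are all invented for demonstration:

```python
import numpy as np

def hat_basis(nodes, x):
    # Each column j is the piecewise-linear "hat" FE basis function that is
    # 1 at nodes[j] and 0 at every other node, evaluated at the points x.
    return np.column_stack(
        [np.interp(x, nodes, np.eye(len(nodes))[j]) for j in range(len(nodes))]
    )

def cp_interpolant(nodes_x, nodes_y, A, B):
    # Rank-R separable (CP-style) interpolant:
    #   u(x, y) ~ sum_r f_r(x) * g_r(y)
    # where f_r, g_r are 1D FE interpolants with nodal coefficients
    # A[:, r] and B[:, r]. Storage is R*(Nx + Ny) values instead of Nx*Ny.
    def u(x, y):
        Fx = hat_basis(nodes_x, x) @ A   # shape (len(x), R)
        Gy = hat_basis(nodes_y, y) @ B   # shape (len(y), R)
        return Fx @ Gy.T                 # sums the R separable terms
    return u

# Example: u(x, y) = sin(x) * cos(y) is exactly rank 1 in CP form, so a
# single pair of 1D nodal-coefficient vectors suffices, up to the usual
# O(h^2) piecewise-linear interpolation error.
nodes_x = np.linspace(0.0, np.pi, 201)
nodes_y = np.linspace(0.0, np.pi, 201)
u = cp_interpolant(nodes_x, nodes_y,
                   np.sin(nodes_x)[:, None], np.cos(nodes_y)[:, None])

xs = np.linspace(0.0, np.pi, 50)
err = np.abs(u(xs, xs) - np.outer(np.sin(xs), np.cos(xs))).max()
```

The memory argument is visible directly in the shapes: a rank-R CP representation on Nx x Ny nodes stores R(Nx + Ny) coefficients rather than a dense NxNy grid, which is what makes the approach scale to high-dimensional problems.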

📝 Abstract
Deep learning has been extensively employed as a powerful function approximator for modeling physics-based problems described by partial differential equations (PDEs). Despite their popularity, standard deep learning models often demand prohibitively large computational resources and yield limited accuracy when scaling to large-scale, high-dimensional physical problems. Their black-box nature further hinders their application in industrial problems where interpretability and high precision are critical. To overcome these challenges, this paper introduces Interpolation Neural Network-Tensor Decomposition (INN-TD), a scalable and interpretable framework that combines the merits of machine learning and finite element methods for modeling large-scale physical systems. By integrating locally supported interpolation functions from finite element methods into the network architecture, INN-TD achieves a sparse learning structure with enhanced accuracy, faster training/solving speed, and reduced memory footprint. This makes it particularly effective for tackling large-scale, high-dimensional parametric PDEs in training, solving, and inverse optimization tasks where high precision is required.
Problem

Research questions and friction points this paper is trying to address.

Addresses computational inefficiency in large-scale physics-based problems.
Enhances accuracy and interpretability in high-dimensional PDE modeling.
Reduces memory usage and speeds up training for physical systems.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Embeds locally supported finite element interpolation functions into the neural network architecture.
Uses tensor decomposition (CP/Tucker) for a sparse, parameter-efficient structure with reduced memory footprint.
Scales to high-dimensional parametric PDEs in forward, inverse, and optimization settings.