🤖 AI Summary
Deep learning approaches for solving elliptic partial differential equations (PDEs) suffer from poor interpretability, reliance on automatic differentiation and collocation point sampling, and excessive parameter counts. Method: This paper proposes two lightweight linear convolutional neural network (CNN) methods operating within a discontinuous Galerkin (DG) discretization framework to learn numerical solutions end-to-end: (i) a supervised approach that directly maps problem coefficients to DG solutions, and (ii) an unsupervised approach that embeds the DG residual as a physics-informed loss, eliminating both labeled data and automatic differentiation. Contribution/Results: To our knowledge, this is the first work employing minimal linear CNNs to learn DG solutions. The models use over 90% fewer parameters than comparable numerics-based neural networks while achieving accuracy on par with the exact solutions and conventional DG solvers. They offer high interpretability, low computational overhead, and strong generalization across heterogeneous coefficients and domain geometries.
📝 Abstract
In recent years, there has been increasing interest in using deep learning and neural networks to tackle scientific problems, particularly solving partial differential equations (PDEs). However, many neural network-based methods, such as physics-informed neural networks, depend on automatic differentiation and the sampling of collocation points, which can result in a lack of interpretability and lower accuracy than traditional numerical methods. To address these issues, we propose two approaches for learning discontinuous Galerkin solutions to PDEs using small linear convolutional neural networks. Our first approach is supervised and depends on labeled data, while our second approach is unsupervised and does not rely on any training data. In both cases, our methods use substantially fewer parameters than similar numerics-based neural networks while demonstrating accuracy comparable to both the exact and DG solutions for elliptic problems.
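The unsupervised idea, stripped to its essentials, can be illustrated with a toy sketch. The paper works with a DG discretization; as a stand-in, the sketch below uses a finite-difference Laplacian `A` for the 1D Poisson problem `-u'' = f` and a single linear convolution kernel `k` as the "network" mapping source terms to candidate solutions `u = conv(f, k)`. The training signal is the discrete residual `||A u - f||^2` of random unlabeled source terms, so no labeled solutions and no automatic differentiation through the PDE are needed; because the model is linear, minimizing this loss reduces to a linear least-squares problem. All names and sizes here are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

# Toy stand-in for the discretized operator: finite-difference -d^2/dx^2
# with zero Dirichlet BCs (the paper uses a DG stiffness matrix instead).
n, m = 32, 7                                  # grid points, conv kernel width
h = 1.0 / (n + 1)
A = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2

def conv_features(f):
    """Shifted copies of f so that the conv output equals C @ k (linear in k)."""
    pad = m // 2
    fp = np.pad(f, pad)                       # zero padding at the boundary
    return np.stack([fp[i:i + n] for i in range(m)], axis=1)  # shape (n, m)

# Unlabeled training data: random source terms f, no solutions required.
rng = np.random.default_rng(0)
train_f = rng.standard_normal((64, n))

# Residual of sample f at kernel k is A @ (C(f) @ k) - f; stack all samples.
G = np.vstack([A @ conv_features(f) for f in train_f])        # (64*n, m)
b = np.concatenate(train_f)

# "Training": minimize the physics-informed residual loss ||G k - b||^2.
# No autodiff is needed -- the loss is quadratic in the kernel weights.
k, *_ = np.linalg.lstsq(G, b, rcond=None)

# Apply the learned kernel to an unseen source term.
f_test = np.sin(np.pi * np.arange(1, n + 1) * h)
u_pred = conv_features(f_test) @ k
```

Note the deliberate limitation of the toy: a single short kernel cannot represent the dense inverse of an elliptic operator, so the learned model only drives the residual down as far as its receptive field allows; the paper's DG-based construction is what makes small linear CNNs accurate in practice.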