AI Summary
To address the poor convergence and limited accuracy of Physics-Informed Neural Networks (PINNs) arising from optimization over sparse discrete collocation points, this paper proposes a convolutional weighted loss function operating over continuous neighborhoods. Instead of conventional pointwise weighting, the method employs adaptive convolutional kernels defined over local continuous domains, reformulating loss reweighting from a primal-dual optimization perspective to enhance global solution consistency and training stability. By tightly coupling deep learning with partial differential equation (PDE) physical constraints, the framework enables joint optimization. Experiments demonstrate significantly accelerated convergence and a 30–65% reduction in relative $L^2$ error across multiple canonical PDE benchmarks. Moreover, the approach exhibits enhanced robustness to mesh sparsity and measurement noise. The core innovation lies in the first integration of convolutional structures into PINNs' loss-weighting mechanism, enabling a paradigm shift from discrete-point-based to continuous-domain-based loss modeling.
Abstract
Physics-informed neural networks (PINNs) are extensively employed to solve partial differential equations (PDEs) by ensuring that the outputs and gradients of deep learning models adhere to the governing equations. However, constrained by computational limitations, PINNs are typically optimized over a finite set of collocation points, which poses significant challenges in guaranteeing their convergence and accuracy. In this study, we propose a new weighting scheme that adaptively extends the loss-function weights from isolated points to their continuous neighborhood regions. Empirical results show that our weighting scheme consistently reduces the relative $L^2$ errors compared with conventional pointwise weighting.
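To make the neighborhood-weighting idea concrete, the following is a minimal NumPy sketch of a convolution-weighted residual loss. It is an illustration under stated assumptions, not the paper's implementation: the function name `conv_weighted_loss`, the 1-D uniform grid, the uniform 5-point kernel, and the normalization of smoothed residual magnitudes into weights are all hypothetical choices; the paper's adaptive kernels and primal-dual formulation are not reproduced here.

```python
import numpy as np

def conv_weighted_loss(residuals, kernel):
    """Hypothetical sketch: weight each collocation point by the smoothed
    squared-residual magnitude of its local neighborhood, rather than by an
    isolated pointwise value."""
    sq = residuals ** 2
    # Convolve squared residuals with a local kernel: a discrete proxy for
    # weighting over a continuous neighborhood region.
    smoothed = np.convolve(sq, kernel, mode="same")
    # Normalize the smoothed magnitudes into adaptive weights (sum to 1).
    weights = smoothed / smoothed.sum()
    # Weighted residual loss.
    return np.sum(weights * sq)

# Example: a stand-in residual field on 100 grid points (not from the paper).
x = np.linspace(0.0, 1.0, 100)
residuals = np.sin(4 * np.pi * x) * np.exp(-x)
kernel = np.ones(5) / 5.0  # uniform 5-point neighborhood kernel (illustrative)
loss = conv_weighted_loss(residuals, kernel)
print(f"conv-weighted loss: {loss:.6f}")
```

Because the weights concentrate on neighborhoods with large smoothed residuals, high-error regions contribute more to the loss than under a uniform pointwise mean, which is the mechanism the summary attributes to the adaptive convolutional weighting.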