🤖 AI Summary
To address the high computational cost and difficulty in balancing accuracy and efficiency arising from direct discretization of neural integral equations, this paper introduces spectral methods into the neural operator learning framework for the first time, parameterizing and learning integral operators in the frequency domain. The proposed paradigm ensures both theoretical rigor and computational efficiency: theoretically, it establishes rigorous guarantees on operator approximation capacity and numerical convergence under spectral approximation, grounded in integral equation theory and Fourier analysis; practically, it employs an optimization-driven strategy for solving second-kind integral equations, substantially reducing computational complexity while enhancing generalization and interpolation accuracy. Numerical experiments across diverse nonlinear integral equation tasks demonstrate the method’s effectiveness, stability, and superior efficiency–accuracy trade-off compared to state-of-the-art baselines.
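The central idea, parameterizing the integral operator by its Fourier coefficients so that integration becomes pointwise multiplication in the frequency domain, can be illustrated with a minimal sketch. This is a generic illustration of spectral parameterization, not the paper's actual architecture; the decaying `kernel_hat` stands in for coefficients that would be learned during training:

```python
import numpy as np

def spectral_integral_operator(y, kernel_hat):
    """Apply a convolutional integral operator (Ky)(x) = ∫ k(x - t) y(t) dt,
    parameterized directly by the kernel's Fourier coefficients kernel_hat.
    Convolution in space is pointwise multiplication in frequency, so the
    cost is O(n log n) instead of the O(n^2) of a dense quadrature."""
    y_hat = np.fft.fft(y)
    return np.fft.ifft(kernel_hat * y_hat).real

# Illustrative smoothing kernel with decaying spectral coefficients
# (in a trained model these would be learnable parameters).
n = 64
freqs = np.fft.fftfreq(n, d=1.0 / n)       # integer frequencies 0, 1, ..., -1
kernel_hat = 1.0 / (1.0 + freqs**2)
x = np.linspace(0.0, 1.0, n, endpoint=False)
y = np.sin(2 * np.pi * x)                  # pure mode at frequency ±1
Ky = spectral_integral_operator(y, kernel_hat)
# A pure sine is an eigenfunction: it is rescaled by kernel_hat at ±1, i.e. 0.5.
```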
📝 Abstract
Neural integral equations are deep learning models based on the theory of integral equations, where the model consists of an integral operator and the corresponding equation (of the second kind), which is learned through an optimization procedure. This approach makes it possible to leverage the nonlocal properties of integral operators in machine learning, but it is computationally expensive. In this article, we introduce a framework for neural integral equations based on spectral methods that allows us to learn an operator in the spectral domain, resulting in lower computational cost as well as high interpolation accuracy. We study the properties of our method and establish theoretical guarantees on the approximation capabilities of the model and on the convergence of the numerical methods to solutions. We provide numerical experiments that demonstrate the practical effectiveness of the resulting model.
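Once the operator is discretized, the second-kind equation y = f + λKy mentioned above can be solved by successive approximation, which converges whenever λK is a contraction. A minimal sketch, assuming a simple Nyström-style grid discretization (the exponential kernel and the data here are illustrative choices, not taken from the paper):

```python
import numpy as np

def solve_second_kind(f, K, lam=0.5, tol=1e-10, max_iter=500):
    """Fixed-point (Picard) iteration for the discretized second-kind
    equation y = f + lam * K @ y; converges when lam * ||K|| < 1."""
    y = f.copy()
    for _ in range(max_iter):
        y_new = f + lam * K @ y
        if np.linalg.norm(y_new - y) < tol:
            return y_new
        y = y_new
    return y

# Nyström discretization of ∫_0^1 k(x, t) y(t) dt on a uniform grid,
# with rectangle-rule quadrature weights 1/n folded into the matrix K.
n = 50
x = np.linspace(0.0, 1.0, n)
K = np.exp(-np.abs(x[:, None] - x[None, :])) / n
f = np.cos(np.pi * x)
y = solve_second_kind(f, K, lam=0.5)
# Residual of the fixed-point equation; small when the iteration converged.
res = np.linalg.norm(y - (f + 0.5 * K @ y))
```

The same discretized system can also be solved directly as (I - λK) y = f with `np.linalg.solve`; the iteration above is the route that scales to learned, nonlinear operators.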