🤖 AI Summary
This work proposes a neural multigrid method to address the challenge that classical smoothers in traditional multigrid algorithms struggle to effectively suppress high-frequency errors when solving ill-conditioned linear systems arising from the discretization of integral equations. By integrating neural operators with the spectral decomposition principle of multigrid, the approach replaces conventional smoothers with a neural smoother trained offline. A hierarchical loss function and a spectral filtering mechanism are designed to enable each grid level to target specific high-frequency error components. The method requires no retraining for new right-hand sides and demonstrates superior convergence efficiency and robustness over classical solvers across varying problem sizes and regularization parameters. Furthermore, it exhibits strong generalizability to broader contexts, including partial differential equations.
📝 Abstract
Convolution-type integral equations arise frequently in signal and image processing. Discretizing these equations yields large, ill-conditioned linear systems. While the classical multigrid method is effective for solving linear systems derived from partial differential equation (PDE) problems, it fails on integral equations because its smoothers, implemented as conventional relaxation methods, cannot effectively damp the high-frequency components of the error.
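The ill-conditioning is easy to observe numerically. The sketch below discretizes a first-kind convolution equation with a Gaussian kernel on a quadrature grid; the kernel choice, grid size, and regularization weight are illustrative assumptions, not the paper's test problem.

```python
import numpy as np

# Illustrative Nystrom-style discretization of the first-kind equation
#   int_0^1 k(x - y) f(y) dy = g(x),   k(t) = exp(-t^2 / (2 w^2)),
# with a Gaussian kernel (a hypothetical choice for demonstration).
n = 64
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
width = 0.05
A = h * np.exp(-((x[:, None] - x[None, :]) ** 2) / (2.0 * width**2))

# The singular values of A decay rapidly, so the system is severely
# ill-conditioned and the conditioning worsens as n grows.
cond = np.linalg.cond(A)
print(f"condition number of A: {cond:.2e}")

# Tikhonov regularization (A^T A + alpha I) tempers the conditioning
# at the cost of a bias controlled by the weight alpha.
alpha = 1e-6
A_reg = A.T @ A + alpha * np.eye(n)
print(f"condition number after regularization: {np.linalg.cond(A_reg):.2e}")
```

The regularization weight `alpha` trades conditioning against accuracy, which is why robustness across regularization weights (reported in the experiments) matters for a practical solver.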
We propose a novel neural multigrid scheme in which learned neural operators replace classical smoothers. These operators are trained offline; once trained, they generalize to new right-hand-side vectors without retraining, making the scheme an efficient solver. We design level-wise loss functions that incorporate spectral filtering to emulate the multigrid frequency-decomposition principle, ensuring each operator targets a distinct high-frequency spectral band.
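The solver structure can be sketched as a standard two-grid cycle in which the smoothing step is a pluggable callable: in the proposed scheme that callable would be a trained neural operator per level, while the stand-in below uses damped Jacobi on a 1D Poisson model problem purely so the sketch is self-contained and runnable. All function names and the test system are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def poisson1d(n):
    # 1D Poisson model problem (Dirichlet BCs, mesh width 1/(n+1));
    # a stand-in system -- the paper targets discretized integral equations.
    return ((n + 1) ** 2) * (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1))

def restrict(r):
    # full-weighting restriction: fine grid (n points) -> coarse ((n-1)/2)
    return 0.25 * r[0:-2:2] + 0.5 * r[1:-1:2] + 0.25 * r[2::2]

def prolong(e_c, n_fine):
    # linear interpolation: coarse grid -> fine grid
    e = np.zeros(n_fine)
    e[1::2] = e_c
    e[2:-1:2] = 0.5 * (e_c[:-1] + e_c[1:])
    e[0] = 0.5 * e_c[0]
    e[-1] = 0.5 * e_c[-1]
    return e

def jacobi_smoother(A, b, x, omega=2.0 / 3.0):
    # classical damped-Jacobi relaxation; in the neural multigrid scheme
    # this step would be a neural operator trained offline for this level
    return x + omega * (b - A @ x) / np.diag(A)

def two_grid_cycle(A, A_c, b, x, smoother, n_pre=2, n_post=2):
    for _ in range(n_pre):
        x = smoother(A, b, x)            # pre-smoothing
    r_c = restrict(b - A @ x)            # restrict residual
    e_c = np.linalg.solve(A_c, r_c)      # exact coarse-grid solve
    x = x + prolong(e_c, b.size)         # coarse-grid correction
    for _ in range(n_post):
        x = smoother(A, b, x)            # post-smoothing
    return x

n = 31
A, A_c = poisson1d(n), poisson1d((n - 1) // 2)
b = np.ones(n)
x = np.zeros(n)
for _ in range(10):
    x = two_grid_cycle(A, A_c, b, x, jacobi_smoother)
res = np.linalg.norm(b - A @ x) / np.linalg.norm(b)
print(f"relative residual after 10 cycles: {res:.2e}")
```

Because the smoother enters only through the `smoother` argument, swapping in a learned per-level operator leaves the cycle structure untouched; the paper's contribution is training those operators (with spectral filtering in the loss) so that each level damps the high-frequency error bands where classical relaxation fails on integral equations.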
Although we focus on integral equations, the framework generalizes to other problem classes, including PDEs. Our experiments demonstrate superior efficiency over classical solvers and robust convergence across varying problem sizes and regularization weights.