🤖 AI Summary
Traditional numerical methods for operator eigenvalue problems suffer from the curse of dimensionality, while existing deep learning approaches generalize poorly and have accuracy that depends heavily on the operator's spectral distribution. To address these challenges, this paper proposes a neural-network-based iterative optimization framework. Its core innovation is a spectral transformation mechanism that combines deflation projection with a filter transform, dynamically reformulating the original problem into an equivalent but more tractable one. This design prevents convergence back to already-computed eigenfunctions and sharpens resolution within the target spectral region. By integrating neural network approximation with differentiable spectral operations, the method enables efficient and stable iterative computation of high-dimensional operator eigenstructures. Experiments demonstrate that the approach significantly outperforms existing learning-based methods, achieving state-of-the-art accuracy while remaining robust and generalizing well across diverse spectral configurations.
📝 Abstract
Operator eigenvalue problems play a critical role in various scientific fields and engineering applications, yet traditional numerical methods are hindered by the curse of dimensionality. Recent deep learning methods provide an efficient way to address this challenge by iteratively updating neural networks. The performance of these methods relies heavily on the spectral distribution of the given operator: larger gaps between the operator's eigenvalues improve precision, so tailored spectral transformations that exploit the spectral distribution can enhance their performance. Based on this observation, we propose the Spectral Transformation Network (STNet). During each iteration, STNet uses approximate eigenvalues and eigenfunctions to perform spectral transformations on the original operator, turning it into an equivalent but easier problem. Specifically, we employ deflation projection to exclude the subspace spanned by already-solved eigenfunctions, thereby reducing the search space and avoiding convergence to existing eigenfunctions. Additionally, our filter transform magnifies eigenvalues in the desired region and suppresses those outside it, further improving performance. Extensive experiments demonstrate that STNet consistently outperforms existing learning-based methods, achieving state-of-the-art accuracy.
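To make the two transformations concrete, here is a minimal NumPy sketch of the classical linear-algebra analogues on a finite matrix discretization of the operator. This is an illustration of the underlying ideas (deflation via an orthogonal projector, spectral filtering via a rational function of the operator), not the paper's actual neural implementation; the function names and the specific filter form are assumptions for exposition.

```python
import numpy as np

def deflation_projection(A, V):
    """Project out the subspace of already-found eigenvectors.

    A: (n, n) symmetric operator (matrix discretization).
    V: (n, k) matrix with orthonormal columns = computed eigenvectors.
    Returns P A P with P = I - V V^T, so the deflated directions are
    mapped to eigenvalue 0 and later iterations cannot re-converge
    to them.
    """
    P = np.eye(A.shape[0]) - V @ V.T
    return P @ A @ P

def rational_filter(A, center, half_width, degree=8):
    """Magnify eigenvalues near `center` and suppress the rest.

    Illustrative rational filter: with B = (A - center I) / half_width,
    the filter (I + B^2)^{-degree} maps eigenvalues inside the window
    [center - half_width, center + half_width] to O(1) values and
    eigenvalues far outside to near zero.
    """
    n = A.shape[0]
    B = (A - center * np.eye(n)) / half_width
    return np.linalg.matrix_power(np.linalg.inv(np.eye(n) + B @ B), degree)
```

For example, deflating `A = diag(1, 2, 3)` against its first eigenvector yields `diag(0, 2, 3)`, and filtering `diag(1, 2, 10)` around `center=2` keeps the middle eigenvalue at 1 while shrinking the outlier at 10 to a negligible value.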