🤖 AI Summary
Modeling parametric non-self-adjoint eigenvalue problems is challenging due to spectral instability and mode switching, which undermine the reliability of conventional single-eigenfunction representations.
Method: We propose a novel paradigm that learns stable, parameter-invariant mappings onto eigensubspaces—replacing pointwise eigenfunction modeling. Theoretically, we prove Lipschitz continuity of the invariant subspace with respect to parameters and derive a rigorous reconstruction error bound. Computationally, we integrate Fourier neural operators, geometry-adaptive Proper Orthogonal Decomposition (POD) bases, and an explicit banded cross-modal mixing mechanism to capture complex spectral dependencies on unstructured grids.
Results: Our method achieves high-accuracy prediction on parametric non-self-adjoint Steklov problems, significantly improves robustness against spectral perturbations, and—critically—enables zero-shot generalization across disparate discretization schemes, marking the first such result in this domain.
📝 Abstract
We consider operator learning for efficiently solving parametric non-selfadjoint eigenvalue problems. To overcome the spectral instability and mode switching inherent in non-selfadjoint operators, we introduce a hybrid framework that learns the stable invariant eigensubspace mapping rather than individual eigenfunctions. We proposed a Deep Eigenspace Network (DEN) architecture integrating Fourier Neural Operators, geometry-adaptive POD bases, and explicit banded cross-mode mixing mechanisms to capture complex spectral dependencies on unstructured meshes. We apply DEN to the parametric non-selfadjoint Steklov eigenvalue problem and provide theoretical proofs for the Lipschitz continuity of the eigensubspace with respect to the parameters. In addition, we derive error bounds for the reconstruction of the eigenspace. Numerical experiments validate DEN's high accuracy and zero-shot generalization capabilities across different discretizations.