🤖 AI Summary
Ill-conditioned linear systems frequently cause divergence of Krylov subspace iterative methods (e.g., CG, GMRES), severely limiting their practicality in large-scale scientific computing. To address this, we propose a general, scalable numerical stabilization framework that systematically enhances robustness against ill-conditioning by embedding condition-number-aware preconditioning and residual regularization directly into the Krylov iteration process. The framework preserves the original Krylov structure, is method-agnostic, and has been integrated into the SciPy solver library. Extensive experiments on synthetic benchmarks and real-world high-dimensional ill-conditioned systems—including discretized partial differential equations and inverse problems—demonstrate that our approach achieves stable convergence across all test cases, consistently outperforms the baseline algorithms in convergence rate, and incurs only controlled increases in memory footprint and computational cost. This work provides a broadly applicable, reliable solution for solving large-scale ill-conditioned linear systems.
📝 Abstract
Iterative solvers for large-scale linear systems, such as Krylov subspace methods, can diverge when the system is ill-conditioned, which significantly limits their practical applicability in high-performance computing. To address this fundamental problem, we propose general algorithmic frameworks that modify Krylov subspace iterative methods so that the resulting algorithms are stable and do not diverge. We apply these frameworks to the corresponding iterative-method implementations in SciPy and demonstrate the efficacy of our stable iterative approach through numerical experiments on a wide range of synthetic and real-world ill-conditioned linear systems.
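To make the problem setting concrete, the sketch below runs SciPy's stock GMRES on a deliberately ill-conditioned system, with and without a simple Jacobi (diagonal) preconditioner. This is not the paper's stabilization framework; it only illustrates the baseline behavior the work addresses, namely how ill-conditioning degrades a Krylov solver and how even a crude preconditioner changes the picture. The matrix construction and preconditioner choice here are our own assumptions for illustration.

```python
# Illustration only: baseline SciPy GMRES on an ill-conditioned system.
# This is NOT the proposed stabilization framework from the paper; it is a
# minimal sketch of the problem setting, assuming a synthetic badly scaled
# tridiagonal matrix and a simple Jacobi (diagonal) preconditioner.
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import gmres, LinearOperator

n = 200
# Badly scaled diagonal spanning 8 orders of magnitude: cond(A) ~ 1e8.
d = np.logspace(0, 8, n)
A = diags([d, 0.1 * np.ones(n - 1), 0.1 * np.ones(n - 1)], [0, -1, 1]).tocsr()
x_true = np.ones(n)
b = A @ x_true

# Plain GMRES: may stagnate or return an inaccurate solution here.
x_plain, info_plain = gmres(A, b, maxiter=500)

# Jacobi preconditioner M ~ diag(A)^{-1}, supplied as a LinearOperator.
M = LinearOperator((n, n), matvec=lambda v: v / d)
x_prec, info_prec = gmres(A, b, M=M, maxiter=500)

rel_err = lambda x: np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print("plain GMRES:         info =", info_plain, " rel. error =", rel_err(x_plain))
print("preconditioned GMRES: info =", info_prec, " rel. error =", rel_err(x_prec))
```

With the diagonal rescaling the preconditioned operator is well conditioned, so GMRES converges quickly (`info == 0`); the unpreconditioned run shows the stagnation that motivates the stabilization frameworks described above.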