🤖 AI Summary
This work addresses the selected inversion problem for large sparse symmetric matrices. We propose a two-phase parallel algorithm built on the sTiles tile-based blocking structure. Methodologically, we introduce the first tile-aware selected inversion framework tailored to structured sparse matrices (e.g., arrowhead matrices), integrating sparse Cholesky factorization, tile-level task scheduling, CPU–GPU heterogeneous collaboration (via OpenMP/MPI/CUDA), and structure-aware memory-access optimization. Our key contributions are: (i) the first deep integration of block-structure awareness with heterogeneous parallelism, with extensibility to general sparse patterns; and (ii) substantial performance gains: up to 13× speedup over Panua-PARDISO on a dual-socket 26-core Intel Xeon server, and 5× acceleration over pure-CPU execution on an NVIDIA A100 GPU, demonstrating significant efficiency improvements for high-bandwidth, compute-intensive selected inversion workloads.
📝 Abstract
Selected inversion is essential for applications such as Bayesian inference, electronic structure calculations, and inverse covariance estimation, where computing only specific elements of large sparse matrix inverses significantly reduces computational and memory overhead. We present an efficient implementation of a two-phase parallel algorithm for computing selected elements of the inverse of a sparse symmetric matrix A, which can be expressed as A = LL^T through sparse Cholesky factorization. Our approach leverages a tile-based structure, focusing on selected dense tiles to optimize computational efficiency and parallelism. While the focus is on arrowhead matrices, the method can be extended to handle general structured matrices. Performance evaluations on a dual-socket 26-core Intel Xeon CPU server demonstrate that sTiles outperforms state-of-the-art direct solvers such as Panua-PARDISO, achieving up to 13× speedup on large-scale structured matrices. Additionally, our GPU implementation on an NVIDIA A100 achieves up to 5× speedup over its CPU counterpart for large, high-bandwidth matrices with high computational intensity. These results underscore the robustness and versatility of sTiles, validating its effectiveness across various densities and problem configurations.
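To make the problem statement concrete, the sketch below illustrates what "selected inversion" means: computing only requested entries of A^{-1} for a symmetric positive-definite arrowhead matrix factored as A = LL^T, without forming the full inverse. This is a naive column-by-column NumPy baseline for illustration only, not the paper's two-phase tile-based sTiles algorithm; the toy matrix, its size, and the set of wanted entries are made-up assumptions.

```python
import numpy as np

# Illustrative sketch (NOT the sTiles algorithm): selected inversion means
# computing only the requested entries of A^{-1}, here via A = L L^T.

n = 6
rng = np.random.default_rng(0)

# Toy symmetric positive-definite arrowhead matrix: dominant diagonal
# plus a dense last row/column (values are arbitrary assumptions).
A = np.diag(rng.uniform(2.0, 3.0, n))
A[-1, :] = A[:, -1] = rng.uniform(0.1, 0.5, n)
A[-1, -1] = float(n)  # keep the matrix diagonally dominant, hence SPD

L = np.linalg.cholesky(A)  # A = L L^T

# Entries of A^{-1} we actually want: the diagonal and the last column,
# matching the nonzero structure of an arrowhead matrix.
wanted = [(i, i) for i in range(n)] + [(i, n - 1) for i in range(n)]

# Solve only for the columns of A^{-1} that contain a wanted entry.
inv_cols = {}
for j in sorted({j for _, j in wanted}):
    e = np.zeros(n)
    e[j] = 1.0
    y = np.linalg.solve(L, e)              # forward solve:  L y = e_j
    inv_cols[j] = np.linalg.solve(L.T, y)  # backward solve: L^T x = y

selected = {(i, j): inv_cols[j][i] for (i, j) in wanted}

# Sanity check against the dense inverse (feasible only at toy sizes).
full_inv = np.linalg.inv(A)
assert all(np.isclose(selected[k], full_inv[k]) for k in wanted)
```

In practice, production selected-inversion codes exploit the sparsity pattern of L directly (e.g., via Takahashi-style recurrences) rather than solving for whole columns as above; the point of the sketch is only that memory and work scale with the number of requested entries, not with a full n×n inverse.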