🤖 AI Summary
This work addresses the challenge of parallel preconditioning for large-scale sparse graph Laplacian linear systems. We propose a GPU-accelerated method for constructing a randomized approximate Cholesky preconditioner. Unlike conventional incomplete factorizations that rely on static sparsity patterns, our approach uses a randomized strategy to decide dynamically which fill-ins to retain, enabling a purely algebraic, low-overhead, fine-grained parallel factorization. We further introduce a dynamic task scheduler driven by a sparse dependency graph, together with CPU/GPU co-optimization, removing the limitations imposed by static structural assumptions. Experiments on graph Laplacian systems show that, compared with state-of-the-art preconditioners, including algebraic multigrid (AMG) and incomplete Cholesky (IC), our method converges significantly faster, improves end-to-end solution time, reduces preprocessing time by an order of magnitude, and attains a GPU speedup exceeding 12×.
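To illustrate the idea behind a dependency-graph-driven scheduler, the following is a minimal, hypothetical sketch (not the paper's actual scheduler, which is dynamic and recomputes dependencies as fill-in changes the sparsity pattern). It treats two vertices as conflicting when they are adjacent or share a neighbor, since eliminating either one writes fill-in into the common neighbor's row, and greedily peels off independent batches that could be eliminated concurrently. The function name `elimination_rounds` and the dict-of-sets graph representation are assumptions for this example.

```python
def elimination_rounds(adj):
    """Partition vertices into batches that can be eliminated in parallel.

    adj: dict mapping each vertex to the set of its neighbors.
    Two vertices conflict if they are adjacent (distance 1) or share a
    common neighbor (distance 2). Each returned batch is a greedy maximal
    set of mutually non-conflicting vertices, i.e., one parallel round.
    Illustrative only: the real scheduler must update the graph with
    fill-in after every round before recomputing dependencies.
    """
    remaining = set(adj)
    rounds = []
    while remaining:
        batch, blocked = [], set()
        for v in sorted(remaining):
            if v in blocked:
                continue
            batch.append(v)
            blocked.add(v)
            for u in adj[v]:
                blocked.add(u)          # distance-1 conflicts
                blocked.update(adj[u])  # distance-2 conflicts
        rounds.append(batch)
        remaining -= set(batch)
    return rounds
```

On a path graph 0-1-2-3, for instance, the endpoints 0 and 3 neither touch nor share a neighbor, so they form the first parallel round.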
📝 Abstract
We introduce a parallel algorithm for constructing a preconditioner for solving a large, sparse linear system whose coefficient matrix is a Laplacian matrix (also known as a graph Laplacian). Such linear systems arise in applications such as the discretization of partial differential equations, spectral graph partitioning, and learning problems on graphs. The preconditioner belongs to the family of incomplete factorizations and is purely algebraic. Unlike traditional incomplete factorizations, the new method employs randomization to determine whether or not to keep fill-ins, i.e., nonzero elements newly generated during Gaussian elimination. Since the sparsity pattern of the randomized factorization is not known in advance, computing such a factorization in parallel is extremely challenging, especially on many-core architectures such as GPUs. Our parallel algorithm dynamically computes the dependencies among the row/column indices of the Laplacian matrix to be factorized and processes independent indices in parallel. Furthermore, unlike previous approaches, our method requires little preprocessing time. We implement the parallel algorithm on multi-core CPUs and GPUs and compare its performance with other state-of-the-art methods.
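The randomized fill-in rule can be sketched as follows. This is a minimal, sequential illustration under stated assumptions, not the paper's implementation: eliminating a vertex of a weighted graph normally creates a clique of fill-in among its neighbors; here each fill edge is kept only with probability `keep_prob` and its weight is rescaled by `1/keep_prob`, so the retained fill is unbiased in expectation. The function name `randomized_cholesky`, the dict-of-dicts graph representation, and the specific sampling rule are assumptions for this example; the paper's sampling strategy and data structures may differ.

```python
import random
from itertools import combinations

def randomized_cholesky(adj, keep_prob=0.5, seed=0):
    """Sketch of randomized elimination on a weighted graph Laplacian.

    adj: dict mapping each vertex to a dict {neighbor: weight}.
    Eliminating vertex v produces fill-in w_uv * w_vx / deg(v) between
    each pair of its neighbors (u, x); each such fill edge is kept with
    probability keep_prob and rescaled to preserve the expectation.
    Returns a list of (vertex, neighbor_weights, degree) factor entries.
    """
    rng = random.Random(seed)
    adj = {v: dict(nbrs) for v, nbrs in adj.items()}  # work on a copy
    factor = []
    for v in sorted(adj):
        nbrs = dict(adj[v])
        deg = sum(nbrs.values())
        factor.append((v, nbrs, deg))
        if deg == 0:
            continue  # already isolated at this stage
        for (u, wu), (x, wx) in combinations(nbrs.items(), 2):
            if rng.random() < keep_prob:  # randomly retain this fill-in
                fill = (wu * wx / deg) / keep_prob
                adj[u][x] = adj[u].get(x, 0.0) + fill
                adj[x][u] = adj[x].get(u, 0.0) + fill
        for u in nbrs:  # remove v from the remaining graph
            del adj[u][v]
        adj[v].clear()
    return factor
```

With `keep_prob=1.0` the sketch reduces to exact elimination: on a unit-weight triangle, eliminating vertex 0 adds the expected fill weight of 0.5 between vertices 1 and 2.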