🤖 AI Summary
In sparse LU factorization, symbolic analysis yields nonzero patterns concentrated along the diagonal and in the bottom-right region, causing severe load imbalance under regular 2D blocking; moreover, existing matrix features inadequately support adaptive blocking. To address this, we propose a structure-aware irregular blocking method: we introduce a novel local nonzero density metric based on diagonal blocks, and integrate fine-grained and coarse-grained blocking strategies to adapt dynamically to dense and sparse subregions. Furthermore, we model task dependencies via a dependency tree and optimize GPU parallelism to achieve balanced workloads both across hierarchical levels and within each level. On a single NVIDIA A100 GPU, our method achieves speedups of 1.50× and 3.32× over PanguLU and SuperLU_DIST, respectively; with four GPUs, it attains speedups of 1.40× and 3.84×, demonstrating significantly improved parallel scalability and efficiency.
📝 Abstract
In sparse LU factorization, the nonzero elements produced by symbolic factorization tend to be concentrated in the diagonal and bottom-right regions of the sparse matrix. Applying regular 2D blocking to this non-uniform distribution therefore leads to workload imbalance across blocks. Moreover, existing matrix features provide little effective guidance for blocking. In this paper, we propose a structure-aware irregular blocking method for numerical factorization. A novel diagonal block-based feature is introduced to effectively characterize the local nonzero distribution of sparse matrices. Based on this feature, we further propose an irregular blocking method that adjusts block sizes according to the local distribution of nonzeros. The strategy applies fine-grained blocks in dense regions and coarse-grained blocks in sparse regions, balancing the nonzeros of blocks both within the same level and across levels of the dependency tree. Experiments demonstrate that, on a single NVIDIA A100 GPU, our irregular blocking method achieves average speedups of 1.50× and 3.32× over PanguLU and the latest SuperLU_DIST, respectively. On 4 NVIDIA A100 GPUs, it achieves speedups of 1.40× and 3.84× over PanguLU and SuperLU_DIST.
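The density-guided fine/coarse blocking idea described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the paper's method: the `diagonal_block_density` metric, the `threshold` parameter, and the 2× coarsening factor for sparse regions are all hypothetical simplifications of the diagonal block-based feature and the adaptive block-size selection the authors describe.

```python
import numpy as np

def diagonal_block_density(A, base):
    """Nonzero density of each base x base block along the diagonal.

    This is an assumed, simplified version of the paper's diagonal
    block-based feature for characterizing local nonzero distribution.
    """
    n = A.shape[0]
    densities = []
    for start in range(0, n, base):
        end = min(start + base, n)
        block = A[start:end, start:end]
        densities.append(np.count_nonzero(block) / block.size)
    return densities

def irregular_block_sizes(A, base=4, threshold=0.5):
    """Choose per-region block sizes along the diagonal.

    Dense regions (density >= threshold) get fine-grained blocks of
    size `base`; sparse regions are merged into coarser 2*base blocks.
    The 2x coarsening factor is an illustrative choice.
    """
    n = A.shape[0]
    dens = diagonal_block_density(A, base)
    sizes, pos = [], 0
    while pos < n:
        if dens[pos // base] >= threshold:
            size = base            # fine block in a dense region
        else:
            size = 2 * base        # coarse block in a sparse region
        size = min(size, n - pos)  # clip at the matrix boundary
        sizes.append(size)
        pos += size
    return sizes

# Toy pattern mimicking the distribution after symbolic factorization:
# sparse top-left, dense bottom-right.
A = np.zeros((8, 8))
A[4:, 4:] = 1.0
print(irregular_block_sizes(A, base=2, threshold=0.5))  # → [4, 2, 2]
```

The sparse top-left quadrant is covered by one coarse 4-wide block, while the dense bottom-right quadrant is split into fine 2-wide blocks, which is the qualitative behavior the abstract attributes to the structure-aware blocking strategy.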