🤖 AI Summary
To address the challenge of simultaneously achieving high performance and flexibility for sparse matrix multiplication (SpMM/SDDMM) on GPUs, this paper introduces Libra, the first sparse computation framework that synergistically leverages both CUDA Cores and Tensor Cores. Methodologically, Libra features: (1) a 2D-aware task partitioning strategy enabling fine-grained load balancing across heterogeneous compute units; and (2) hybrid kernel optimization coupled with GPU-accelerated preprocessing to balance high throughput with low computational redundancy. Evaluated on NVIDIA H100 and RTX 4090 platforms, Libra achieves an average speedup of 3.1× over DTC-SpMM, with peak improvements reaching 9.23×. For end-to-end graph neural network inference, it delivers up to 3.9× speedup, consistently outperforming state-of-the-art approaches. These results demonstrate Libra's effectiveness in unlocking the combined computational potential of modern GPU architectures for sparse workloads.
📝 Abstract
Sparse matrix multiplication operators (i.e., SpMM and SDDMM) are widely used in deep learning and scientific computing. Modern accelerators are commonly equipped with Tensor Cores and CUDA Cores to accelerate sparse operators. The former brings superior computing power but only for structured matrix multiplication, while the latter has relatively lower performance but higher programming flexibility. In this work, we discover that utilizing either resource alone leads to inferior performance for sparse matrix multiplication, due to their respective limitations. To this end, we propose Libra, a systematic approach that enables synergistic computation between CUDA Cores and Tensor Cores to achieve the best performance for sparse matrix multiplication. Specifically, we propose a 2D-aware workload distribution strategy to find the sweet spot of task mapping for different sparse operators, leveraging both the high performance of Tensor Cores and the low computational redundancy of CUDA Cores. In addition, Libra incorporates systematic optimizations for heterogeneous computing, including hybrid load balancing, finely optimized kernel implementations, and GPU-accelerated preprocessing. Extensive experimental results on H100 and RTX 4090 GPUs show that Libra outperforms the state of the art by 3.1× on average (up to 9.23×) over DTC-SpMM and by 2.9× (up to 3.9×) for end-to-end GNN applications. Libra opens up a new perspective on sparse operator acceleration by fully exploiting the heterogeneous computing resources on GPUs.
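To make the core idea concrete, here is a minimal NumPy sketch of a 2D-aware split of a sparse matrix: tiles dense enough to amortize structured matrix units go into a "dense" part (the Tensor-Core-style path), while the remaining scattered nonzeros stay in a sparse residual (the CUDA-Core-style path). The tile sizes, density threshold, and function name are illustrative assumptions, not Libra's actual algorithm; the point is only that SpMM decomposes exactly into the sum of the two paths.

```python
import numpy as np

def partition_2d(A, tile_rows=16, tile_cols=8, density_threshold=0.5):
    """Hypothetical 2D tile-density partitioning (illustration only).

    Returns (dense_part, sparse_part) with dense_part + sparse_part == A.
    Tiles whose nonzero density meets the threshold are routed to the
    dense path; everything else stays in the sparse residual.
    """
    m, n = A.shape
    dense_part = np.zeros_like(A)
    sparse_part = A.copy()
    for i in range(0, m, tile_rows):
        for j in range(0, n, tile_cols):
            tile = A[i:i + tile_rows, j:j + tile_cols]
            if np.count_nonzero(tile) / tile.size >= density_threshold:
                dense_part[i:i + tile_rows, j:j + tile_cols] = tile
                sparse_part[i:i + tile_rows, j:j + tile_cols] = 0.0
    return dense_part, sparse_part

# SpMM is computed as the sum of the two paths: A @ B == D @ B + S @ B.
rng = np.random.default_rng(0)
A = rng.random((32, 32)) * (rng.random((32, 32)) < 0.2)  # ~20% dense
B = rng.random((32, 16))
D, S = partition_2d(A)
assert np.allclose(D @ B + S @ B, A @ B)
```

In a real hybrid kernel, the two products would run on different compute units and their partial results would be accumulated, which is why low redundancy in the split (few explicit zeros in the dense tiles) matters for end-to-end throughput.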