Libra: Synergizing CUDA and Tensor Cores for High-Performance Sparse Matrix Multiplication

📅 2025-06-27
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
To address the challenge of simultaneously achieving high performance and flexibility for sparse matrix multiplication (SpMM/SDDMM) on GPUs, this paper introduces Libra, the first sparse computation framework that synergistically leverages both CUDA cores and Tensor cores. Methodologically, Libra features: (1) a 2D-aware task-partitioning strategy enabling fine-grained load balancing across heterogeneous compute units; and (2) hybrid kernel optimization coupled with GPU-accelerated preprocessing to balance high throughput with low computational redundancy. Evaluated on NVIDIA H100 and RTX 4090 platforms, Libra achieves an average speedup of 3.1× over DTC-SpMM, with peak improvements reaching 9.23×. For end-to-end graph neural network inference, it delivers up to 3.9× speedup, consistently outperforming state-of-the-art approaches. These results demonstrate Libra's effectiveness in unlocking the combined computational potential of modern GPU architectures for sparse workloads.

๐Ÿ“ Abstract
Sparse matrix multiplication operators (i.e., SpMM and SDDMM) are widely used in deep learning and scientific computing. Modern accelerators are commonly equipped with Tensor cores and CUDA cores to accelerate sparse operators. The former brings superior computing power but only for structured matrix multiplication, while the latter has relatively lower performance but higher programming flexibility. In this work, we discover that utilizing either resource alone leads to inferior performance for sparse matrix multiplication, due to their respective limitations. To this end, we propose Libra, a systematic approach that enables synergistic computation between CUDA and Tensor cores to achieve the best performance for sparse matrix multiplication. Specifically, we propose a 2D-aware workload distribution strategy to find the sweet spot of task mapping for different sparse operators, leveraging both the high performance of Tensor cores and the low computational redundancy of CUDA cores. In addition, Libra incorporates systematic optimizations for heterogeneous computing, including hybrid load balancing, finely optimized kernel implementations, and GPU-accelerated preprocessing. Extensive experimental results on H100 and RTX 4090 GPUs show that Libra outperforms the state-of-the-art by on average 3.1x (up to 9.23x) over DTC-SpMM and 2.9x (up to 3.9x) for end-to-end GNN applications. Libra opens up a new perspective for sparse operator acceleration by fully exploiting the heterogeneous computing resources on GPUs.
Problem

Research questions and friction points this paper is trying to address.

Optimizing sparse matrix multiplication using CUDA and Tensor cores
Balancing workload between high-performance and flexible computing resources
Enhancing performance for deep learning and scientific computing applications
Innovation

Methods, ideas, or system contributions that make the work stand out.

Synergizes CUDA and Tensor cores for SpMM and SDDMM
Uses 2D-aware workload distribution strategy
Incorporates hybrid load-balancing optimizations
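The core idea above, routing dense regions of the sparse matrix to Tensor cores and scattered nonzeros to CUDA cores, can be sketched in NumPy. The tile size, density threshold, and routing policy below are illustrative assumptions, not Libra's actual partitioning algorithm; the blocked dense GEMM stands in for Tensor-core MMA instructions and the per-nonzero loop stands in for CUDA-core scalar work.

```python
import numpy as np

def partition_tiles(A, tile=4, density_threshold=0.5):
    """Split sparse matrix A into tile x tile blocks and route each
    nonempty block by density. Threshold and tiling are illustrative
    assumptions, not the paper's actual 2D-aware policy."""
    n, m = A.shape
    dense_tiles, sparse_tiles = [], []
    for i in range(0, n, tile):
        for j in range(0, m, tile):
            blk = A[i:i + tile, j:j + tile]
            nnz = np.count_nonzero(blk)
            if nnz == 0:
                continue  # empty tiles cost nothing on either unit
            if nnz / blk.size >= density_threshold:
                dense_tiles.append((i, j, blk))   # Tensor-core-style path
            else:
                sparse_tiles.append((i, j, blk))  # CUDA-core-style path
    return dense_tiles, sparse_tiles

def hybrid_spmm(A, B, tile=4, density_threshold=0.5):
    """SpMM C = A @ B with sparse A: dense tiles use a blocked GEMM
    (stand-in for Tensor cores, high throughput but padded with zeros),
    sparse tiles iterate nonzeros only (stand-in for CUDA cores,
    no redundant work)."""
    C = np.zeros((A.shape[0], B.shape[1]))
    dense_tiles, sparse_tiles = partition_tiles(A, tile, density_threshold)
    for i, j, blk in dense_tiles:
        # Whole-tile multiply: zeros inside the tile are computed anyway.
        C[i:i + blk.shape[0]] += blk @ B[j:j + blk.shape[1]]
    for i, j, blk in sparse_tiles:
        # Touch only the stored nonzeros of the tile.
        rows, cols = np.nonzero(blk)
        for r, c in zip(rows, cols):
            C[i + r] += blk[r, c] * B[j + c]
    return C
```

Both paths produce identical results; the partitioning only trades throughput against redundant zero-multiplies, which is the tension the 2D-aware strategy balances.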
Jinliang Shi
Beijing University of Posts and Telecommunications, Beijing, China
Shigang Li
Professor, ParCIS Lab, Beijing University of Posts and Telecommunications
High Performance Computing · Deep Learning Systems · Parallel Computing · Computer Architecture
Youxuan Xu
Beijing University of Posts and Telecommunications, Beijing, China
Xueying Wang
Beijing University of Posts and Telecommunications, Beijing, China
Rongtian Fu
Beijing University of Posts and Telecommunications, Beijing, China
Zhi Ma
China Mobile (Hangzhou) Information Technology Co., Ltd.
Edge Intelligence · Deep Learning · LLM
Tong Wu
Beijing University of Posts and Telecommunications, Beijing, China