Accelerating Sparse Matrix-Matrix Multiplication on GPUs with Processing Near HBMs

📅 2025-12-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address performance bottlenecks in sparse general matrix-matrix multiplication (SpGEMM) on GPUs, caused primarily by irregular memory access patterns, this work proposes a hardware-software co-designed acceleration framework. It introduces AIA, a novel near-HBM customized in-memory processing unit, coupled with a hash-driven multi-stage computation scheduling strategy. This design transcends the limitations of purely software-based optimizations by enabling deep memory-compute co-scheduling within GPU heterogeneous architectures. Experimental evaluation demonstrates execution-time reductions of 76.5% and 58.4% relative to cuSPARSE for graph contraction and Markov clustering, respectively. For GNN training, average speedups reach 1.43x over software-only implementations and 1.95x over cuSPARSE, scaling up to 4.18x on large-scale datasets. The framework establishes an efficient, scalable in-memory acceleration paradigm for SpGEMM in graph analytics and GNN workloads.

📝 Abstract
Sparse General Matrix-Matrix Multiplication (SpGEMM) is a fundamental operation in numerous scientific computing and data analytics applications, often bottlenecked by irregular memory access patterns. This paper presents Hash-based Multi-phase SpGEMM on GPU and the Acceleration of Indirect Memory Access (AIA) technique, a novel custom near-memory processing approach to optimizing SpGEMM on GPU HBM. Our hardware-software co-designed framework for SpGEMM demonstrates significant performance improvements over state-of-the-art methods, particularly in handling complex, application-specific workloads. We evaluate our approach on various graph workloads, including graph contraction, Markov clustering, and Graph Neural Networks (GNNs), showcasing its practical applicability. For graph analytics applications, AIA demonstrates up to 17.3% time reduction over the software-only implementation, while achieving time reductions of 76.5% for graph contraction and 58.4% for Markov clustering compared to cuSPARSE. For GNN training applications with structured global pruning, our hybrid approach delivers an average 1.43x speedup over the software-only implementation across six benchmark datasets and three architectures (GCN, GIN, GraphSAGE), and a 1.95x speedup over cuSPARSE for GNN workloads, with up to 4.18x gains on large-scale datasets.
Problem

Research questions and friction points this paper is trying to address.

Optimizing sparse matrix multiplication on GPUs with near-memory processing
Reducing irregular memory access bottlenecks in scientific computing applications
Accelerating graph analytics and neural network training workloads
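The irregular accesses named above are classic gather operations: in CSR-based SpGEMM, each nonzero of A selects a row of B through an index array, so the resulting HBM address stream is data-dependent and hard to coalesce. A minimal sketch of that pattern (hypothetical names, not the paper's code):

```python
# Sketch of the indirect (gather) access pattern behind SpGEMM's memory
# bottleneck: A's column indices (a_idx) select rows of B, so the lookups
# through b_ptr/b_idx are data-dependent and irregular.
def gather_b_rows(a_idx, b_ptr, b_idx):
    touched = []
    for k in a_idx:                      # column index from a nonzero of A
        lo, hi = b_ptr[k], b_ptr[k + 1]  # indirect lookup into B's row pointers
        touched.extend(b_idx[lo:hi])     # gather that B row's column indices
    return touched
```

It is this data-dependent address stream that near-memory logic such as AIA can resolve close to the HBM stacks instead of paying full round trips from the GPU cores.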
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hash-based multi-phase SpGEMM algorithm on GPU
Acceleration of Indirect Memory Access near HBM
Hardware-software co-designed framework for SpGEMM optimization
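To make the first contribution concrete, here is an illustrative sketch (not the paper's implementation) of hash-based, two-phase SpGEMM C = A * B on CSR inputs, mirroring the symbolic/numeric split that GPU hash SpGEMM kernels typically use; all names are hypothetical:

```python
# Hash-based two-phase SpGEMM sketch on CSR matrices.
# Phase 1 (symbolic): per-row hash sets count C's nonzeros so output
# storage can be sized exactly. Phase 2 (numeric): per-row hash maps
# accumulate partial products, then rows are emitted in column order.
def spgemm_hash(a_ptr, a_idx, a_val, b_ptr, b_idx, b_val, n_rows):
    # Symbolic phase: count nonzeros per output row.
    c_ptr = [0] * (n_rows + 1)
    for i in range(n_rows):
        cols = set()
        for jj in range(a_ptr[i], a_ptr[i + 1]):
            k = a_idx[jj]
            cols.update(b_idx[b_ptr[k]:b_ptr[k + 1]])
        c_ptr[i + 1] = c_ptr[i] + len(cols)

    # Numeric phase: accumulate a_ik * b_kj into a per-row hash map.
    c_idx, c_val = [], []
    for i in range(n_rows):
        acc = {}
        for jj in range(a_ptr[i], a_ptr[i + 1]):
            k, av = a_idx[jj], a_val[jj]
            for kk in range(b_ptr[k], b_ptr[k + 1]):
                j = b_idx[kk]
                acc[j] = acc.get(j, 0.0) + av * b_val[kk]
        for j in sorted(acc):
            c_idx.append(j)
            c_val.append(acc[j])
    return c_ptr, c_idx, c_val
```

On a GPU, each row's hash table lives in shared memory and a warp processes one row; the sketch only conveys the algorithmic structure the Innovation bullets refer to.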
Shiju Li
SOLAB, SK hynix America, San Jose, USA
Younghoon Min
SOLAB, SK hynix America, San Jose, USA
Hane Yie
SOLAB, SK hynix America, San Jose, USA
Hoshik Kim
SOLAB, SK hynix America, San Jose, USA
Soohong Ahn
AMS, SK hynix, Icheon, Korea
Joonseop Sim
SK hynix
Computer Architecture, Memory Hierarchy, Data Analytics
Chul-Ho Lee
Computer Science, Texas State University
Graph Mining, Machine Learning, Networking, Systems
Jongryool Kim
SOLAB, SK hynix America, San Jose, USA