🤖 AI Summary
To address performance bottlenecks in sparse general matrix-matrix multiplication (SpGEMM) on GPUs—primarily caused by irregular memory access patterns—this work proposes a hardware-software co-designed acceleration framework. It introduces AIA, a novel customized near-HBM in-memory processing unit, coupled with a hash-driven multi-phase computation scheduling strategy. This design moves beyond purely software-based optimizations by enabling deep memory-compute co-scheduling within heterogeneous GPU architectures. Experimental evaluation demonstrates time reductions of 76.5% and 58.4% over cuSPARSE for graph contraction and Markov clustering, respectively. For GNN training, average speedups reach 1.43× over the software-only implementation and 1.95× over cuSPARSE, rising to 4.18× on large-scale datasets. The framework establishes an efficient, scalable in-memory acceleration paradigm for SpGEMM in graph analytics and GNN workloads.
📝 Abstract
Sparse General Matrix-Matrix Multiplication (SpGEMM) is a fundamental operation in numerous scientific computing and data analytics applications, and is often bottlenecked by irregular memory access patterns. This paper presents hash-based multi-phase SpGEMM on GPUs together with Acceleration of Indirect Memory Access (AIA), a novel custom near-memory processing technique that optimizes SpGEMM within GPU HBM. Our hardware-software co-designed framework demonstrates significant performance improvements over state-of-the-art methods, particularly on complex, application-specific workloads. We evaluate our approach on a range of graph workloads, including graph contraction, Markov clustering, and Graph Neural Networks (GNNs), showcasing its practical applicability. For graph analytics applications, AIA achieves up to 17.3% time reduction over the software-only implementation, and time reductions of 76.5% for graph contraction and 58.4% for Markov clustering compared to cuSPARSE. For GNN training with structured global pruning, our hybrid approach delivers an average 1.43x speedup over the software-only implementation across six benchmark datasets and three architectures (GCN, GIN, GraphSAGE), and a 1.95x speedup over cuSPARSE on GNN workloads, with gains of up to 4.18x on large-scale datasets.
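To make the hash-based accumulation idea concrete, the sketch below (an illustration of the general technique, not the paper's GPU implementation; the function name and CSR argument layout are our own) shows how one output row of C = A × B can be formed by merging partial products into a per-row hash table keyed by column index, which is what makes the memory accesses into B irregular and indirect:

```python
def spgemm_row_hash(a_cols, a_vals, b_indptr, b_cols, b_vals):
    """Compute one output row of C = A @ B.

    a_cols, a_vals: column indices and values of one CSR row of A.
    b_indptr, b_cols, b_vals: CSR arrays of B.
    Returns sorted column indices and values of the output row.
    """
    table = {}  # hash table: output column index j -> accumulated value
    for k, a_val in zip(a_cols, a_vals):
        # Indirect access: row k of B is selected by A's column index.
        for idx in range(b_indptr[k], b_indptr[k + 1]):
            j = b_cols[idx]
            table[j] = table.get(j, 0.0) + a_val * b_vals[idx]
    cols = sorted(table)
    return cols, [table[j] for j in cols]
```

The gather of `b_indptr[k]` and the scatter into `table` are exactly the indirect memory accesses that a near-HBM unit like AIA can service close to memory instead of paying full round-trip latency per element.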