SHIRO: Near-Optimal Communication Strategies for Distributed Sparse Matrix Multiplication

📅 2025-12-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
Distributed sparse matrix multiplication (SpMM) suffers from high communication overhead and poor scalability. This paper proposes a communication optimization framework that jointly leverages sparsity awareness and a two-level network hierarchy. It introduces the first sparse-pattern-driven, fine-grained communication pruning technique, tightly coupling sparsity-aware scheduling with GPU cluster infrastructure—specifically, intra-node NVLink and inter-node InfiniBand. By incorporating communication-topology-aware scheduling and hierarchical aggregation, the method significantly reduces redundant data transfers across slow inter-node links. Evaluated on 128 GPUs, it achieves geometric-mean speedups of 221.5×, 56.0×, 23.4×, and 8.8× over CAGNET, SPA, BCL, and CoLa, respectively, while demonstrating strong linear scalability.
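The sparsity-pattern-driven pruning idea can be sketched as follows. Under a simple 1-D row partition of the sparse matrix A, a rank computing its block of A·B only needs the rows of the dense matrix B indexed by the nonzero column indices in its local A block, so all other rows need not be communicated. This is a minimal illustrative sketch, not the paper's implementation; the partitioning scheme, density, and function names are assumptions.

```python
import numpy as np
from scipy.sparse import random as sparse_random

def required_rows(A_block_csr):
    """Column indices with at least one nonzero in this rank's A block.
    These are the only rows of the dense matrix B this rank must receive."""
    return np.unique(A_block_csr.indices)

rng = np.random.default_rng(0)
n, nranks = 1024, 4  # illustrative sizes, not from the paper
A = sparse_random(n, n, density=0.001, format="csr", random_state=rng)

rows_per_rank = n // nranks
total_needed = 0
for r in range(nranks):
    block = A[r * rows_per_rank:(r + 1) * rows_per_rank]
    total_needed += len(required_rows(block))

# Baseline: every rank fetches all n rows of B regardless of sparsity.
dense_volume = nranks * n
print(f"communicated fraction after pruning: {total_needed / dense_volume:.2f}")
```

At low densities most rows of B are never touched by a given block of A, so the pruned communication volume is a small fraction of the dense all-gather baseline.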

📝 Abstract
Distributed Sparse Matrix-Matrix Multiplication (SpMM) is a fundamental operation in numerous high-performance computing and deep learning applications. The major performance bottleneck in distributed SpMM lies in the substantial communication overhead, which limits both performance and scalability. In this paper, we identify and analyze sources of inefficient communication in existing distributed SpMM implementations at two levels and address these inefficiencies by proposing: (1) a fine-grained, sparsity-aware communication strategy that reduces communication overhead by exploiting the sparsity pattern of the sparse matrix, and (2) a hierarchical communication strategy that integrates the sparsity-aware strategy with the common two-tier network architectures in GPU-accelerated systems, to reduce redundant communication across slow network links. We implement these optimizations in a comprehensive distributed SpMM framework, SHIRO. Extensive evaluations on real-world datasets show that our framework demonstrates strong scalability up to 128 GPUs, achieving geometric mean speedups of 221.5×, 56.0×, 23.4×, and 8.8× over four state-of-the-art baselines (CAGNET, SPA, BCL, and CoLa, respectively) at this scale.
Problem

Research questions and friction points this paper is trying to address.

Addresses high communication overhead in distributed sparse matrix multiplication.
Proposes sparsity-aware and hierarchical strategies to reduce network inefficiencies.
Enhances scalability and performance for GPU-accelerated systems.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fine-grained sparsity-aware communication strategy reduces overhead.
Hierarchical communication strategy integrates with two-tier GPU network architectures.
Framework demonstrates strong scalability up to 128 GPUs with significant speedups.
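The benefit of the hierarchical, topology-aware strategy can be illustrated with a toy cost model: instead of every GPU exchanging data directly with every remote GPU over the slow inter-node link, data is first aggregated within each node over NVLink, and only one representative per node communicates over InfiniBand. The function names and the pairwise-message counting below are illustrative assumptions, not the paper's cost model.

```python
def flat_inter_node_messages(gpus_per_node: int, nodes: int) -> int:
    """Naive all-to-all: each GPU sends to every GPU on a *different* node,
    so every such pair crosses the slow inter-node link."""
    total_gpus = gpus_per_node * nodes
    return total_gpus * (total_gpus - gpus_per_node)

def hierarchical_inter_node_messages(gpus_per_node: int, nodes: int) -> int:
    """Two-level scheme: aggregate within each node over fast NVLink first,
    then one representative per node exchanges over InfiniBand."""
    return nodes * (nodes - 1)

# 8 GPUs/node x 16 nodes = 128 GPUs, matching the paper's largest scale.
g, n = 8, 16
print(flat_inter_node_messages(g, n), hierarchical_inter_node_messages(g, n))
```

In this toy model the number of messages crossing the slow tier drops from 15,360 to 240 at 128 GPUs, which is the intuition behind pushing redundant transfers onto the fast intra-node links.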