Fused3S: Fast Sparse Attention on Tensor Cores

📅 2025-05-12
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Sparse attention’s three-stage computation (SDDMM, softmax, and SpMM) suffers from low tensor-core utilization and high data-movement overhead on GPUs. Method: This paper proposes Fused3S, the first algorithm to fuse all three stages end to end while preserving unstructured sparsity. It jointly optimizes memory access through custom CUDA tensor-core sparse operator fusion, register-level data-reuse scheduling, and sparse block-layout-aware kernel design. Contribution/Results: On H100 and A30 GPUs, Fused3S achieves 1.6×–16.3× speedup over state-of-the-art sparse attention implementations. Integrated into Graph Transformer inference, it delivers 1.05×–5.36× end-to-end acceleration across single-graph and batched-graph workloads and multiple GPU architectures. The approach balances generality and performance without sacrificing sparsity structure or hardware efficiency.

📝 Abstract
Sparse attention is a core building block in many leading neural network models, from graph-structured learning to sparse sequence modeling. It can be decomposed into a sequence of three sparse matrix operations (3S): sampled dense-dense matrix multiplication (SDDMM), softmax normalization, and sparse matrix multiplication (SpMM). Efficiently executing the 3S computational pattern on modern GPUs remains challenging due to (a) the mismatch between unstructured sparsity and tensor cores optimized for dense operations, and (b) the high cost of data movement. Previous works have optimized these sparse operations individually or addressed one of these challenges. This paper introduces Fused3S, the first fused 3S algorithm that jointly maximizes tensor core utilization and minimizes data movement. Across real-world graph datasets, Fused3S achieves 1.6–16.3× and 1.5–14× speedup over state-of-the-art on H100 and A30 GPUs. Furthermore, integrating Fused3S into Graph Transformer inference accelerates end-to-end performance by 1.05–5.36×, consistently outperforming all 3S baselines across diverse datasets (single and batched graphs) and GPU architectures.
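To make the 3S decomposition concrete, here is a minimal unfused reference of the pipeline in NumPy/SciPy: scores are computed only at the nonzero positions of a sparsity mask (SDDMM), normalized row-wise over those stored entries (sparse softmax), and multiplied with the dense values (SpMM). This is an illustrative sketch of the computational pattern, not the paper's fused CUDA kernel; the function name and the `mask` interface are assumptions.

```python
import numpy as np
import scipy.sparse as sp

def sparse_attention_3s(Q, K, V, mask):
    """Unfused 3S pipeline: SDDMM -> sparse softmax -> SpMM.

    mask: scipy.sparse matrix whose nonzero pattern marks the
    allowed attention positions (hypothetical toy interface).
    """
    mask = mask.tocsr()
    rows, cols = mask.nonzero()
    # 1) SDDMM: evaluate (Q @ K^T) / sqrt(d) only at mask's nonzeros.
    scores = np.einsum("id,id->i", Q[rows], K[cols]) / np.sqrt(Q.shape[1])
    S = sp.csr_matrix((scores, (rows, cols)), shape=mask.shape)
    # 2) Softmax over each row's stored entries (nonzero pattern only).
    A = S.copy()
    for i in range(A.shape[0]):
        lo, hi = A.indptr[i], A.indptr[i + 1]
        if hi > lo:
            row = np.exp(A.data[lo:hi] - A.data[lo:hi].max())
            A.data[lo:hi] = row / row.sum()
    # 3) SpMM: sparse attention weights times dense value matrix.
    return A @ V
```

Run as three separate kernels, each stage writes its sparse intermediate (`S`, then `A`) to memory and reads it back, which is exactly the data-movement cost the paper's fusion eliminates.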
Problem

Research questions and friction points this paper is trying to address.

Optimizing sparse attention for tensor cores
Reducing data movement in sparse matrix operations
Fusing three sparse matrix operations efficiently
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fused3S integrates three sparse matrix operations
Maximizes tensor core utilization efficiently
Minimizes data movement for speedup
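The fusion idea behind these points can be sketched as a per-query-row computation in which the SDDMM scores, softmax weights, and SpMM partial sums never leave local buffers (registers/shared memory on a GPU). A conceptual NumPy sketch, not the paper's CUDA implementation; `nbr_idx` (the column indices of the row's nonzeros) is a hypothetical helper argument:

```python
import numpy as np

def fused_3s_row(q_i, K, V, nbr_idx):
    """All three stages for one query row in a single pass:
    no sparse intermediate matrix is materialized in memory."""
    # SDDMM slice: scores against this row's neighbors only.
    scores = K[nbr_idx] @ q_i / np.sqrt(q_i.shape[0])
    # Sparse softmax over those scores, kept in a local buffer.
    w = np.exp(scores - scores.max())
    w /= w.sum()
    # SpMM slice: weighted sum of the neighbors' value rows.
    return w @ V[nbr_idx]
```

In the unfused version each stage round-trips a sparse intermediate through global memory; fusing the stages per row keeps those intermediates on-chip, which is where the data-movement savings come from.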