🤖 AI Summary
High development overhead for sparse GPU kernels, coupled with the CPU-centric design of mainstream sparse compilers, hinders efficient GPU code generation and joint optimization of hybrid sparse-dense computations. This paper introduces *indirect Einsum*, a unified abstraction that models sparse operations as dense tensor contractions augmented with index mappings. Building on this abstraction, we propose two novel sparse formats—GroupCOO and BlockGroupCOO—and design *Insum*, a format-agnostic compiler integrated into PyTorch and optimized for Tensor Core acceleration. Our key contribution is a declarative, indirect indexing mechanism that decouples sparse structure from computation logic, enabling end-to-end automatic optimization. Experimental evaluation across diverse sparse GPU workloads demonstrates 1.14×–3.81× speedups over state-of-the-art baselines, while reducing implementation code size by 202×–4491× compared to hand-tuned kernels.
📝 Abstract
Programming high-performance sparse GPU kernels is notoriously difficult, requiring both substantial effort and deep expertise. Sparse compilers aim to simplify this process, but existing systems fall short in two key ways. First, they are primarily designed for CPUs and rarely produce high-performance GPU code. Second, when computations involve both sparse and dense regions, these compilers often fail to optimize the dense portions effectively. In this paper, we propose a new approach for expressing sparse computations. We start from format-agnostic Einsums over sparse tensors and rewrite them into format-conscious indirect Einsums, which explicitly encode format information by mapping sparse data and metadata onto dense tensor operations through indirect indexing. To execute indirect Einsums, we introduce the Insum compiler, which generates efficient GPU code for these Einsums by lowering to the PyTorch compiler, extended to better support Tensor Core-enabled indirect Einsums. We also present two fixed-length sparse formats, GroupCOO and BlockGroupCOO, designed to fit naturally with indirect Einsums. Our approach achieves 1.14x to 3.81x speedups across a range of sparse GPU applications while reducing lines of code by 202x to 4491x compared to hand-written implementations.
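To make the core idea concrete, here is a minimal, hypothetical sketch (not the paper's actual compiler or API) of what an "indirect Einsum" expresses: a sparse matrix-vector product written as a dense contraction over the nonzero index `n`, with the sparse structure supplied separately through index arrays, i.e. `y[row[n]] += val[n] * x[col[n]]`. NumPy's scatter-add stands in for the generated GPU code; the variable names are illustrative.

```python
import numpy as np

# COO storage of a 3x3 sparse matrix with 4 nonzeros.
# The format metadata (row, col) is separate from the values,
# mirroring how indirect Einsums decouple structure from computation.
row = np.array([0, 1, 2, 2])
col = np.array([1, 0, 1, 2])
val = np.array([2.0, 3.0, 4.0, 5.0])
x = np.array([1.0, 2.0, 3.0])

# Dense elementwise product over the nonzero index n, then an
# indirect scatter-add into y: y[row[n]] += val[n] * x[col[n]].
y = np.zeros(3)
np.add.at(y, row, val * x[col])

# Sanity check against the equivalent dense computation.
A = np.zeros((3, 3))
A[row, col] = val
assert np.allclose(y, A @ x)
```

The point of the rewrite is that every operation in the body is a dense tensor operation (elementwise multiply, gather, scatter-add), so a dense tensor compiler such as the PyTorch compiler can optimize it without needing built-in knowledge of sparse formats.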