Insum: Sparse GPU Kernels Simplified and Optimized with Indirect Einsums

📅 2025-10-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
High development overhead for sparse GPU kernels, coupled with the CPU-centric design of mainstream sparse compilers, hinders efficient GPU code generation and joint optimization of hybrid sparse-dense computations. This paper introduces *indirect Einsum*, a unified abstraction that models sparse operations as dense tensor contractions augmented with index mappings. Based on this, we propose two novel sparse formats—GroupCOO and BlockGroupCOO—and design *Insum*, a format-agnostic compiler integrated into PyTorch and optimized for Tensor Core acceleration. Our key contribution is a declarative, indirect indexing mechanism that decouples sparse structure from computation logic, enabling end-to-end automatic optimization. Experimental evaluation across diverse sparse GPU workloads demonstrates 1.14×–3.81× speedup over state-of-the-art baselines, while reducing implementation code size by 202×–4491× compared to hand-tuned kernels.

📝 Abstract
Programming high-performance sparse GPU kernels is notoriously difficult, requiring both substantial effort and deep expertise. Sparse compilers aim to simplify this process, but existing systems fall short in two key ways. First, they are primarily designed for CPUs and rarely produce high-performance GPU code. Second, when computations involve both sparse and dense regions, these compilers often fail to optimize the dense portions effectively. In this paper, we propose a new approach for expressing sparse computations. We start from format-agnostic Einsums over sparse tensors and rewrite them into format-conscious indirect Einsums, which explicitly encode format information by mapping sparse data and metadata onto dense tensor operations through indirect indexing. To execute indirect Einsums, we introduce the Insum compiler, which generates efficient GPU code for these Einsums by lowering to the PyTorch compiler, extended to better support Tensor Core-enabled indirect Einsums. We also present two fixed-length sparse formats, GroupCOO and BlockGroupCOO, designed to fit naturally with indirect Einsums. Our approach achieves 1.14x to 3.81x speedups across a range of sparse GPU applications while reducing lines of code by 202x to 4491x compared to hand-written implementations.
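The core idea — encoding a sparse format by mapping its data and metadata onto a dense computation through indirect indexing — can be illustrated with plain NumPy. The sketch below is a hypothetical illustration, not Insum's actual API: it expresses a COO sparse matrix-vector product as a dense elementwise multiply combined with a gather (through the column map) and a scatter-accumulate (through the row map).

```python
import numpy as np

# Hypothetical sketch (not Insum's API): COO SpMV written as a dense
# computation plus indirect indexing over the format's metadata arrays.
rows = np.array([0, 0, 1, 2, 2])          # COO metadata: row indices
cols = np.array([0, 2, 1, 0, 2])          # COO metadata: column indices
vals = np.array([10.0, 20.0, 30.0, 40.0, 50.0])  # COO data: nonzeros
x = np.array([1.0, 2.0, 3.0])

# Gather x through the column map, multiply with the nonzeros, then
# scatter-accumulate through the row map: y[rows[k]] += vals[k] * x[cols[k]]
y = np.zeros(3)
np.add.at(y, rows, vals * x[cols])

# Check against the dense computation on the materialized matrix.
A = np.zeros((3, 3))
A[rows, cols] = vals
assert np.allclose(y, A @ x)
```

The sparse structure lives entirely in the `rows`/`cols` index arrays, while the arithmetic is an ordinary dense contraction — which is what lets a dense tensor compiler optimize it.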
Problem

Research questions and friction points this paper is trying to address.

Simplifying sparse GPU kernel programming using indirect Einsums
Optimizing dense portions in sparse-dense computation fusion
Generating efficient GPU code for sparse tensor operations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Indirect Einsums map sparse data via indexing
Insum compiler generates efficient GPU code
GroupCOO formats optimize sparse tensor operations
Jaeyeon Won
Massachusetts Institute of Technology, Cambridge, USA
Willow Ahrens
Assistant Professor at Georgia Tech
Programming Languages, High Performance Computing, Sparse Linear Algebra, Compilers
Joel S. Emer
Massachusetts Institute of Technology, Cambridge, USA and NVIDIA, Westford, USA
Saman Amarasinghe
MIT
Compilers, Performance Engineering, Programming Languages, Parallel Computing, Domain-Specific Languages