A Unified Sparse Attention via Multi-Granularity Compression

📅 2025-12-15
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the inference bottleneck caused by the quadratic computational complexity $O(n^2)$ of self-attention in long-context scenarios, this paper proposes UniSparse, a unified sparse attention mechanism. Methodologically, UniSparse introduces three key innovations: (1) a novel composite token abstraction enabling training-agnostic, plug-and-play cross-modal sparsification; (2) multi-granularity context compression coupled with dynamic block-level sparse selection to yield hardware-friendly sparsity patterns; and (3) GPU-optimized sparse kernels for efficient execution. Empirically, UniSparse achieves ≥99% accuracy relative to dense attention while accelerating attention computation by up to 2.61× over FlashAttention. It significantly outperforms state-of-the-art methods—including MInference and XAttention—across diverse long-context tasks such as multi-turn dialogue and program analysis, demonstrating strong generalization and practical utility.
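The paper's kernels are not reproduced here, but the general idea of block-level sparse selection can be illustrated with a minimal pure-Python sketch: compress each key block into a coarse summary, score the summaries against the query, and run attention only over the top-scoring blocks. Function names (`select_blocks`, `sparse_attention`) and the mean-pooling compressor are illustrative assumptions, not UniSparse's actual algorithm.

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def block_means(keys, block_size):
    # Compress the keys into one coarse summary vector (mean) per block.
    blocks = []
    for start in range(0, len(keys), block_size):
        blk = keys[start:start + block_size]
        dim = len(blk[0])
        blocks.append([sum(v[d] for v in blk) / len(blk) for d in range(dim)])
    return blocks

def select_blocks(query, keys, block_size, top_k):
    # Score each block summary against the query; keep the top_k blocks.
    summaries = block_means(keys, block_size)
    scores = [sum(q * s for q, s in zip(query, summ)) for summ in summaries]
    ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return sorted(ranked[:top_k])

def sparse_attention(query, keys, values, block_size, top_k):
    # Full attention only inside the selected blocks; all other
    # positions are skipped, so cost scales with top_k, not sequence length.
    chosen = select_blocks(query, keys, block_size, top_k)
    idx = [i for b in chosen
           for i in range(b * block_size, min((b + 1) * block_size, len(keys)))]
    scale = math.sqrt(len(query))
    logits = [sum(q * k for q, k in zip(query, keys[i])) / scale for i in idx]
    weights = softmax(logits)
    dim = len(values[0])
    return [sum(w * values[i][d] for w, i in zip(weights, idx)) for d in range(dim)]
```

Selecting whole contiguous blocks (rather than individual tokens) is what makes the resulting sparsity pattern hardware-friendly: each selected block maps to a dense tile that a GPU kernel can process without irregular gathers.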

📝 Abstract
Efficient long-context understanding and reasoning are increasingly vital for large language model (LLM) applications such as multi-turn dialogue and program analysis. However, the core self-attention mechanism scales quadratically with sequence length, creating a fundamental computational bottleneck. Existing sparse attention methods alleviate this issue but face trade-offs: training-based methods are costly and cannot be directly applied as acceleration plugins for other models, while inference-time methods often compromise efficiency or cross-modal generality. To address these limitations, we present UniSparse, a unified mechanism that introduces the notion of composite tokens--compact representations that aggregate multi-granularity contextual information. Building on this abstraction, UniSparse dynamically constructs sparse attention through multi-granularity compression and block-level selection, enabling efficient and hardware-friendly execution on GPU. Across multiple modalities and tasks ranging from synthetic benchmarks to real-world applications, UniSparse consistently surpasses state-of-the-art sparse attention methods (e.g., MInference, XAttention, FlexPrefill) in both accuracy and efficiency, achieving $\geq$ 99% of full-attention accuracy and up to 2.61$\times$ faster attention computation than FlashAttention.
Problem

Research questions and friction points this paper is trying to address.

Addresses quadratic scaling of self-attention in LLMs
Unifies training and inference sparse attention methods
Enables efficient cross-modal long-context understanding
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unified sparse attention via composite tokens
Multi-granularity compression for dynamic construction
Hardware-friendly GPU execution with block-level selection
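The multi-granularity compression idea behind composite tokens can be sketched as pooling the same context at several window sizes, so that each query can later match against both fine and coarse summaries. This is a minimal illustrative sketch under that assumption; `composite_tokens` and mean pooling are stand-ins, not the paper's actual construction.

```python
def mean_pool(vectors):
    # Average a window of vectors into a single summary vector.
    dim = len(vectors[0])
    return [sum(v[d] for v in vectors) / len(vectors) for d in range(dim)]

def composite_tokens(context, granularities):
    # Pool the context once per granularity (window size). Each entry
    # records (granularity, start offset, summary vector); together they
    # form a compact multi-granularity representation of the context.
    tokens = []
    for g in granularities:
        for start in range(0, len(context), g):
            window = context[start:start + g]
            tokens.append((g, start, mean_pool(window)))
    return tokens
```

For a context of 8 tokens and granularities (2, 4), this yields 4 fine summaries plus 2 coarse ones: far fewer vectors than the original context, which is what lets the selection step stay cheap.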
Siran Liu
Peking University
Zane Cao
SCITIX (SGP) TECH PTE. LTD.
Yongchao He
Tsinghua University
AI Infra