Prism: Spectral-Aware Block-Sparse Attention

📅 2026-02-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing block-sparse attention methods rely on coarse-grained attention to estimate block importance, but often incur high computational overhead due to expensive token-level search. This work proposes a training-free, spectral-aware block selection approach that addresses this limitation. Through theoretical analysis, we identify that the interaction between mean pooling and Rotary Position Embedding (RoPE) leads to attenuation of high-frequency positional information. To mitigate this, we decompose block selection into high- and low-frequency branches and introduce an energy-driven temperature calibration mechanism to recover positional signals from pooled representations. Our method efficiently and accurately estimates block importance using only block-level operations, achieving up to 5.1× inference speedup while maintaining accuracy comparable to full attention.
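The low-pass effect the summary describes can be sketched numerically: averaging B consecutive RoPE rotations e^{i·m·θ} scales each frequency pair by the Dirichlet-kernel magnitude |sin(Bθ/2)/(B·sin(θ/2))|, which shrinks toward zero for large θ. A minimal sketch (the function name and parameters are illustrative, not from the paper):

```python
import numpy as np

# Illustrative sketch: mean pooling over a block of RoPE-rotated positions
# multiplies each frequency pair's magnitude by a Dirichlet-kernel factor,
# which strongly attenuates high-frequency (small-index) dimensions.

def rope_pooling_attenuation(block_size, dim, base=10000.0):
    # Standard RoPE per-pair frequencies: theta_i = base^(-2i/dim)
    theta = base ** (-2.0 * np.arange(dim // 2) / dim)
    # |mean of e^{i m theta} over block_size consecutive m| (Dirichlet kernel)
    num = np.abs(np.sin(block_size * theta / 2.0))
    den = block_size * np.abs(np.sin(theta / 2.0))
    return num / den

att = rope_pooling_attenuation(block_size=64, dim=128)
print(att[0])   # highest-frequency pair: heavily attenuated
print(att[-1])  # lowest-frequency pair: nearly preserved
```

The highest-frequency dimensions, which carry the local positional cues behind slash-shaped attention patterns, are almost erased by pooling, while low-frequency dimensions pass through nearly intact.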

📝 Abstract
Block-sparse attention is promising for accelerating long-context LLM pre-filling, yet identifying relevant blocks efficiently remains a bottleneck. Existing methods typically employ coarse-grained attention as a proxy for block importance estimation, but often resort to expensive token-level searching or scoring, resulting in significant selection overhead. In this work, we trace the inaccuracy of standard coarse-grained attention via mean pooling to a theoretical root cause: the interaction between mean pooling and Rotary Positional Embeddings (RoPE). We prove that mean pooling acts as a low-pass filter that induces destructive interference in high-frequency dimensions, effectively creating a "blind spot" for local positional information (e.g., slash patterns). To address this, we introduce Prism, a training-free spectral-aware approach that decomposes block selection into high-frequency and low-frequency branches. By applying energy-based temperature calibration, Prism restores the attenuated positional signals directly from pooled representations, enabling block importance estimation using purely block-level operations, thereby improving efficiency. Extensive evaluations confirm that Prism maintains accuracy parity with full attention while delivering up to 5.1× speedup.
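The two-branch selection idea can be sketched as follows. This is a hypothetical reading of the abstract, not the paper's exact formulation: head dimensions are split into a high-frequency and a low-frequency branch, and each branch's block logits are divided by an energy-derived temperature so that the pooling-attenuated high-frequency branch is rescaled back up before ranking blocks. The function `two_branch_block_scores` and its `split` parameter are assumptions for illustration.

```python
import numpy as np

# Hypothetical sketch of spectral two-branch block scoring with an
# energy-based temperature (names and split scheme are illustrative).

def two_branch_block_scores(q_pool, k_pool, split, eps=1e-6):
    # q_pool: (d,) mean-pooled query; k_pool: (num_blocks, d) pooled keys.
    # Dims [:split] are treated as high-frequency, the rest as low-frequency.
    q_hi, q_lo = q_pool[:split], q_pool[split:]
    k_hi, k_lo = k_pool[:, :split], k_pool[:, split:]
    # Energy-based temperature: low residual energy in a branch yields a
    # small temperature, amplifying (recalibrating) that branch's logits.
    t_hi = np.sqrt(np.mean(k_hi ** 2) + eps)
    t_lo = np.sqrt(np.mean(k_lo ** 2) + eps)
    # Combine branch scores; rank blocks by the result and keep the top-k.
    return k_hi @ q_hi / t_hi + k_lo @ q_lo / t_lo

rng = np.random.default_rng(0)
scores = two_branch_block_scores(rng.normal(size=64),
                                 rng.normal(size=(16, 64)), split=32)
top_blocks = np.argsort(scores)[-4:]  # indices of the 4 best-scoring blocks
```

Everything here operates on pooled (block-level) tensors only, which is the source of the claimed efficiency: no token-level attention is evaluated during selection.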
Problem

Research questions and friction points this paper is trying to address.

block-sparse attention
long-context LLM
coarse-grained attention
block importance estimation
selection overhead
Innovation

Methods, ideas, or system contributions that make the work stand out.

block-sparse attention
spectral-aware
Rotary Positional Embeddings
mean pooling
training-free