Sink-Aware Pruning for Diffusion Language Models

📅 2026-02-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
Diffusion language models suffer from high inference costs due to their iterative denoising mechanism, and existing pruning approaches, borrowed from autoregressive models, typically retain attention sink tokens even though those sinks exhibit significant temporal instability in the diffusion setting. This work is the first to reveal the pronounced temporal instability of attention sinks in diffusion language models, and it proposes a training-free pruning method that analyzes how the dominant sink positions vary across denoising timesteps to automatically identify and remove tokens with low structural importance and high fluctuation. By departing from the conventional autoregressive pruning paradigm of always preserving sinks, the proposed approach achieves a superior trade-off between generation quality and efficiency, significantly outperforming strong baselines under identical computational budgets.

📝 Abstract
Diffusion Language Models (DLMs) incur high inference cost due to iterative denoising, motivating efficient pruning. Existing pruning heuristics, largely inherited from autoregressive (AR) LLMs, typically preserve attention sink tokens because AR sinks serve as stable global anchors. We show that this assumption does not hold for DLMs: the attention-sink position exhibits substantially higher variance over the full generation trajectory (measured by how the dominant sink locations shift across timesteps), indicating that sinks are often transient and less structurally essential than in AR models. Based on this observation, we propose Sink-Aware Pruning, which automatically identifies and prunes unstable sinks in DLMs (prior studies usually keep sinks for AR LLMs). Without retraining, our method achieves a better quality-efficiency trade-off and outperforms strong prior pruning baselines under matched compute. Our code is available at https://github.com/VILA-Lab/Sink-Aware-Pruning.
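The core measurement in the abstract, tracking how the dominant sink position shifts across denoising timesteps and pruning sinks that are only transiently dominant, can be sketched roughly as follows. The tensor layout, the stability threshold, and all function names are illustrative assumptions, not the authors' released implementation.

```python
# Hypothetical sketch of sink-instability scoring for a diffusion LM
# (layout, threshold, and names are assumptions, not the paper's code).
import numpy as np

def sink_stability(attn):
    """attn: [T, N, N] head-averaged attention maps over T denoising
    timesteps. Returns the dominant sink position at each timestep and,
    for every token, the fraction of timesteps on which it is the sink."""
    T, N, _ = attn.shape
    received = attn.sum(axis=1)       # [T, N] attention mass each token receives
    sinks = received.argmax(axis=1)   # dominant sink position per timestep
    frac = np.bincount(sinks, minlength=N) / T
    return sinks, frac

def prunable_sinks(frac, stability_thresh=0.5):
    """Tokens that act as a sink on some timesteps but are dominant on
    fewer than `stability_thresh` of them are treated as transient
    (unstable) sinks and marked for pruning."""
    is_sink = frac > 0
    transient = is_sink & (frac < stability_thresh)
    return np.flatnonzero(transient)  # indices of prunable sink tokens

# Toy example: token 0 is the sink on 8 of 10 timesteps (stable),
# token 2 only on 2 of 10 (transient, hence prunable).
attn = np.full((10, 4, 4), 0.1)
for t in range(10):
    attn[t, :, 0 if t < 8 else 2] += 1.0
_, frac = sink_stability(attn)
print(prunable_sinks(frac))  # → [2]
```

A real system would score sinks per layer and per head and fold the stability signal into a compute budget, but the same argmax-over-timesteps bookkeeping is the essential step.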
Problem

Research questions and friction points this paper is trying to address.

Diffusion Language Models
pruning
attention sink
inference efficiency
model compression
Innovation

Methods, ideas, or system contributions that make the work stand out.

Diffusion Language Models
Pruning
Attention Sinks
Inference Efficiency
Sink-Aware Pruning