DyLLM: Efficient Diffusion LLM Inference via Saliency-based Token Selection and Partial Attention

📅 2026-03-09
🤖 AI Summary
Diffusion language models suffer from inefficient inference due to redundant recomputation of the entire sequence during iterative denoising. This work proposes a training-free acceleration framework that, for the first time, exploits the temporal sparsity of token representations across diffusion steps. By dynamically identifying salient tokens via cosine similarity of attention contexts, the method performs full feed-forward and attention computations only on these tokens, while reusing cached activation values for the rest. Integrating saliency-based token selection, partial attention mechanisms, and activation caching, the approach achieves up to 9.6× higher throughput on multiple reasoning and code generation benchmarks, while largely preserving the original accuracy of state-of-the-art models such as LLaDA and Dream.

📝 Abstract
Masked Diffusion Language Models (MDLMs) enable parallel token decoding, providing a promising alternative to the sequential nature of autoregressive generation. However, their iterative denoising process remains computationally expensive because it repeatedly processes the entire sequence at every step. We observe that across these diffusion steps, most token representations remain stable; only a small subset, which we term salient tokens, contributes meaningfully to the next update. Leveraging this temporal sparsity, we present DyLLM, a training-free inference framework that accelerates decoding by selectively computing only these salient tokens. DyLLM identifies saliency by measuring the cosine similarity of attention contexts between adjacent denoising steps. It recomputes feed-forward and attention operations only for salient tokens while reusing cached activations for the remainder. Across diverse reasoning and code-generation benchmarks, DyLLM achieves up to 9.6x higher throughput while largely preserving the baseline accuracy of state-of-the-art models like LLaDA and Dream.
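The saliency test described in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: the function names, the threshold value, and the per-token context shape are all assumptions made for illustration. The idea is that a token whose attention context barely rotates between adjacent denoising steps (cosine similarity near 1) is skipped and served from cache, while the rest are recomputed.

```python
import numpy as np

def select_salient_tokens(ctx_prev, ctx_curr, threshold=0.95):
    """Mark tokens whose attention context changed between adjacent steps.

    ctx_prev, ctx_curr: (seq_len, hidden_dim) context vectors from
    denoising steps t-1 and t. A token is salient when its cosine
    similarity across steps falls below the threshold.
    """
    num = (ctx_prev * ctx_curr).sum(axis=-1)
    denom = (np.linalg.norm(ctx_prev, axis=-1)
             * np.linalg.norm(ctx_curr, axis=-1) + 1e-8)
    sim = num / denom
    return sim < threshold  # low similarity => changed => salient

def denoise_step(ctx_prev, ctx_curr, cached_out, compute_fn, threshold=0.95):
    """Recompute only salient tokens; reuse cached outputs for the rest."""
    salient = select_salient_tokens(ctx_prev, ctx_curr, threshold)
    out = cached_out.copy()
    if salient.any():
        # Full feed-forward/attention compute only on the salient subset.
        out[salient] = compute_fn(ctx_curr[salient])
    return out, salient
```

In the sketch, `compute_fn` stands in for the model's full per-token compute; the real method additionally restricts attention itself to the selected tokens (the "partial attention" of the title), which this illustration omits.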
Problem

Research questions and friction points this paper is trying to address.

Diffusion Language Models
Inference Efficiency
Token Selection
Computational Cost
Parallel Decoding
Innovation

Methods, ideas, or system contributions that make the work stand out.

Diffusion Language Models
Token Saliency
Efficient Inference
Partial Attention
Temporal Sparsity
Authors: Younjoo Lee, Junghoo Lee, Seungkyun Dan, Jaiyoung Park, Jung Ho Ahn
Affiliation: Seoul National University
Computer Architecture