d$^2$Cache: Accelerating Diffusion-Based LLMs via Dual Adaptive Caching

📅 2025-09-27
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Diffusion-based large language models (dLLMs) suffer from inefficient inference due to their reliance on bidirectional attention, which precludes standard KV caching. To address this, we propose a **training-free, dual-adaptive KV caching framework** that enables efficient inference via a two-stage mechanism: (1) fine-grained token importance estimation dynamically determines optimal KV storage positions; and (2) adaptive KV state updates during generation support quasi left-to-right decoding, mitigating overconfidence in late-stage tokens. Our method is fully compatible with existing dLLMs (e.g., LLaDA, Dream), requires no architectural or parametric modifications, and is deployed solely at inference time. Experiments demonstrate consistent improvements in both generation quality and inference speed—up to 2.1× acceleration—across diverse benchmarks. The implementation is publicly available.
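The two-stage mechanism described above can be sketched in miniature: score each token's importance (here, by the total attention it receives, one common proxy; the paper's exact criterion is specified in the full text), refresh the KV states of only the top-scoring tokens, and reuse the cached states for everything else. All names below (`select_tokens_to_update`, `decode_step`, `recompute_fn`) are illustrative, not from the d$^2$Cache implementation.

```python
import numpy as np

def select_tokens_to_update(attn_weights: np.ndarray, budget: int) -> np.ndarray:
    """Score each token by the total attention it receives, then pick the
    top-`budget` tokens whose KV states will be refreshed this step.

    attn_weights: (seq_len, seq_len) bidirectional attention matrix.
    """
    importance = attn_weights.sum(axis=0)          # column sum: attention received
    return np.argsort(importance)[::-1][:budget]   # indices of most important tokens

def decode_step(kv_cache: dict, recompute_fn, attn_weights: np.ndarray, budget: int) -> dict:
    """One approximate-cache decoding step: recompute KV states only for the
    selected tokens; all remaining entries are reused from the cache as-is."""
    for idx in select_tokens_to_update(attn_weights, budget):
        kv_cache[idx] = recompute_fn(idx)          # fresh KV for important tokens
    return kv_cache
```

The point of the sketch is the asymmetry: per step, only a small budget of tokens pays the recomputation cost, which is where the reported speedup would come from.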

📝 Abstract
Diffusion-based large language models (dLLMs), despite their promising performance, still suffer from inferior inference efficiency. This is because dLLMs rely on bidirectional attention and cannot directly benefit from the standard key-value (KV) cache as autoregressive models (ARMs) do. To tackle this issue, we introduce *Dual aDaptive Cache* (d$^2$Cache), a training-free approximate KV cache framework for accelerating dLLM inference. d$^2$Cache features a two-stage fine-grained selection strategy to identify tokens and adaptively update their KV states at each decoding step, while caching the KV states of the remaining tokens for reuse. Furthermore, d$^2$Cache naturally offers a more reliable decoding alternative, which can enable quasi left-to-right generation and mitigate premature overconfidence in tokens at the end of the sequence. Extensive experimental results on two representative dLLMs (i.e., LLaDA and Dream) demonstrate that d$^2$Cache not only achieves substantial inference speedups, but also yields consistent improvements in generation quality. The code is available at https://github.com/Kamichanw/d2Cache.
Problem

Research questions and friction points this paper is trying to address.

Accelerating diffusion-based LLM inference efficiency
Enabling KV cache for bidirectional attention models
Mitigating premature overconfidence in sequence generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Training-free KV cache framework for diffusion LLMs
Two-stage fine-grained token selection strategy
Adaptive KV state updates with caching reuse
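The quasi left-to-right decoding idea can also be sketched: plain confidence-based unmasking tends to commit late-sequence tokens prematurely, so damping confidence by position biases the commit order toward the front of the sequence. The exponential positional damping below is an illustrative choice for the sketch, not the paper's actual selection rule.

```python
import numpy as np

def pick_positions_to_decode(confidence: np.ndarray, mask: np.ndarray,
                             k: int, position_bias: float = 0.05) -> np.ndarray:
    """Choose which masked positions to commit at this decoding step.

    confidence: (seq_len,) model confidence per position
    mask: (seq_len,) True where the token is still masked
    position_bias: strength of the (illustrative) left-to-right preference
    """
    positions = np.arange(len(confidence))
    score = confidence * np.exp(-position_bias * positions)  # damp late positions
    score = np.where(mask, score, -np.inf)                   # only masked slots compete
    return np.argsort(score)[::-1][:k]                       # top-k positions to commit
```

With `position_bias = 0`, this reduces to ordinary confidence-ranked unmasking; increasing it trades raw confidence for a more left-to-right commit order.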
👥 Authors
Yuchu Jiang
Southeast University
Large Language Models · Computer Vision
Yue Cai
Key Laboratory of New Generation Artificial Intelligence Technology and Its Interdisciplinary Applications (Southeast University), Ministry of Education, Southeast University
Xiangzhong Luo
Nanyang Technological University
Jiale Fu
Southeast University
speculative decoding · LLM reasoning
Jiarui Wang
Key Laboratory of New Generation Artificial Intelligence Technology and Its Interdisciplinary Applications (Southeast University), Ministry of Education, Southeast University
Chonghan Liu
Qiyuan Tech
Xu Yang
Key Laboratory of New Generation Artificial Intelligence Technology and Its Interdisciplinary Applications (Southeast University), Ministry of Education, Southeast University