SPA-Cache: Singular Proxies for Adaptive Caching in Diffusion Language Models

📅 2026-01-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
Diffusion language models, due to their non-causal architecture, cannot leverage standard key-value (KV) caching, necessitating full recomputation of hidden states at each decoding step. Existing caching approaches suffer from high overhead in per-token update identification and rigid, uniform cache budget allocation. To address these limitations, this work proposes a joint optimization strategy that employs a low-dimensional singular proxy to rapidly identify critical tokens requiring updates and adaptively reduces update frequency for stable layers, enabling dynamic cache budget allocation. The method substantially reduces computational overhead while preserving generation quality, achieving up to an 8× speedup over naive decoding and improving throughput by 2–4× compared to current caching baselines.

📝 Abstract
While Diffusion Language Models (DLMs) offer a flexible, arbitrary-order alternative to the autoregressive paradigm, their non-causal nature precludes standard KV caching, forcing costly hidden state recomputation at every decoding step. Existing DLM caching approaches reduce this cost via selective hidden state updates; however, they are still limited by (i) costly token-wise update identification heuristics and (ii) rigid, uniform budget allocation that fails to account for heterogeneous hidden state dynamics. To address these challenges, we present SPA-Cache, which jointly optimizes update identification and budget allocation for DLM caching. First, we derive a low-dimensional singular proxy that enables the identification of update-critical tokens in a low-dimensional subspace, substantially reducing the overhead of update identification. Second, we introduce an adaptive strategy that allocates fewer updates to stable layers without degrading generation quality. Together, these contributions significantly improve the efficiency of DLMs, yielding up to an $8\times$ throughput improvement over vanilla decoding and a $2$--$4\times$ speedup over existing caching baselines.
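The two ideas in the abstract can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: it assumes the "singular proxy" is a projection of hidden states onto the top-r right singular directions of the cached hidden-state matrix, that per-token drift in that subspace scores which tokens need a refresh, and that a layer's refresh interval stretches when its recent drift is low. All function names, thresholds, and the drift-to-interval rule are hypothetical.

```python
import numpy as np

def singular_proxy(H, r=8):
    """Project hidden states H (tokens x dim) onto the top-r right singular
    directions of H, giving a cheap low-dimensional per-token proxy.
    (Illustrative; the paper's exact proxy construction may differ.)"""
    _, _, Vt = np.linalg.svd(H, full_matrices=False)
    return H @ Vt[:r].T, Vt[:r]

def select_update_tokens(H_cached, H_new, r=8, budget=4):
    """Score each token by its drift inside the cached singular subspace
    and return the `budget` token indices most in need of a refresh."""
    _, Vr = singular_proxy(H_cached, r)
    drift = np.linalg.norm((H_new - H_cached) @ Vr.T, axis=1)
    return np.argsort(drift)[::-1][:budget]

def refresh_interval(recent_drift, tau=0.1, max_interval=8):
    """Adaptive budget allocation sketch: layers with low recent drift are
    refreshed less often, capped at `max_interval` steps apart."""
    return int(min(max_interval, max(1, round(tau / max(recent_drift, 1e-8)))))

# Toy check: 16 tokens, 64-dim states; token 3 changes sharply,
# so it should rank among the update-critical tokens.
rng = np.random.default_rng(0)
H_old = rng.standard_normal((16, 64))
H_new = H_old + 0.01 * rng.standard_normal((16, 64))
H_new[3] += 1.0
idx = select_update_tokens(H_old, H_new)
```

The payoff of the proxy is that per-token comparisons happen in an r-dimensional subspace rather than the full hidden dimension, which is where the claimed reduction in update-identification overhead would come from.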
Problem

Research questions and friction points this paper is trying to address.

Diffusion Language Models
KV caching
hidden state recomputation
update identification
budget allocation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Diffusion Language Models
KV caching
singular proxy
adaptive caching
throughput optimization
Wenhao Sun
College of Computing and Data Science, Nanyang Technological University, Singapore, Singapore
Rong-Cheng Tu
Nanyang Technological University
Image and Video Retrieval, Cross-modal Retrieval, Deep Learning
Yifu Ding
College of Computing and Data Science, Nanyang Technological University, Singapore, Singapore
Zhao Jin
College of Computing and Data Science, Nanyang Technological University, Singapore, Singapore
Jingyi Liao
College of Computing and Data Science, Nanyang Technological University, Singapore, Singapore
Yongcheng Jing
College of Computing and Data Science, Nanyang Technological University, Singapore, Singapore
Dacheng Tao
Nanyang Technological University
artificial intelligence, machine learning, computer vision, image processing, data mining