🤖 AI Summary
Diffusion language models, due to their non-causal architecture, cannot leverage standard key-value (KV) caching and must recompute all hidden states at each decoding step. Existing caching approaches suffer from high overhead in identifying which tokens to update and from rigid, uniform cache budget allocation. To address these limitations, this work proposes a joint optimization strategy that employs a low-dimensional singular proxy to rapidly identify critical tokens requiring updates and adaptively reduces the update frequency of stable layers, enabling dynamic cache budget allocation. The method substantially reduces computational overhead while preserving generation quality, achieving up to an 8× throughput improvement over vanilla decoding and a 2–4× speedup over existing caching baselines.
📝 Abstract
While Diffusion Language Models (DLMs) offer a flexible, arbitrary-order alternative to the autoregressive paradigm, their non-causal nature precludes standard KV caching, forcing costly hidden state recomputation at every decoding step. Existing DLM caching approaches reduce this cost through selective hidden state updates; however, they remain limited by (i) costly token-wise update identification heuristics and (ii) rigid, uniform budget allocation that fails to account for heterogeneous hidden state dynamics. To address these challenges, we present SPA-Cache, which jointly optimizes update identification and budget allocation in DLM caching. First, we derive a low-dimensional singular proxy that identifies update-critical tokens in a low-dimensional subspace, substantially reducing the overhead of update identification. Second, we introduce an adaptive strategy that allocates fewer updates to stable layers without degrading generation quality. Together, these contributions significantly improve the efficiency of DLMs, yielding up to an $8\times$ throughput improvement over vanilla decoding and a $2$--$4\times$ speedup over existing caching baselines.
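To make the core idea concrete, here is a minimal NumPy sketch of scoring token updates through a low-dimensional projection rather than in the full hidden dimension. Everything below is illustrative and assumed, not the paper's actual algorithm: the basis construction (top right-singular vectors of the previous step's hidden states), the fixed per-step budget, and the synthetic data are all hypothetical choices to show how a cheap subspace score can pick out the few tokens whose states changed most.

```python
import numpy as np

rng = np.random.default_rng(0)
n_tokens, d_model, rank = 16, 256, 8

# Hidden states from the previous decoding step (synthetic).
prev_hidden = rng.standard_normal((n_tokens, d_model))

# Illustrative low-rank "singular proxy": the top-`rank` right-singular
# vectors of the previous step's hidden states (orthonormal columns).
_, _, vt = np.linalg.svd(prev_hidden, full_matrices=False)
proxy_basis = vt[:rank].T  # shape (d_model, rank)

# Simulate the next step: most tokens barely move, two change a lot.
curr_hidden = prev_hidden + 1e-3 * rng.standard_normal((n_tokens, d_model))
curr_hidden[3] += 2.0 * proxy_basis[:, 0]
curr_hidden[11] += 2.0 * proxy_basis[:, 1]

# Cheap per-token change score: project each token's delta into the
# rank-dimensional subspace and take its norm (O(rank) per token
# instead of O(d_model) once the projection is applied).
delta = (curr_hidden - prev_hidden) @ proxy_basis  # (n_tokens, rank)
scores = np.linalg.norm(delta, axis=1)

budget = 2  # fixed here; adaptively allocated per layer in the actual method
update_idx = np.sort(np.argsort(scores)[-budget:])
print(update_idx.tolist())  # -> [3, 11]: only these tokens get recomputed
```

The point of the sketch is the cost asymmetry: scoring happens in an 8-dimensional subspace, while the expensive transformer recomputation is spent only on the `budget` highest-scoring tokens; all other cached hidden states are reused.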