No Cache Left Idle: Accelerating diffusion model via Extreme-slimming Caching

📅 2025-12-14
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Diffusion model inference suffers from severe computational redundancy; existing feature caching methods struggle to balance acceleration and generation quality—aggressive timestep reuse introduces distortion, while safer block- or token-level reuse yields limited speedup. This paper proposes X-Slim, a training-agnostic caching framework that jointly models redundancy across timesteps, network blocks, and spatial tokens. It introduces the first “push-then-polish” dual-threshold dynamic control mechanism, enabling adaptive caching scheduling: aggressive feature reuse followed by lightweight correction and precise reset. Without fine-tuning, X-Slim achieves the first multi-level redundancy co-compression. Evaluated on FLUX.1-dev, HunyuanVideo, and DiT-XL/2, it reduces latency by 4.97×, 3.52×, and 3.13×, respectively, while improving FID by 2.42—significantly advancing the speed–quality Pareto frontier.
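
To make the push-then-polish schedule concrete, the sketch below walks one denoising loop through the three modes the summary describes: reuse while drift stays under an early-warning threshold, lightweight partial refresh between the two thresholds, and a full forward pass once the critical threshold is crossed. This is a minimal illustration, not X-Slim's implementation: the drift proxy, the threshold values, the refresh="partial" keyword, and the omitted sampler update are all assumptions made for readability.

import torch

def estimate_drift(cached: torch.Tensor, fresh: torch.Tensor) -> float:
    # Relative L1 change between cached and freshly computed features
    # (one plausible indicator; the paper's exact indicator may differ).
    return ((fresh - cached).abs().mean() / (cached.abs().mean() + 1e-8)).item()

def push_then_polish(model, x, timesteps, tau_warn=0.05, tau_crit=0.15):
    # Per-timestep mode selection: push (reuse) -> polish (partial refresh) -> reset (full pass).
    cached, drift = None, float("inf")            # force a full pass on the first step
    for t in timesteps:
        if drift >= tau_crit:
            out = model(x, t)                     # critical line crossed: full inference resets error
            drift = 0.0
        elif drift >= tau_warn:
            out = model(x, t, refresh="partial", cache=cached)  # hypothetical partial-refresh call
            drift = estimate_drift(cached, out)   # polish, then re-measure drift
        else:
            out = cached                          # push: reuse the cached features outright
            drift += tau_warn / 2                 # crude proxy for error growth while reusing
        cached = out
        x = out                                   # sampler/solver update omitted for brevity
    return x

Keeping a cheap-to-trigger reset path is what lets the controller push reuse aggressively without letting error accumulate past the critical line.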

📝 Abstract
Diffusion models achieve remarkable generative quality, but computational overhead scales with step count, model depth, and sequence length. Feature caching is effective since adjacent timesteps yield highly similar features. However, an inherent trade-off remains: aggressive timestep reuse offers large speedups but can easily cross the critical line, hurting fidelity, while block- or token-level reuse is safer but yields limited computational savings. We present X-Slim (eXtreme-Slimming Caching), a training-free, cache-based accelerator that, to our knowledge, is the first unified framework to exploit cacheable redundancy across timesteps, structure (blocks), and space (tokens). Rather than simply mixing levels, X-Slim introduces a dual-threshold controller that turns caching into a push-then-polish process: it first pushes reuse at the timestep level up to an early-warning line, then switches to lightweight block- and token-level refresh to polish the remaining redundancy, and triggers full inference once the critical line is crossed to reset accumulated error. At each level, context-aware indicators decide when and where to cache. Across diverse tasks, X-Slim advances the speed-quality frontier. On FLUX.1-dev and HunyuanVideo, it reduces latency by up to 4.97x and 3.52x with minimal perceptual loss. On DiT-XL/2, it reaches 3.13x acceleration and improves FID by 2.42 over prior methods.
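
The block- and token-level polish mentioned in the abstract can be pictured as a selective refresh: score how much each token's input has drifted since the cached step, recompute only the most-changed tokens, and splice cached outputs back in for the rest. The change score, the refresh fraction, and the helper names below are illustrative assumptions rather than X-Slim's exact context-aware indicators.

import torch

def token_level_refresh(block, x, cached_in, cached_out, frac=0.1):
    # Recompute only the tokens whose inputs changed most since the cached step;
    # reuse cached outputs for everything else. Tensors are [num_tokens, dim].
    change = (x - cached_in).abs().mean(dim=-1)   # per-token change indicator
    k = max(1, int(frac * x.shape[0]))            # refresh only the top frac of tokens
    idx = change.topk(k).indices
    out = cached_out.clone()
    out[idx] = block(x[idx])                      # fresh compute for the selected tokens only
    return out

# Toy usage: refresh 10% of 1024 tokens through a per-token linear block.
block = torch.nn.Linear(64, 64)
x, cached_in, cached_out = torch.randn(3, 1024, 64).unbind(0)
with torch.no_grad():
    out = token_level_refresh(block, x, cached_in, cached_out, frac=0.1)

A per-token linear layer is used here only because attention needs all tokens as context; in practice such a refresh would sit inside the model's blocks rather than around a standalone layer.
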
Problem

Research questions and friction points this paper is trying to address.

Diffusion inference cost scales with step count, model depth, and sequence length
Aggressive timestep-level feature reuse gives large speedups but can cross the critical line and hurt fidelity
Safer block- or token-level reuse preserves quality but yields only limited computational savings
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dual-threshold controller enables push-then-polish caching process
Unified framework exploits redundancy across timesteps, blocks, and tokens
Context-aware indicators determine caching timing and locations
Tingyan Wen
Tsinghua University
Haoyu Li
Tsinghua University
Yihuang Chen
Central Media Technology Institute, Huawei
Xing Zhou
Computer Science, University of Illinois at Urbana-Champaign
Compiler Optimizations
Lifei Zhu
Central Media Technology Institute, Huawei
Xueqian Wang
Tsinghua University
Information Fusion, Target Detection, Radar Imaging, Image Processing