🤖 AI Summary
Diffusion model inference suffers from severe computational redundancy; existing feature caching methods struggle to balance acceleration and generation quality—aggressive timestep reuse introduces distortion, while safer block- or token-level reuse yields limited speedup. This paper proposes X-Slim, a training-free caching framework that jointly models redundancy across timesteps, network blocks, and spatial tokens. It introduces a "push-then-polish" dual-threshold dynamic control mechanism that enables adaptive caching schedules: aggressive feature reuse, followed by lightweight correction, and a precise reset once error grows too large. Without fine-tuning, X-Slim is the first method to co-compress redundancy across all three levels. Evaluated on FLUX.1-dev, HunyuanVideo, and DiT-XL/2, it reduces latency by 4.97×, 3.52×, and 3.13×, respectively, while improving FID by 2.42—significantly advancing the speed–quality Pareto frontier.
📝 Abstract
Diffusion models achieve remarkable generative quality, but computational overhead scales with step count, model depth, and sequence length. Feature caching is effective since adjacent timesteps yield highly similar features. However, an inherent trade-off remains: aggressive timestep reuse offers large speedups but can easily cross the critical line, hurting fidelity, while block- or token-level reuse is safer but yields limited computational savings. We present X-Slim (eXtreme-Slimming Caching), a training-free, cache-based accelerator that, to our knowledge, is the first unified framework to exploit cacheable redundancy across timesteps, structure (blocks), and space (tokens). Rather than simply mixing levels, X-Slim introduces a dual-threshold controller that turns caching into a push-then-polish process: it first pushes reuse at the timestep level up to an early-warning line, then switches to lightweight block- and token-level refresh to polish the remaining redundancy, and triggers full inference once the critical line is crossed to reset accumulated error. At each level, context-aware indicators decide when and where to cache. Across diverse tasks, X-Slim advances the speed-quality frontier. On FLUX.1-dev and HunyuanVideo, it reduces latency by up to 4.97x and 3.52x with minimal perceptual loss. On DiT-XL/2, it reaches 3.13x acceleration and improves FID by 2.42 over prior methods.
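The dual-threshold control described above can be sketched as a small state machine. This is a minimal illustrative sketch, not the paper's implementation: the threshold names (`tau_warn`, `tau_crit`), the scalar accumulated-error proxy, and the per-step drift indicator `delta` are all assumptions introduced here for clarity.

```python
# Hedged sketch of a "push-then-polish" dual-threshold cache controller.
# All names and the scalar error model are illustrative assumptions;
# the paper's context-aware indicators are richer than a single scalar.

def cache_decision(err, tau_warn, tau_crit):
    """Map the accumulated error proxy to one of three caching modes."""
    if err >= tau_crit:           # critical line crossed: reset with a full pass
        return "full_inference"
    if err >= tau_warn:           # early-warning line: polish remaining redundancy
        return "partial_refresh"  # lightweight block-/token-level refresh
    return "timestep_reuse"       # push: aggressive timestep-level reuse


class PushThenPolishController:
    def __init__(self, tau_warn=0.1, tau_crit=0.3):
        self.tau_warn, self.tau_crit = tau_warn, tau_crit
        self.err = 0.0  # accumulated approximation error (proxy)

    def step(self, delta):
        """delta: per-step feature-drift indicator (e.g. relative L1 change)."""
        mode = cache_decision(self.err, self.tau_warn, self.tau_crit)
        if mode == "full_inference":
            self.err = 0.0       # full inference resets accumulated error
        else:
            self.err += delta    # any reuse lets error accumulate
        return mode
```

Under this toy model, successive reuse steps accumulate error until the early-warning threshold flips the controller into partial refresh, and crossing the critical threshold forces one full inference pass that zeroes the error.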