🤖 AI Summary
Diffusion models suffer from high computational overhead due to iterative sampling, while existing feature caching methods incur accumulating errors from naive reuse: (i) feature shift error arising from inaccurate cached outputs, and (ii) step-amplification error caused by error propagation under fixed-step scheduling. This paper proposes an error-aware feature caching framework tailored for diffusion models that decouples caching error into these two analyzable components. A trajectory-aware dynamic correction mechanism, combined with a closed-form residual linearization model, mitigates both errors jointly. Leveraging offline residual analysis, adaptive integration interval adjustment, and closed-loop modeling, the method significantly improves caching fidelity. Evaluated on image and video generation tasks, it achieves up to 2× speedup without compromising visual quality: on Wan2.1, VBench scores remain nearly lossless. The framework thus bridges efficiency and perceptual fidelity in diffusion-based generation.
📄 Abstract
Diffusion models suffer from substantial computational overhead due to their inherently iterative inference process. While feature caching offers a promising acceleration strategy by reusing intermediate outputs across timesteps, naive reuse often incurs noticeable quality degradation. In this work, we formally analyze the cumulative error introduced by caching and decompose it into two principal components: feature shift error, caused by inaccuracies in cached outputs, and step amplification error, which arises from error propagation under fixed timestep schedules. To address these issues, we propose ERTACache, a principled caching framework that jointly rectifies both error types. Our method employs an offline residual profiling stage to identify reusable steps, dynamically adjusts integration intervals via a trajectory-aware correction coefficient, and analytically approximates cache-induced errors through a closed-form residual linearization model. Together, these components enable accurate and efficient sampling under aggressive cache reuse. Extensive experiments across standard image and video generation benchmarks show that ERTACache achieves up to 2x inference speedup while consistently preserving or even improving visual quality. Notably, on the state-of-the-art Wan2.1 video diffusion model, ERTACache delivers 2x acceleration with minimal VBench degradation, effectively maintaining baseline fidelity while significantly improving efficiency. The code is available at https://github.com/bytedance/ERTACache.
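To make the abstract's three components concrete, below is a minimal, hypothetical sketch of a sampling loop with error-aware feature caching. All names (`model`, `sample_with_cache`, `reuse_steps`, `gamma`) are illustrative stand-ins, not the ERTACache implementation: the reusable steps are hard-coded where the paper selects them via offline residual profiling, the scalar `gamma` stands in for the trajectory-aware correction coefficient, and a first-order extrapolation of the cached output stands in for the closed-form residual linearization.

```python
import numpy as np

def model(x, t):
    # Stand-in for the diffusion denoiser; in practice a large network.
    return np.tanh(x) * (1.0 + 0.1 * t)

def sample_with_cache(x, timesteps, reuse_steps, gamma=0.9):
    """Euler-style sampling loop with error-aware feature caching.

    reuse_steps: indices whose model output is reused rather than
        recomputed (chosen offline via residual profiling in the
        paper; hard-coded here for illustration).
    gamma: correction coefficient that shrinks the integration
        interval on cached steps to counter step-amplification error
        (a scalar stand-in for the paper's trajectory-aware scheme).
    """
    cached_out = None  # most recent full model evaluation
    prev_out = None    # the evaluation before that (for extrapolation)
    for i in range(len(timesteps) - 1):
        dt = timesteps[i + 1] - timesteps[i]
        if i in reuse_steps and cached_out is not None:
            # Linearized residual correction: extrapolate the cached
            # output instead of reusing it verbatim, counteracting
            # feature shift error.
            drift = cached_out - prev_out if prev_out is not None else 0.0
            out = cached_out + drift
            dt = gamma * dt  # rectified interval on cached steps
        else:
            out = model(x, timesteps[i])       # full network evaluation
            prev_out, cached_out = cached_out, out
        x = x + dt * out  # Euler update along the sampling trajectory
    return x

# Usage: 10 sampling intervals, reusing every other middle step.
timesteps = np.linspace(1.0, 0.0, 11)
result = sample_with_cache(np.zeros(4), timesteps, reuse_steps={2, 4, 6, 8})
```

The point of the sketch is the structure, not the numerics: cached steps cost no network evaluation, and the two corrections (output extrapolation and interval rescaling) target the two error components the abstract decomposes.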