🤖 AI Summary
Existing feature caching methods struggle to preserve DiT generation quality under high acceleration ratios, primarily due to error accumulation from large-step predictions. This work proposes FoCa, the first framework to model hidden feature evolution as an ordinary differential equation (ODE) trajectory and to reformulate feature caching as an ODE-solving problem. FoCa introduces a training-free prediction-correction mechanism that stably reuses and refines historical features even under aggressive step-skipping, effectively suppressing error propagation. Evaluated on image and video generation tasks, FoCa achieves near-lossless acceleration: 5.50× for FLUX, 6.45× for HunyuanVideo, and 4.53× for DiT, reaching a peak speedup of 6.45× without compromising generation fidelity. The method significantly improves the inference efficiency of Diffusion Transformers while maintaining perceptual and quantitative quality.
📄 Abstract
Diffusion Transformers (DiTs) have demonstrated exceptional performance in high-fidelity image and video generation. To reduce their substantial computational costs, feature caching techniques have been proposed to accelerate inference by reusing hidden representations from previous timesteps. However, current methods often struggle to maintain generation quality at high acceleration ratios, where prediction errors increase sharply due to the inherent instability of long-step forecasting. In this work, we adopt an ordinary differential equation (ODE) perspective on the hidden-feature sequence, modeling layer representations along the trajectory as a feature-ODE. We attribute the degradation of existing caching strategies to their inability to robustly integrate historical features under large skipping intervals. To address this, we propose FoCa (Forecast-then-Calibrate), which treats feature caching as a feature-ODE solving problem. Extensive experiments on image synthesis, video generation, and super-resolution tasks demonstrate the effectiveness of FoCa, especially under aggressive acceleration. Without additional training, FoCa achieves near-lossless speedups of 5.50 times on FLUX, 6.45 times on HunyuanVideo, 3.17 times on Inf-DiT, and maintains high quality with a 4.53 times speedup on DiT.
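The forecast-then-calibrate idea described above can be sketched in a few lines. This is a minimal illustration only, not the paper's actual solver: the class, method names, the linear-extrapolation forecast, and the blending weight are all assumptions chosen for clarity. Skipped timesteps reuse cached features via extrapolation along the feature trajectory; fully computed timesteps calibrate the trajectory to suppress accumulated error.

```python
import numpy as np

class FoCaCacheSketch:
    """Illustrative forecast-then-calibrate feature cache.

    Hypothetical sketch: treats the per-timestep hidden features as samples
    of a feature-ODE trajectory, forecasts skipped steps by first-order
    extrapolation, and calibrates whenever a step is fully computed.
    """

    def __init__(self, blend=0.5):
        self.history = []   # features from recent timesteps (newest last)
        self.blend = blend  # weight given to the freshly computed feature

    def forecast(self):
        # Euler-like step: extrapolate from the last two cached features.
        f_prev, f_curr = self.history[-2], self.history[-1]
        return f_curr + (f_curr - f_prev)

    def calibrate(self, predicted, computed):
        # Blend prediction with fresh compute to damp extrapolation error.
        return self.blend * computed + (1.0 - self.blend) * predicted

    def step(self, compute_fn, t, skip):
        if skip and len(self.history) >= 2:
            feat = self.forecast()            # cheap: no transformer compute
        else:
            computed = compute_fn(t)          # full transformer-block compute
            if len(self.history) >= 2:
                feat = self.calibrate(self.forecast(), computed)
            else:
                feat = computed               # warm-up: nothing to forecast yet
        self.history.append(feat)
        return feat

# Toy usage: features that evolve linearly in t are forecast exactly.
cache = FoCaCacheSketch()
f = lambda t: np.array([float(t)])
cache.step(f, 0, skip=False)
cache.step(f, 1, skip=False)
skipped = cache.step(f, 2, skip=True)   # forecast only, no compute
corrected = cache.step(f, 3, skip=False)  # compute + calibrate
```

In a real DiT pipeline the `skip` schedule would be chosen to hit a target acceleration ratio, with calibration steps interleaved to keep extrapolation error bounded.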