🤖 AI Summary
This work addresses the high computational cost of iterative denoising in Diffusion Transformers (DiTs) and the degradation in generation quality caused by existing training-free acceleration methods, which often suffer from feature drift. The authors propose PrediT, a novel framework that, for the first time, integrates linear multistep numerical methods into diffusion model acceleration by predicting future latent features for efficient reuse. Its key innovations include a dynamic step-size modulation mechanism based on feature variation rates and a lightweight corrector activated in highly dynamic regions to effectively suppress error accumulation. Evaluated across multiple DiT-based image and video generation models, PrediT achieves up to 5.54× inference speedup without any additional training while preserving near-lossless generation quality.
📝 Abstract
Diffusion Transformers (DiTs) have emerged as a widely adopted backbone for high-fidelity image and video generation, yet their iterative denoising process incurs high computational costs. Existing training-free acceleration methods rely on feature caching and reuse under the assumption of temporal stability. However, reusing features for multiple steps may lead to latent drift and visual degradation. We observe that model outputs evolve smoothly along much of the diffusion trajectory, enabling principled prediction rather than naive reuse. Based on this insight, we propose **PrediT**, a training-free acceleration framework that formulates feature prediction as a linear multistep problem. We employ classical linear multistep methods to forecast future model outputs from historical information, combined with a corrector that activates in high-dynamics regions to prevent error accumulation. A dynamic step modulation mechanism adaptively adjusts the prediction horizon by monitoring the feature change rate. Together, these components enable substantial acceleration while preserving generation fidelity. Extensive experiments validate that our method achieves up to 5.54× latency reduction across various DiT-based image and video generation models, while incurring negligible quality degradation.
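The predictor-corrector idea in the abstract — extrapolating the next model output from recent history, and falling back to a real model evaluation when features change quickly — can be illustrated with a minimal sketch. Everything below (function names, the specific two-point extrapolation formula, the norm-based threshold) is an illustrative assumption, not the paper's actual algorithm.

```python
import numpy as np

def predict_next(prev_out, curr_out, h_prev, h_next):
    """Linearly extrapolate the next model output from the two most recent
    ones (the simplest linear multistep predictor). Illustrative only."""
    slope = (curr_out - prev_out) / h_prev   # finite-difference rate of change
    return curr_out + h_next * slope

def change_rate(prev_out, curr_out):
    """Relative feature change rate; a large value signals a highly dynamic
    region where a corrector (full model call) should run instead."""
    return np.linalg.norm(curr_out - prev_out) / (np.linalg.norm(prev_out) + 1e-8)

def step(prev_out, curr_out, h_prev, h_next, model_fn, t_next, threshold=0.5):
    """Hypothetical policy: use the cheap prediction when the trajectory is
    smooth; otherwise fall back to evaluating the real model."""
    if change_rate(prev_out, curr_out) > threshold:
        return model_fn(t_next)              # corrector: exact evaluation
    return predict_next(prev_out, curr_out, h_prev, h_next)
```

On a trajectory that is exactly linear in time the predictor is exact; on smooth but curved trajectories its error shrinks with the step size, which is why a step-modulation mechanism would reduce the prediction horizon as the measured change rate grows.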