Predict to Skip: Linear Multistep Feature Forecasting for Efficient Diffusion Transformers

📅 2026-02-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the high computational cost of iterative denoising in Diffusion Transformers (DiTs) and the degradation in generation quality caused by existing training-free acceleration methods, which often suffer from feature drift. The authors propose PrediT, a novel framework that, for the first time, integrates linear multistep numerical methods into diffusion model acceleration by predicting future latent features for efficient reuse. Its key innovations include a dynamic step-size modulation mechanism based on feature variation rates and a lightweight corrector activated in highly dynamic regions to effectively suppress error accumulation. Evaluated across multiple DiT-based image and video generation models, PrediT achieves up to 5.54× inference speedup without any additional training while preserving near-lossless generation quality.

📝 Abstract
Diffusion Transformers (DiT) have emerged as a widely adopted backbone for high-fidelity image and video generation, yet their iterative denoising process incurs high computational costs. Existing training-free acceleration methods rely on feature caching and reuse under the assumption of temporal stability. However, reusing features for multiple steps may lead to latent drift and visual degradation. We observe that model outputs evolve smoothly along much of the diffusion trajectory, enabling principled predictions rather than naive reuse. Based on this insight, we propose **PrediT**, a training-free acceleration framework that formulates feature prediction as a linear multistep problem. We employ classical linear multistep methods to forecast future model outputs from historical information, combined with a corrector that activates in high-dynamics regions to prevent error accumulation. A dynamic step modulation mechanism adaptively adjusts the prediction horizon by monitoring the feature change rate. Together, these components enable substantial acceleration while preserving generation fidelity. Extensive experiments validate that our method achieves up to 5.54× latency reduction across various DiT-based image and video generation models, while incurring negligible quality degradation.
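The core idea in the abstract — extrapolate the next model output from recent history with a linear multistep formula, and fall back to a real model evaluation when features change quickly — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the two-step Adams–Bashforth-style coefficients, the change-rate threshold `tau`, and the placeholder update rule are all assumptions for the sake of a runnable example.

```python
import numpy as np

def ab2_predict(f_prev, f_curr):
    """Two-step linear multistep extrapolation (Adams-Bashforth-style
    coefficients) of the next feature map from the two most recent outputs."""
    return 1.5 * f_curr - 0.5 * f_prev

def change_rate(f_prev, f_curr, eps=1e-8):
    """Relative feature change rate; a proxy for how dynamic the
    trajectory currently is (used here to gate prediction vs. correction)."""
    return np.linalg.norm(f_curr - f_prev) / (np.linalg.norm(f_prev) + eps)

def run_with_prediction(model, x0, num_steps, tau=0.1):
    """Toy denoising-style loop: skip the expensive model call and
    extrapolate whenever features evolve slowly (rate below tau)."""
    history = []          # recent model outputs (real or predicted)
    x = x0
    model_calls = 0
    for _ in range(num_steps):
        if len(history) >= 2 and change_rate(history[-2], history[-1]) < tau:
            # Smooth region: forecast the output instead of running the model.
            out = ab2_predict(history[-2], history[-1])
        else:
            # Dynamic region (or cold start): real evaluation acts as corrector.
            out = model(x)
            model_calls += 1
        history.append(out)
        x = x - out       # placeholder update standing in for a solver step
    return x, model_calls
```

With a smoothly varying toy model such as `lambda x: 0.01 * x`, only the first two steps trigger real evaluations and the remaining steps are forecast, which is the source of the speedup; in the paper, the dynamic step modulation would additionally stretch or shrink how far ahead each forecast reaches.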
Problem

Research questions and friction points this paper is trying to address.

Diffusion Transformers · computational cost · feature reuse · latent drift · generation fidelity
Innovation

Methods, ideas, or system contributions that make the work stand out.

Diffusion Transformers · linear multistep methods · training-free acceleration · feature forecasting · dynamic step modulation
Hanshuai Cui
School of Artificial Intelligence, Beijing Normal University, Beijing 100875, China; Institute of Artificial Intelligence and Future Networks, Beijing Normal University, Zhuhai 519087, China
Zhiqing Tang
Associate Professor, Beijing Normal University
Edge Computing · Edge AI Systems · Container · Reinforcement Learning
Qianli Ma
Professor, South China University of Technology
Time Series Modelling · Machine Learning · Natural Language Processing
Zhi Yao
School of Artificial Intelligence, Beijing Normal University, Beijing 100875, China; Institute of Artificial Intelligence and Future Networks, Beijing Normal University, Zhuhai 519087, China
Weijia Jia
FIEEE, Chair Professor, Beijing Normal University and UIC
Cyber Intelligent Computing · Networking