ZEUS: Accelerating Diffusion Models with Only Second-Order Predictor

📅 2026-04-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the high inference latency of diffusion models by proposing ZEUS, a training-free acceleration method that achieves stable and efficient sampling without requiring cached states, architectural modifications, or additional training. ZEUS leverages a second-order predictor combined with an interleaved step-skipping strategy to substantially reduce the number of denoiser evaluations while avoiding the accumulated errors typically caused by consecutive extrapolation. The approach is compatible with diverse backbone architectures and solvers, and demonstrates robust performance under structural sparsity constraints. In image and video generation tasks, ZEUS achieves up to 3.2× end-to-end speedup over baseline methods while preserving high perceptual quality, significantly outperforming existing training-free acceleration techniques.
📝 Abstract
Denoising generative models deliver high-fidelity generation but remain bottlenecked by inference latency due to the many iterative denoiser calls required during sampling. Training-free acceleration methods reduce latency by either sparsifying the model architecture or shortening the sampling trajectory. Current methods, however, are more complex than necessary: higher-order predictors amplify error under aggressive speedups, and architectural modifications hinder deployment. Beyond 2x acceleration, step skipping creates structural scarcity (at most one fresh evaluation per local window), leaving the computed output and its backward difference as the only causally grounded information. Based on this, we propose ZEUS, an acceleration method that replaces skipped denoiser evaluations with a second-order predictor and stabilizes aggressive consecutive skipping with an interleaved scheme that avoids back-to-back extrapolations. ZEUS adds essentially zero overhead, requiring no feature caches and no architectural modifications, and it is compatible with different backbones, prediction objectives, and solver choices. Across image and video generation, ZEUS consistently improves the speed-fidelity trade-off over recent training-free baselines, achieving up to 3.2x end-to-end speedup while maintaining perceptual quality. Our code is available at: https://github.com/Ting-Justin-Jiang/ZEUS.
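The abstract's core mechanism can be sketched in a few lines: at skipped steps, the next denoiser output is extrapolated from the last few computed outputs via second-order backward differences, and an interleaved schedule guarantees no two extrapolations occur back to back. The sketch below is illustrative only; the function names (`extrapolate2`, `interleaved_schedule`, `sample`) and the warmup/alternation details are assumptions, not the actual ZEUS implementation.

```python
def extrapolate2(history):
    """Second-order backward-difference (quadratic) extrapolation from the
    last three outputs: f[n+1] ~ 3*f[n] - 3*f[n-1] + f[n-2].
    Exact whenever the output trajectory is locally quadratic."""
    f2, f1, f0 = history[-3], history[-2], history[-1]
    return 3 * f0 - 3 * f1 + f2

def interleaved_schedule(num_steps, warmup=3):
    """After a warmup that seeds the history with fresh evaluations,
    alternate fresh calls and skips so extrapolations never chain.
    (The warmup length and strict alternation are assumed details.)"""
    return [i < warmup or (i - warmup) % 2 == 0 for i in range(num_steps)]

def sample(denoiser, num_steps):
    """Toy sampling loop: call the denoiser only on scheduled steps and
    predict the rest for free with the second-order extrapolator."""
    history, outputs = [], []
    for step, fresh in enumerate(interleaved_schedule(num_steps)):
        if fresh or len(history) < 3:
            out = denoiser(step)          # real (expensive) network call
        else:
            out = extrapolate2(history)   # zero-cost predicted step
        history = (history + [out])[-3:]  # keep only the last three outputs
        outputs.append(out)
    return outputs
```

Because the predictor is exact on quadratic trajectories, a toy "denoiser" returning `t*t` is reproduced perfectly even with roughly half the calls skipped, which illustrates why a second-order predictor suffices when each skip is sandwiched between fresh evaluations.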
Problem

Research questions and friction points this paper is trying to address.

diffusion models
inference latency
training-free acceleration
sampling trajectory
denoiser evaluations
Innovation

Methods, ideas, or system contributions that make the work stand out.

diffusion acceleration
second-order predictor
training-free
step skipping
denoising generative models