GalaxyDiT: Efficient Video Generation with Guidance Alignment and Adaptive Proxy in Diffusion Transformers

📅 2025-12-03
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Diffusion-based video generation suffers from prohibitive computational overhead due to high iteration counts and classifier-free guidance (CFG), severely hindering practical deployment. To address this, we propose a training-free acceleration framework: (1) a guidance alignment strategy that harmonizes generative directions across varying step sizes, and (2) an adaptive proxy selection mechanism grounded in rank correlation analysis, enabling computation reuse across model families in diffusion Transformers and sidestepping CFG's intrinsic compute bottleneck. Evaluated on the Wan2.1-1.3B and 14B models, our method achieves 1.87× and 2.37× inference speedup, respectively, with only marginal VBench-2.0 degradation (−0.97% and −0.72%). Moreover, PSNR exceeds state-of-the-art methods by 5–10 dB, demonstrating a markedly better efficiency–quality trade-off.
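The adaptive proxy selection described above can be illustrated with a minimal sketch. This is not the paper's actual metric set: the proxy names and per-step score lists below are hypothetical stand-ins. The idea is to rank candidate reuse proxies by the Spearman rank correlation between their per-step scores and the true per-step output change, then pick the proxy with the strongest correlation.

```python
def ranks(xs):
    """Average ranks (1-based), with ties sharing their mean rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1  # extend over a run of tied values
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r


def spearman(xs, ys):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5


def select_proxy(proxies, truth):
    """Pick the proxy whose scores best rank-correlate with the true change.

    proxies: dict mapping a (hypothetical) proxy name to per-step scores.
    truth:   per-step true output change measured on a calibration run.
    """
    return max(proxies, key=lambda name: abs(spearman(proxies[name], truth)))
```

Selecting on rank correlation rather than absolute error means the proxy only needs to order the steps correctly (which steps are safe to reuse), not predict the magnitude of the change.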

📝 Abstract
Diffusion models have revolutionized video generation, becoming essential tools in creative content generation and physical simulation. Transformer-based architectures (DiTs) and classifier-free guidance (CFG) are two cornerstones of this success, enabling strong prompt adherence and realistic video quality. Despite their versatility and superior performance, these models require intensive computation. Each video generation requires dozens of iterative steps, and CFG doubles the required compute. This inefficiency hinders broader adoption in downstream applications. We introduce GalaxyDiT, a training-free method to accelerate video generation with guidance alignment and systematic proxy selection for reuse metrics. Through rank-order correlation analysis, our technique identifies the optimal proxy for each video model, across model families and parameter scales, thereby ensuring optimal computational reuse. We achieve $1.87\times$ and $2.37\times$ speedup on Wan2.1-1.3B and Wan2.1-14B with only 0.97% and 0.72% drops on the VBench-2.0 benchmark. At high speedup rates, our approach maintains superior fidelity to the base model, exceeding prior state-of-the-art approaches by 5 to 10 dB in peak signal-to-noise ratio (PSNR).
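The abstract's point that "CFG doubles the required compute" follows directly from how a classifier-free guidance step is computed. The sketch below is a generic CFG step, not GalaxyDiT's method; `model`, `text_emb`, and `null_emb` are hypothetical stand-ins for the DiT denoiser and its conditioning inputs.

```python
def cfg_step(model, x_t, t, text_emb, null_emb, guidance_scale=5.0):
    """One classifier-free guidance denoising step.

    CFG requires TWO forward passes of the denoiser per step: one
    conditioned on the prompt and one on the null (unconditional)
    embedding. The unconditional pass is the extra cost that
    reuse-based acceleration schemes try to avoid recomputing.
    """
    eps_cond = model(x_t, t, text_emb)    # conditional prediction
    eps_uncond = model(x_t, t, null_emb)  # unconditional prediction (the doubled cost)
    # Extrapolate from the unconditional toward the conditional direction.
    return [u + guidance_scale * (c - u) for c, u in zip(eps_cond, eps_uncond)]
```

With dozens of such steps per video and each step running the full DiT twice, eliminating redundant passes compounds into the reported end-to-end speedups.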
Problem

Research questions and friction points this paper is trying to address.

How to cut the computational cost of diffusion-based video generation, where dozens of denoising steps and CFG's doubled forward passes dominate inference time
How to choose a reuse proxy that transfers across model families and parameter scales
How to preserve fidelity to the base model at high speedup, without any retraining
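The fidelity claim above is quantified in the abstract via PSNR against the base model's output. For reference, a minimal PSNR computation over flattened pixel values (frames assumed normalized to [0, 1]; a 5–10 dB gain means roughly 3–10× lower mean squared error):

```python
import math

def psnr(ref, test, max_val=1.0):
    """Peak signal-to-noise ratio between a reference frame and an
    accelerated model's frame, both flattened to value lists."""
    mse = sum((a - b) ** 2 for a, b in zip(ref, test)) / len(ref)
    if mse == 0:
        return float("inf")  # identical outputs
    return 10.0 * math.log10(max_val ** 2 / mse)
```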
Innovation

Methods, ideas, or system contributions that make the work stand out.

Training-free acceleration with guidance alignment
Systematic proxy selection for computational reuse
Optimal proxy identification across model families