Improving Video Diffusion Transformer Training by Multi-Feature Fusion and Alignment from Self-Supervised Vision Encoders

📅 2025-09-11
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the limited representational capacity of intermediate features in video diffusion Transformers—which constrains generation quality—this paper proposes Align4Gen. The method integrates self-supervised vision encoders (e.g., DINOv2, MAE) to extract hierarchical spatiotemporal features, and introduces a cross-scale feature alignment loss alongside a progressive multi-feature fusion mechanism to inject externally derived supervision signals with strong discriminability and high temporal consistency into the diffusion process. To guide the selection of external feature sources, the authors design an evaluation metric that balances discriminability against temporal consistency. Experiments demonstrate significant improvements in both unconditional and class-conditional video generation: on UCF101 and Kinetics, the approach reduces Fréchet Video Distance (FVD) by 12.3% and increases Inception Score (IS) by 18.7%, outperforming state-of-the-art baselines.
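The paper itself does not include code on this page; as a rough sketch of the kind of alignment loss the summary describes—pulling projected diffusion features toward frozen vision-encoder features via cosine similarity—the following is a minimal illustration. The function name, shapes, and the specific cosine formulation are assumptions, not the paper's actual implementation.

```python
import numpy as np

def alignment_loss(diffusion_feats, encoder_feats):
    """Hypothetical sketch of a feature-alignment loss.

    diffusion_feats: [N, D] projected intermediate features of the generator.
    encoder_feats:   [N, D] frozen self-supervised encoder features (e.g., DINOv2).
    Returns 1 - mean cosine similarity, so identical features give a loss of 0.
    """
    a = diffusion_feats / np.linalg.norm(diffusion_feats, axis=-1, keepdims=True)
    b = encoder_feats / np.linalg.norm(encoder_feats, axis=-1, keepdims=True)
    return 1.0 - float(np.mean(np.sum(a * b, axis=-1)))
```

In training, a term like this would be added to the diffusion objective with a weighting coefficient; the exact projection head, fusion of multiple encoders, and weighting are described in the paper, not here.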

📝 Abstract
Video diffusion models have advanced rapidly in recent years as a result of a series of architectural innovations (e.g., diffusion transformers) and the use of novel training objectives (e.g., flow matching). In contrast, less attention has been paid to improving the feature representation power of such models. In this work, we show that training video diffusion models can benefit from aligning the intermediate features of the video generator with feature representations of pre-trained vision encoders. We propose a new metric and conduct an in-depth analysis of various vision encoders to evaluate their discriminability and temporal consistency, thereby assessing their suitability for video feature alignment. Based on the analysis, we present Align4Gen, which provides a novel multi-feature fusion and alignment method integrated into video diffusion model training. We evaluate Align4Gen on both unconditional and class-conditional video generation tasks and show that it results in improved video generation as quantified by various metrics. Full video results are available on our project page: https://align4gen.github.io/align4gen/
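The abstract mentions evaluating encoders on temporal consistency; one plausible proxy for that notion—mean cosine similarity of an encoder's per-frame features across consecutive frames—can be sketched as below. This is an illustrative assumption, not the paper's actual metric.

```python
import numpy as np

def temporal_consistency(frame_feats):
    """Illustrative temporal-consistency proxy (not the paper's metric).

    frame_feats: [T, D] per-frame features from a frozen vision encoder.
    Returns the mean cosine similarity between features of adjacent frames;
    a constant feature track scores 1.0, decorrelated tracks score near 0.
    """
    f = frame_feats / np.linalg.norm(frame_feats, axis=-1, keepdims=True)
    return float(np.mean(np.sum(f[:-1] * f[1:], axis=-1)))
```

An encoder scoring high on a proxy like this (and high on a separate discriminability probe, e.g., linear-probe accuracy) would be a candidate feature source under the paper's selection criterion.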
Problem

Research questions and friction points this paper is trying to address.

Enhancing video diffusion model training via feature alignment
Improving feature representation in video generation models
Integrating multi-feature fusion from pre-trained vision encoders
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-feature fusion from vision encoders
Feature alignment during diffusion training
Self-supervised encoder integration for generation
🔎 Similar Papers
2024-02-20 · International Conference on Machine Learning · Citations: 30