🤖 AI Summary
This work addresses the "motion–visual quality trade-off" in video generation, where high-quality, dynamically rich "golden data" are scarce. To overcome this limitation, the authors propose Timestep-aware Quality Decoupling (TQD), a mechanism that analyzes the learning dynamics of diffusion models across timesteps to decouple and selectively sample training data—separating visually high-fidelity but motion-weak samples from motion-rich but low-quality ones. Notably, TQD does not require perfectly paired data; instead, it leverages unbalanced, disjoint datasets to outperform conventional training on curated high-quality data. Moreover, when applied to standard high-quality benchmarks, TQD further enhances generation performance, demonstrating its generality and effectiveness in improving both motion complexity and visual fidelity.
📝 Abstract
Recent advances in video generation models have achieved impressive results. However, these models rely heavily on high-quality data that combines both high visual quality and high motion quality. In this paper, we identify a key challenge in video data curation: the Motion-Vision Quality Dilemma. We discover that visual quality and motion intensity inherently exhibit a negative correlation, making it hard to obtain golden data that excels in both aspects. To address this challenge, we first examine the hierarchical learning dynamics of video diffusion models and conduct gradient-based analysis on quality-degraded samples. We find that quality-imbalanced data can produce gradients similar to those of golden data at appropriate timesteps. Based on this, we introduce the novel concept of timestep selection in the training process. We propose Timestep-aware Quality Decoupling (TQD), which modifies the data sampling distribution to better match the model's learning process: the sampling distribution is skewed toward higher timesteps for motion-rich data, while high-visual-quality data is more likely to be sampled at lower timesteps. Through extensive experiments, we demonstrate that TQD enables training exclusively on separate, quality-imbalanced data to surpass conventional training on better-curated data, challenging the necessity of perfect data in video generation. Moreover, our method also boosts model performance when trained on high-quality data, showcasing its effectiveness across different data scenarios.
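The timestep-skewing idea can be illustrated with a minimal sketch. Note that the abstract describes TQD only at a high level; the Beta-distribution shapes, the `data_type` labels, and the timestep range below are all illustrative assumptions, not the paper's actual sampling scheme.

```python
import numpy as np

# Diffusion timesteps run from 0 (least noise) to T-1 (most noise);
# T = 1000 is a common choice, assumed here for illustration.
T = 1000

def sample_timestep(data_type: str, rng: np.random.Generator) -> int:
    """Draw a training timestep skewed by the data's quality type.

    Hypothetical skews in the spirit of TQD:
      motion_rich  -> mass at higher (noisier) timesteps, where coarse
                      structure and motion are learned.
      high_visual  -> mass at lower timesteps, where fine visual
                      detail is refined.
    """
    if data_type == "motion_rich":
        u = rng.beta(4.0, 1.5)   # density concentrated near 1
    elif data_type == "high_visual":
        u = rng.beta(1.5, 4.0)   # density concentrated near 0
    else:
        u = rng.uniform()        # golden data: uniform baseline
    return int(u * (T - 1))

rng = np.random.default_rng(0)
motion_ts = [sample_timestep("motion_rich", rng) for _ in range(10_000)]
visual_ts = [sample_timestep("high_visual", rng) for _ in range(10_000)]
# Motion-rich samples should land at noisier steps on average.
print(np.mean(motion_ts) > np.mean(visual_ts))
```

In an actual training loop, the drawn timestep would replace the usual uniform timestep sampling when computing the diffusion loss for that sample.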