AI Summary
Existing video generation models lack an attribution mechanism for analyzing how the motion dynamics in training data influence generated videos, which makes it difficult to disentangle motion from temporally static appearance. To address this limitation, this work proposes Motive, a framework that introduces gradient-based data attribution tailored to motion. By employing a motion-weighted loss mask, Motive decouples temporal dynamics from static appearance, enabling the identification of training clips that critically contribute to generated motion. These insights are then used to curate the fine-tuning dataset. Experimental results show that the proposed method improves both motion smoothness and dynamic degree on VBench, achieving a 74.1% human preference win rate over the original pre-trained model.
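To make the motion-weighted loss mask concrete, here is a minimal sketch in PyTorch. The frame-difference weighting, function names, and tensor layout are illustrative assumptions, not the paper's exact formulation.

```python
import torch

def motion_weight_mask(video: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Hypothetical motion mask: weight each pixel by the magnitude of its
    frame-to-frame change, so static appearance contributes little to the loss.
    `video` is a (T, C, H, W) tensor of frames."""
    diff = (video[1:] - video[:-1]).abs().mean(dim=1, keepdim=True)  # (T-1, 1, H, W)
    diff = torch.cat([diff[:1], diff], dim=0)                        # reuse for the first frame
    return diff / (diff.amax(dim=(-2, -1), keepdim=True) + eps)      # normalize per frame

def motion_weighted_loss(pred: torch.Tensor, target: torch.Tensor,
                         mask: torch.Tensor) -> torch.Tensor:
    """Reconstruction/denoising loss re-weighted toward moving regions."""
    return (mask * (pred - target) ** 2).mean()
```

Under this kind of weighting, gradients from largely static regions are suppressed, so downstream influence scores reflect temporal dynamics rather than appearance.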
Abstract
Despite the rapid progress of video generation models, the role of data in influencing motion is poorly understood. We present Motive (MOTIon attribution for Video gEneration), a motion-centric, gradient-based data attribution framework that scales to modern, large, high-quality video datasets and models. We use this to study which fine-tuning clips improve or degrade temporal dynamics. Motive isolates temporal dynamics from static appearance via motion-weighted loss masks, yielding efficient and scalable motion-specific influence computation. On text-to-video models, Motive identifies clips that strongly affect motion and guides data curation that improves temporal consistency and physical plausibility. With Motive-selected high-influence data, our method improves both motion smoothness and dynamic degree on VBench, achieving a 74.1% human preference win rate compared with the pretrained base model. To our knowledge, this is the first framework to attribute motion rather than visual appearance in video generative models and to use it to curate fine-tuning data.
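As a rough sketch of how motion-specific, gradient-based influence could be computed, the snippet below scores each training clip by the dot product between its training-loss gradient and the gradient of a motion-weighted loss on a query generation (a TracIn-style proxy). The function names, the `loss_fn(model, batch)` interface, and the single-checkpoint simplification are assumptions for illustration, not Motive's actual estimator.

```python
import torch

def motion_influence(model, train_clips, query_batch, loss_fn):
    """Simplified gradient dot-product influence: compare per-clip training
    gradients against the gradient of a motion-weighted loss on the query.
    `loss_fn(model, batch)` is assumed to return a scalar motion-weighted loss."""
    params = [p for p in model.parameters() if p.requires_grad]

    # Gradient of the motion-weighted loss on the query (the behavior to attribute).
    q_grads = torch.autograd.grad(loss_fn(model, query_batch), params)

    scores = []
    for clip in train_clips:
        # Gradient of the same motion-weighted loss on one candidate training clip.
        t_grads = torch.autograd.grad(loss_fn(model, clip), params)
        # A higher dot product means this clip pushes parameters in a direction
        # that reduces the query's motion-weighted loss, i.e., more motion influence.
        score = sum((tg * qg).sum() for tg, qg in zip(t_grads, q_grads))
        scores.append(score.item())
    return scores
```

In a curation loop of this kind, clips ranked highest by such a score would be retained for fine-tuning, while low- or negative-influence clips would be down-weighted or dropped.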