🤖 AI Summary
To address motion discontinuity and poor text-video alignment in video generation, this work constructs a large-scale, multi-dimensional human preference dataset and proposes VideoReward, the first video-specific, multi-dimensional reward model. It also introduces human feedback into flow-matching-based video generation for the first time, designing three reinforcement learning (RL) alignment algorithms: Flow-DPO, a training-time method based on direct preference optimization tailored to flow models; Flow-RWR, a training-time method that employs reward-weighted regression; and Flow-NRG, an inference-time mechanism that enables multi-objective, interpretable quality control via noise-level reward guidance. Flow-DPO is the first DPO variant adapted to flow models, and Flow-NRG introduces the novel concept of noise-level reward guidance. Experiments show that VideoReward significantly outperforms existing video reward models; that Flow-DPO surpasses supervised fine-tuning and Flow-RWR in motion coherence and semantic consistency; and that Flow-NRG enables flexible, user-controllable, and interpretable quality modulation.
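The summary does not spell out the Flow-DPO objective. Following the Diffusion-DPO recipe that a flow-model DPO variant would plausibly adapt, a minimal sketch compares the flow-matching regression errors of the policy and a frozen reference model on the preferred ("winner") and rejected ("loser") videos. The function name, arguments, and the `beta` default below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def flow_dpo_loss(err_w, err_l, err_ref_w, err_ref_l, beta=1.0):
    """Hypothetical Flow-DPO loss from per-sample flow-matching errors.

    err_w / err_l:         policy's velocity-prediction MSE on the
                           preferred / rejected video.
    err_ref_w / err_ref_l: same errors under the frozen reference model.
    The loss is small when the policy fits the preferred video better
    (relative to the reference) than it fits the rejected one.
    """
    margin = (err_w - err_ref_w) - (err_l - err_ref_l)
    x = -beta * margin
    # Numerically stable -log(sigmoid(x)) = log(1 + exp(-x)).
    return np.logaddexp(0.0, -x)
```

Decreasing `err_w` relative to the reference while `err_l` stays fixed drives the loss toward zero, which is the intended preference-alignment pressure.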
📝 Abstract
Video generation has achieved significant advances through rectified-flow techniques, but issues such as unsmooth motion and misalignment between videos and prompts persist. In this work, we develop a systematic pipeline that harnesses human feedback to mitigate these problems and refine the video generation model. Specifically, we begin by constructing a large-scale human preference dataset focused on modern video generation models, incorporating pairwise annotations across multiple dimensions. We then introduce VideoReward, a multi-dimensional video reward model, and examine how annotations and various design choices affect its rewarding efficacy. From a unified reinforcement learning perspective aimed at maximizing reward with KL regularization, we introduce three alignment algorithms for flow-based models by extending those from diffusion models. These include two training-time strategies, direct preference optimization for flow (Flow-DPO) and reward-weighted regression for flow (Flow-RWR), and an inference-time technique, Flow-NRG, which applies reward guidance directly to noisy videos. Experimental results indicate that VideoReward significantly outperforms existing reward models, and that Flow-DPO outperforms both Flow-RWR and standard supervised fine-tuning. Additionally, Flow-NRG lets users assign custom weights to multiple objectives during inference, meeting personalized video quality needs. Project page: https://gongyeliu.github.io/videoalign.
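One way to picture Flow-NRG's "reward guidance directly to noisy videos" is to steer each sampling step with the gradient of a user-weighted sum of rewards evaluated on the noisy latent. The toy sketch below uses a finite-difference gradient in place of autograd; the function names, the guidance `scale`, and the sign convention are assumptions for illustration, not the released code:

```python
import numpy as np

def guided_velocity(v, x_t, reward_fns, weights, scale=0.1, eps=1e-4):
    """Adjust a flow-matching velocity v at noisy latent x_t by ascending
    a user-weighted sum of reward functions (multi-objective guidance)."""
    def total_reward(x):
        return sum(w * f(x) for w, f in zip(weights, reward_fns))

    grad = np.zeros_like(x_t)
    flat_x, flat_g = x_t.ravel(), grad.ravel()  # views into x_t and grad
    for i in range(flat_x.size):
        orig = flat_x[i]
        flat_x[i] = orig + eps
        r_plus = total_reward(x_t)
        flat_x[i] = orig - eps
        r_minus = total_reward(x_t)
        flat_x[i] = orig
        flat_g[i] = (r_plus - r_minus) / (2 * eps)  # central difference
    return v + scale * grad
```

Because `weights` is supplied at call time, a user can re-balance the objectives (e.g. motion quality vs. text alignment) per generation without any retraining, which matches the inference-time, multi-objective control the abstract describes.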