🤖 AI Summary
Video generation has advanced significantly in visual fidelity, yet physical consistency remains severely lacking. Existing preference-based optimization methods rely either on costly human annotations or on unreliable reward models. To address this, we propose RDPO, a label-free preference optimization framework. RDPO reverse-samples real-world videos with a pre-trained generator to construct preference pairs that are discriminable by physical correctness. Through statistical distinguishability analysis and multi-stage iterative training, it distills physical priors from authentic dynamic data without supervision, enhancing motion coherence and physical plausibility. RDPO is the first method to enable physics-aware preference learning without human annotations, requiring only raw real videos. Extensive evaluations across multiple benchmarks, together with human studies, demonstrate that RDPO substantially improves both the physical reasonableness and temporal consistency of generated videos, validating its effectiveness and generalizability.
📝 Abstract
Video generation techniques have achieved remarkable advancements in visual quality, yet faithfully reproducing real-world physics remains elusive. Preference-based post-training can improve physical consistency, but it requires either costly human-annotated datasets or reward models that are not yet reliable. To address these challenges, we present Real Data Preference Optimisation (RDPO), an annotation-free framework that distills physical priors directly from real-world videos. Specifically, RDPO reverse-samples real video sequences with a pre-trained generator to automatically build preference pairs that are statistically distinguishable in terms of physical correctness. A multi-stage iterative training schedule then guides the generator to obey physical laws increasingly well. Benefiting from the dynamic information mined from real videos, RDPO significantly improves the action coherence and physical realism of generated videos. Evaluations on multiple benchmarks, together with human studies, demonstrate that RDPO achieves improvements across multiple dimensions. The source code and demonstration of this paper are available at: https://wwenxu.github.io/RDPO/
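To make the preference-optimization step concrete, the sketch below shows a standard DPO-style objective applied to one automatically built pair: a "preferred" sample anchored in a real video (e.g. obtained by reverse sampling) versus a "dispreferred" free generation. This is a minimal illustration, not RDPO's actual objective; the abstract does not specify the loss, and all function and argument names here are hypothetical.

```python
import math

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """DPO-style loss for a single preference pair (illustrative only).

    logp_w / logp_l: policy log-likelihoods of the preferred sample
    (e.g. reverse-sampled from a real video) and the dispreferred one
    (e.g. a free generation). ref_logp_w / ref_logp_l are the frozen
    reference model's log-likelihoods of the same two samples.
    """
    # Implicit-reward margin: how much more the policy (relative to the
    # reference) prefers the physically grounded sample over the other.
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    # -log(sigmoid(margin)), written as log1p(exp(-margin)) for stability;
    # the loss shrinks as the policy learns to favor the preferred sample.
    return math.log1p(math.exp(-margin))

# A tie (no preference either way) gives exactly log 2; once the policy
# favors the real-video-anchored sample, the loss drops below log 2.
print(dpo_loss(0.0, 0.0, 0.0, 0.0))    # log 2 at the tie point
print(dpo_loss(-1.0, -3.0, -2.0, -2.0))  # policy prefers the winner
```

In a full training loop, this per-pair loss would be averaged over a batch of reverse-sampled pairs and minimized with a standard optimizer, with the reference model kept frozen between the multi-stage iterations described above.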