🤖 AI Summary
Text-to-video generation models frequently violate physical laws, producing distorted motion and incoherent object interactions that limit their deployment in high-reliability applications such as embodied AI, robotics, and simulation. To address this, we propose PhysicsRM, the first dual-dimensional physics reward model, which separately quantifies *intra-object stability* and *inter-object dynamical interaction*. Building on it, we introduce PhyDPO, a physics-aware direct preference optimization framework that enables model-agnostic, scalable alignment with physical consistency. PhyDPO integrates a dual-reward mechanism, contrastive feedback learning, and physics-guided reweighting, and is compatible with both video diffusion models and video Transformers. Together, these components form PhysCorr, our unified framework for physically consistent video generation. Extensive experiments demonstrate that PhysCorr substantially improves physical plausibility across multiple benchmarks while preserving visual quality and semantic fidelity, establishing a new paradigm for trustworthy, physically grounded video generation.
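As a concrete illustration, the sketch below shows one way a physics-guided DPO objective of this shape could look in PyTorch. Everything here is an assumption for exposition: the function names (`physics_reward`, `phy_dpo_loss`), the convex combination of the two reward dimensions, and the gap-based reweighting scheme are illustrative stand-ins, not the paper's actual implementation.

```python
# Minimal sketch of a physics-guided DPO objective (all names hypothetical).
import torch
import torch.nn.functional as F

def physics_reward(stability: torch.Tensor, interaction: torch.Tensor,
                   alpha: float = 0.5) -> torch.Tensor:
    """Combine the two reward dimensions into one scalar per video.

    `stability` and `interaction` are per-video scores, e.g. from a learned
    reward model; the convex combination here is an assumed design choice.
    """
    return alpha * stability + (1.0 - alpha) * interaction

def phy_dpo_loss(logp_w: torch.Tensor, logp_l: torch.Tensor,
                 ref_logp_w: torch.Tensor, ref_logp_l: torch.Tensor,
                 reward_w: torch.Tensor, reward_l: torch.Tensor,
                 beta: float = 0.1) -> torch.Tensor:
    """DPO loss over (winner, loser) video pairs, reweighted by the
    physics-reward gap so pairs with a clearer physics margin dominate.

    logp_* are the policy's sequence-level log-likelihoods of the preferred
    and dispreferred videos; ref_logp_* come from the frozen reference model.
    """
    # Standard DPO implicit-reward margin.
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    per_pair = -F.logsigmoid(margin)

    # Physics-guided reweighting: a larger reward gap yields a larger weight.
    # Softmax-normalized gaps (rescaled to mean 1) are an illustrative choice.
    gap = (reward_w - reward_l).clamp(min=0.0)
    weights = torch.softmax(gap, dim=0) * gap.numel()
    return (weights.detach() * per_pair).mean()
```

The structural idea is that the standard DPO margin is modulated by how decisively the physics reward separates each pair, so the gradient signal concentrates on pairs with an unambiguous physical preference.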
📝 Abstract
Recent advances in text-to-video generation have achieved impressive perceptual quality, yet generated content often violates fundamental principles of physical plausibility, manifesting as implausible object dynamics, incoherent interactions, and unrealistic motion patterns. Such failures hinder the deployment of video generation models in embodied AI, robotics, and simulation-intensive domains. To bridge this gap, we propose PhysCorr, a unified framework for modeling, evaluating, and optimizing physical consistency in video generation. Specifically, we introduce PhysicsRM, the first dual-dimensional reward model that quantifies both intra-object stability and inter-object interactions. On this foundation, we develop PhyDPO, a novel direct preference optimization pipeline that leverages contrastive feedback and physics-aware reweighting to guide generation toward physically coherent outputs. Our approach is model-agnostic and scalable, enabling seamless integration into a wide range of video diffusion and Transformer-based backbones. Extensive experiments across multiple benchmarks demonstrate that PhysCorr achieves significant improvements in physical realism while preserving visual fidelity and semantic alignment. This work takes a critical step toward physically grounded and trustworthy video generation.
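For orientation, here is a hedged sketch of how a dual-dimensional reward model in the style of PhysicsRM might be used to assemble contrastive preference pairs. The interface (`reward_model(videos)` returning per-video stability and interaction scores) and the best-versus-worst pairing rule are assumptions, not details taken from the paper.

```python
# Hypothetical sketch: scoring candidate videos for one prompt with a
# dual-dimensional reward model and selecting a contrastive pair.
import torch

def score_videos(reward_model, videos: torch.Tensor):
    """Assumed interface: the reward model maps a batch of candidate videos
    to per-video (stability, interaction) score tensors."""
    with torch.no_grad():
        stability, interaction = reward_model(videos)
    return stability, interaction

def build_preference_pair(stability: torch.Tensor, interaction: torch.Tensor,
                          alpha: float = 0.5):
    """Rank candidates by combined physics reward and pair the best against
    the worst (one simple contrastive-feedback scheme)."""
    reward = alpha * stability + (1.0 - alpha) * interaction
    order = torch.argsort(reward, descending=True)
    winner, loser = order[0].item(), order[-1].item()
    return winner, loser, reward
```

Pairs selected this way, together with their reward gap, would then feed a reweighted preference loss such as the one sketched above.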