🤖 AI Summary
Existing video generation models often violate physical laws, producing unreliable outputs in real-world scenarios. To address this, the work proposes PhysVid, a physics-aware local conditioning mechanism that injects fine-grained descriptions of physical states, interactions, and constraints into short, temporally contiguous chunks of frames. A chunk-aware cross-attention module fuses these local signals with the global prompt during training, while negative physics prompts introduced at inference suppress physically implausible dynamics. By combining locally grounded, physics-informed conditioning with negative prompt guidance, the approach substantially improves the physical plausibility of generated videos, with gains of approximately 33% and 8% in physical commonsense scores on the VideoPhy and VideoPhy2 benchmarks, respectively.
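The paper text here does not include a reference implementation, so the following is only a minimal PyTorch sketch of how chunk-aware cross-attention *might* fuse per-chunk physics descriptions with the global prompt: each chunk of video tokens attends to its own local physics embedding concatenated with the global prompt embedding. All names, shapes, and the concatenation strategy (`ChunkAwareCrossAttention`, `local_ctx`, etc.) are assumptions for illustration, not the authors' code.

```python
import torch
import torch.nn as nn

class ChunkAwareCrossAttention(nn.Module):
    """Illustrative sketch: each temporal chunk of video tokens cross-attends
    to [global prompt tokens ; that chunk's local physics tokens]."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, video_tokens, global_ctx, local_ctx):
        # video_tokens: (B, N, D) flattened spatio-temporal tokens
        # global_ctx:   (B, Lg, D) global prompt embedding
        # local_ctx:    (B, C, Ll, D) one physics description per chunk (C chunks)
        B, N, D = video_tokens.shape
        C = local_ctx.shape[1]
        assert N % C == 0, "tokens must split evenly into chunks"
        tokens_per_chunk = N // C
        outputs = []
        for c in range(C):
            q = video_tokens[:, c * tokens_per_chunk:(c + 1) * tokens_per_chunk]
            # fuse this chunk's local physics signal with the global prompt
            kv = torch.cat([global_ctx, local_ctx[:, c]], dim=1)
            out, _ = self.attn(q, kv, kv)
            outputs.append(out)
        return torch.cat(outputs, dim=1)

# toy usage: 2 videos, 4 chunks of 16 tokens each, 64-dim embeddings
x = torch.randn(2, 64, 64)
g = torch.randn(2, 10, 64)
l = torch.randn(2, 4, 8, 64)
y = ChunkAwareCrossAttention(dim=64)(x, g, l)  # -> (2, 64, 64)
```

A per-chunk loop keeps the sketch readable; a real implementation would likely batch chunks with a block-diagonal attention mask instead.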
📝 Abstract
Generative video models achieve high visual fidelity but often violate basic physical principles, limiting reliability in real-world settings. Prior attempts to inject physics rely on conditioning: frame-level signals are domain-specific and short-horizon, while global text prompts are coarse and noisy, missing fine-grained dynamics. We present PhysVid, a physics-aware local conditioning scheme that operates over temporally contiguous chunks of frames. Each chunk is annotated with physics-grounded descriptions of states, interactions, and constraints, which are fused with the global prompt via chunk-aware cross-attention during training. At inference, we introduce negative physics prompts (descriptions of locally relevant law violations) to steer generation away from implausible trajectories. On VideoPhy, PhysVid improves physical commonsense scores by $\approx 33\%$ over baseline video generators, and by up to $\approx 8\%$ on VideoPhy2. These results show that local, physics-aware guidance substantially increases physical plausibility in generative video and marks a step toward physics-grounded video models.
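The abstract does not specify how negative physics prompts enter the sampler. A common way to realize negative prompts in diffusion models is classifier-free guidance with the negative prompt substituted for the unconditional embedding, i.e. $\hat{\epsilon} = \epsilon_\theta(x_t, c_{\text{neg}}) + s\,(\epsilon_\theta(x_t, c_{\text{pos}}) - \epsilon_\theta(x_t, c_{\text{neg}}))$. The sketch below assumes that standard recipe and a noise-prediction model; `model`, `pos_ctx`, `neg_phys_ctx`, and `scale` are hypothetical names, not the paper's API.

```python
import torch

def physics_guided_step(model, x_t, t, pos_ctx, neg_phys_ctx, scale: float = 7.5):
    """One denoising step steered away from a negative physics prompt.

    Standard negative-prompt variant of classifier-free guidance: the
    negative-prompt prediction replaces the unconditional one, so the update
    direction moves away from physically implausible dynamics. Argument
    names are illustrative, not from the paper.
    """
    eps_pos = model(x_t, t, context=pos_ctx)       # prompt + local physics text
    eps_neg = model(x_t, t, context=neg_phys_ctx)  # e.g. "objects interpenetrate"
    return eps_neg + scale * (eps_pos - eps_neg)
```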