🤖 AI Summary
Existing diffusion-based video generation models produce visually realistic outputs but frequently violate physical laws, because they learn physics only implicitly from text-video data without explicit physical constraints, which makes training costly and generalization limited. This paper proposes a training-free, physics-aware inference framework: it identifies physically implausible motions via counterfactual physical-violation prompting, then applies synchronized directional normalization and trajectory-decoupled denoising during diffusion sampling for plug-and-play physical correction. The resulting lightweight physics-reasoning pipeline significantly improves physical fidelity across diverse scenarios, including gravity, collision, and inertia, without compromising visual realism. Ablation studies validate the efficacy of each component. The approach offers a scalable, training-free paradigm for enhancing the physical plausibility of generative video models.
📝 Abstract
Diffusion models can generate realistic videos, but existing methods rely on implicitly learning physical reasoning from large-scale text-video datasets, which is costly, difficult to scale, and still prone to producing implausible motions that violate fundamental physical laws. We introduce a training-free framework that improves physical plausibility at inference time by explicitly reasoning about implausibility and guiding the generation away from it. Specifically, we employ a lightweight physics-aware reasoning pipeline to construct counterfactual prompts that deliberately encode physics-violating behaviors. We then propose a novel Synchronized Decoupled Guidance (SDG) strategy that leverages these prompts through synchronized directional normalization, to counteract lagged suppression, and trajectory-decoupled denoising, to mitigate cumulative trajectory bias, ensuring that implausible content is suppressed immediately and consistently throughout denoising. Experiments across different physical domains show that our approach substantially enhances physical fidelity while maintaining photorealism, despite requiring no additional training. Ablation studies confirm the complementary effectiveness of the physics-aware reasoning component and SDG, and validate that each of SDG's two designs contributes critically to suppressing implausible content and to the overall gains in physical plausibility. This establishes a new, plug-and-play physics-aware paradigm for video generation.
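The core idea of steering denoising away from a counterfactual, physics-violating prompt resembles negative-prompt guidance in diffusion samplers. The toy sketch below illustrates one plausible reading of "synchronized directional normalization": the suppression direction derived from the counterfactual prediction is normalized so its strength stays consistent across denoising steps, rather than lagging when the raw difference is small. All names (`guided_noise`, `scale`) are hypothetical; the paper's actual SDG update rule is not specified in this abstract.

```python
# Hedged toy sketch, not the paper's implementation: negative-prompt-style
# guidance where the counterfactual direction is normalized before being
# subtracted from the conditional noise prediction.
import numpy as np

def guided_noise(eps_cond, eps_counterfactual, scale=2.0, eps=1e-8):
    """Push the noise prediction away from the counterfactual direction.

    Normalizing the direction keeps the suppression magnitude uniform
    across timesteps (one reading of 'synchronized directional
    normalization'); `scale` controls how hard we steer away.
    """
    direction = eps_counterfactual - eps_cond
    direction = direction / (np.linalg.norm(direction) + eps)
    return eps_cond - scale * direction

# Stand-in noise predictions for a single denoising step.
rng = np.random.default_rng(0)
e_cond = rng.standard_normal((4, 4))          # conditional prediction
e_cf = rng.standard_normal((4, 4))            # counterfactual prediction
out = guided_noise(e_cond, e_cf, scale=2.0)
```

Because the guided prediction moves along the unit vector pointing away from the counterfactual, its distance to the counterfactual prediction grows by exactly `scale`, independent of how far apart the two predictions already were.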