🤖 AI Summary
Current video generation models frequently violate physical laws. To address this, we propose a training-free, plug-and-play iterative self-refinement framework that integrates vision-language models (VLMs) and large language models (LLMs) into a multimodal chain-of-thought (MM-CoT) mechanism, enabling physics-aware dynamic prompt refinement and closed-loop correction of generated videos. Our key contribution is the first use of multimodal chain-of-thought reasoning to detect physical inconsistencies and reconstruct prompts, allowing zero-shot, model-agnostic adaptation to diverse video generation architectures. Evaluated on the PhyIQ benchmark, our method raises the Physics-IQ score from 56.31 to 62.38, demonstrating significant and generalizable improvements in the physical plausibility of generated videos.
📝 Abstract
Recent progress in video generation has led to impressive visual quality, yet current models still struggle to produce results that align with real-world physical principles. To address this, we propose an iterative self-refinement framework that leverages large language models and vision-language models to provide physics-aware guidance for video generation. Specifically, we introduce a multimodal chain-of-thought (MM-CoT) process that refines prompts based on feedback about detected physical inconsistencies, progressively enhancing generation quality. The method is training-free and plug-and-play, making it readily applicable to a wide range of video generation models. Experiments on the PhyIQ benchmark show that our method improves the Physics-IQ score from 56.31 to 62.38. We hope this work serves as a preliminary exploration of physics-consistent video generation and offers insights for future research.
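
The closed-loop process described above can be sketched as a simple generate, critique, rewrite loop. The Python sketch below is illustrative only: the callables `generate_video`, `vlm_critique`, and `llm_refine` are hypothetical stand-ins for the video generation backbone, the VLM inconsistency detector, and the LLM prompt rewriter, and are not names from the paper.

```python
# Minimal sketch of the iterative MM-CoT refinement loop, assuming hypothetical
# callables for the backbone, VLM critic, and LLM prompt rewriter.
from typing import Callable, List

def refine_video(
    prompt: str,
    generate_video: Callable[[str], object],           # any text-to-video model (model-agnostic)
    vlm_critique: Callable[[object, str], List[str]],  # VLM lists physical inconsistencies in the clip
    llm_refine: Callable[[str, List[str]], str],       # LLM rewrites the prompt from that feedback
    max_iters: int = 3,
):
    """Training-free closed loop: generate, critique, rewrite prompt, regenerate."""
    video = generate_video(prompt)
    for _ in range(max_iters):
        issues = vlm_critique(video, prompt)   # multimodal chain-of-thought critique of the video
        if not issues:                         # no physical violations detected: stop early
            break
        prompt = llm_refine(prompt, issues)    # physics-aware prompt reconstruction
        video = generate_video(prompt)         # regenerate with the refined prompt
    return video, prompt
```

Because the loop only touches the prompt and never the model weights, any text-to-video system exposing a `generate_video`-style interface could, in principle, be plugged in unchanged.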