Bootstrapping Physics-Grounded Video Generation through VLM-Guided Iterative Self-Refinement

📅 2025-11-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current video generation models frequently violate physical laws. To address this, we propose a training-free, plug-and-play iterative self-refinement framework that integrates vision-language models (VLMs) and large language models (LLMs) to establish a multimodal chain-of-thought (MM-CoT) mechanism. This enables physics-aware dynamic prompt refinement and closed-loop correction of generated videos. Our key contribution is the first introduction of multimodal chain-of-thought reasoning for detecting physical inconsistencies and reconstructing prompts, enabling zero-shot, model-agnostic adaptation to diverse video generation architectures. Evaluated on the PhyIQ benchmark, our method elevates the Physics-IQ score from 56.31 to 62.38, demonstrating significant and generalizable improvements in the physical plausibility of generated videos.

📝 Abstract
Recent progress in video generation has led to impressive visual quality, yet current models still struggle to produce results that align with real-world physical principles. To this end, we propose an iterative self-refinement framework that leverages large language models and vision-language models to provide physics-aware guidance for video generation. Specifically, we introduce a multimodal chain-of-thought (MM-CoT) process that refines prompts based on feedback from physical inconsistencies, progressively enhancing generation quality. This method is training-free and plug-and-play, making it readily applicable to a wide range of video generation models. Experiments on the PhyIQ benchmark show that our method improves the Physics-IQ score from 56.31 to 62.38. We hope this work serves as a preliminary exploration of physics-consistent video generation and may offer insights for future research.
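The closed loop described in the abstract (generate, critique with a VLM, rewrite the prompt with an LLM, regenerate) can be sketched in a few lines. This is a minimal illustration only; the function names `generate_video`, `vlm_critique`, and `llm_refine_prompt` are hypothetical placeholders for the paper's components, not APIs from the paper or any library.

```python
# Sketch of the training-free iterative self-refinement loop.
# All callables are hypothetical stand-ins supplied by the caller:
#   generate_video(prompt)            -> video (any video-generation backbone)
#   vlm_critique(video)               -> list of detected physical inconsistencies
#   llm_refine_prompt(prompt, issues) -> prompt rewritten via MM-CoT-style reasoning

def refine_until_consistent(prompt, generate_video, vlm_critique,
                            llm_refine_prompt, max_iters=3):
    """Closed-loop correction: regenerate until the VLM finds no violations."""
    video = generate_video(prompt)
    for _ in range(max_iters):
        issues = vlm_critique(video)        # physics-aware feedback from the VLM
        if not issues:                      # stop early once no violations remain
            break
        prompt = llm_refine_prompt(prompt, issues)  # reconstruct the prompt
        video = generate_video(prompt)              # regenerate with refined prompt
    return prompt, video
```

Because the loop only calls the three injected functions, it is plug-and-play in the sense the paper claims: any video generator, VLM critic, or LLM rewriter can be swapped in without retraining.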
Problem

Research questions and friction points this paper is trying to address.

Video generation models struggle with real-world physical principles
Lack physics-aware guidance for generating physically consistent videos
Need training-free methods to enhance physics alignment in generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Iterative self-refinement framework for video generation
Multimodal chain-of-thought process refines prompts
Training-free plug-and-play physics-aware guidance