Think Before You Diffuse: LLMs-Guided Physics-Aware Video Generation

📅 2025-05-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
Physical plausibility failures—such as implausible motion, incorrect collision responses, and inconsistent gravity—remain a key bottleneck in diffusion-based video generation. To address this, we propose DiffPhy, the first framework to explicitly model physical semantics from text prompts using large language models (LLMs) and guide pre-trained video diffusion models via LLM-driven physical reasoning. Methodologically, DiffPhy introduces a multi-modal supervised joint optimization scheme, constructs the first high-fidelity physical-action video dataset, and enforces a dual constraint balancing physical correctness and text-semantic alignment. Extensive experiments demonstrate that DiffPhy achieves state-of-the-art performance across multiple physics-aware benchmarks, significantly improving the generation quality of critical physical attributes—including motion plausibility, collision fidelity, and gravitational consistency—while preserving textual fidelity.
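The dual constraint described above can be pictured as a weighted training objective. Below is a minimal conceptual sketch in Python; the function name `diffphy_style_loss`, the reward-style scores, and the weights are illustrative assumptions, not the paper's actual formulation.

```python
import torch

def diffphy_style_loss(denoise_loss: torch.Tensor,
                       physics_score: torch.Tensor,
                       text_score: torch.Tensor,
                       lambda_phys: float = 1.0,
                       lambda_text: float = 1.0) -> torch.Tensor:
    """Balance the standard diffusion denoising loss against two reward-style
    terms: an MLLM-derived physical-plausibility score and a text-alignment
    score. Higher scores indicate better physics/alignment, so they are
    subtracted (illustrative sketch only, not the actual DiffPhy objectives)."""
    return (denoise_loss
            - lambda_phys * physics_score
            - lambda_text * text_score)
```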

📝 Abstract
Recent video diffusion models have demonstrated a great capability to generate visually pleasing results, yet synthesizing the correct physical effects in generated videos remains challenging. The complexity of real-world motions, interactions, and dynamics makes learning physics from data difficult. In this work, we propose DiffPhy, a generic framework that enables physically correct and photo-realistic video generation by fine-tuning a pre-trained video diffusion model. Our method leverages large language models (LLMs) to explicitly reason a comprehensive physical context from the text prompt and uses it to guide generation. To incorporate physical context into the diffusion model, we leverage a multimodal large language model (MLLM) as a supervisory signal and introduce a set of novel training objectives that jointly enforce physical correctness and semantic consistency with the input text. We also establish a high-quality physical video dataset containing diverse physical actions and events to facilitate effective fine-tuning. Extensive experiments on public benchmarks demonstrate that DiffPhy produces state-of-the-art results across diverse physics-related scenarios. Our project page is available at https://bwgzk-keke.github.io/DiffPhy/
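As a rough illustration of the "think before you diffuse" step, the sketch below asks an LLM to expand a user prompt into an explicit physical context before it reaches the video diffusion model. The `chat` callable and the instruction wording are assumptions made for illustration; the paper's actual prompting scheme may differ.

```python
def reason_physical_context(chat, user_prompt: str) -> str:
    """Query an LLM for the physics implied by a prompt (objects, forces,
    contacts, expected motion), to be used as extra conditioning for a
    pre-trained video diffusion model."""
    instruction = (
        "Describe the physical entities, forces, contacts, and expected "
        "motion in this scene so a video model can render it plausibly: "
        + user_prompt
    )
    return chat(instruction)
```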
Problem

Research questions and friction points this paper is trying to address.

How can diffusion models generate videos that are physically correct, not just visually pleasing?
How can LLMs reason an explicit physical context from a text prompt to guide generation?
How can MLLM supervision enforce both physical correctness and semantic consistency with the prompt?
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLMs guide physical context reasoning
MLLM enforces physical correctness objectives
High-quality physics video dataset for fine-tuning