PhyRPR: Training-Free Physics-Constrained Video Generation

📅 2026-01-14
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the challenge that existing video generation models struggle to satisfy physical constraints due to the tight coupling of physical reasoning and visual synthesis within a single stage. To overcome this limitation, the authors propose PhyRPR, a three-stage framework that explicitly decouples these processes without requiring additional training. First, PhyReason performs physical state inference and generates keyframes; next, PhyPlan constructs a controllable coarse-grained motion skeleton; finally, PhyRefine injects this skeleton into the diffusion sampling process via a latent fusion strategy, preserving physical dynamics while enhancing visual fidelity. By integrating large multimodal models, image generators, deterministic motion planning, and diffusion models, PhyRPR significantly improves both the physical plausibility and motion controllability of generated videos across diverse physics-constrained scenarios.

Technology Category

Application Category

📝 Abstract
Recent diffusion-based video generation models can synthesize visually plausible videos, yet they often struggle to satisfy physical constraints. A key reason is that most existing approaches remain single-stage: they entangle high-level physical understanding with low-level visual synthesis, making it hard to generate content that requires explicit physical reasoning. To address this limitation, we propose a training-free three-stage pipeline, PhyRPR: PhyReason–PhyPlan–PhyRefine, which decouples physical understanding from visual synthesis. Specifically, PhyReason uses a large multimodal model for physical state reasoning and an image generator for keyframe synthesis; PhyPlan deterministically synthesizes a controllable coarse motion scaffold; and PhyRefine injects this scaffold into diffusion sampling via a latent fusion strategy to refine appearance while preserving the planned dynamics. This staged design enables explicit physical control during generation. Extensive experiments under physics constraints show that our method consistently improves physical plausibility and motion controllability.
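The latent fusion idea in PhyRefine — blending a noised version of the planned motion scaffold into the sample during the early, high-noise steps of diffusion sampling, then letting the model refine appearance alone in later steps — can be sketched as below. This is a minimal illustration, not the paper's implementation: the function name `latent_fusion_step`, the noise schedule, and the parameters `fusion_cutoff` and `alpha` are all hypothetical simplifications.

```python
import numpy as np

def latent_fusion_step(x_t, scaffold_latent, t, num_steps,
                       fusion_cutoff=0.5, alpha=0.7):
    """Blend a noised motion-scaffold latent into the current diffusion
    sample x_t during the early (high-noise) steps, so the planned
    dynamics constrain the sample while appearance stays free to refine.

    Hypothetical sketch: schedule and parameters are illustrative only.
    """
    progress = t / num_steps  # 1.0 = start of sampling (most noise)
    if progress > fusion_cutoff:
        # Noise the scaffold to the current noise level (toy schedule).
        sigma = progress
        noise = np.random.randn(*scaffold_latent.shape)
        noised_scaffold = np.sqrt(1.0 - sigma**2) * scaffold_latent + sigma * noise
        # Weighted fusion: keep planned dynamics, leave room for the
        # denoiser to fill in visual detail.
        return alpha * noised_scaffold + (1.0 - alpha) * x_t
    # Late steps: no injection, pure model refinement for visual fidelity.
    return x_t
```

In a full sampler this function would be called once per denoising step, between the model's noise prediction and the scheduler update, so the scaffold only shapes the coarse layout early on.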
Problem

Research questions and friction points this paper is trying to address.

physics-constrained video generation
physical plausibility
diffusion models
video synthesis
physical reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

training-free
physics-constrained generation
diffusion models
motion planning
multimodal reasoning
🔎 Similar Papers
No similar papers found.
Yibo Zhao
State Key Lab of CAD&CG, Zhejiang University
Hengjia Li
Zhejiang University
image generation · video generation
Xiaofei He
Professor of Computer Science, Zhejiang University
machine learning · computer vision · data mining
Boxi Wu
State Key Lab of CAD&CG, Zhejiang University