PhysVideo: Physically Plausible Video Generation with Cross-View Geometry Guidance

📅 2026-03-19
🤖 AI Summary
This work addresses the challenge that existing video generation methods struggle to model realistic three-dimensional physical motion due to their reliance on viewpoint-limited two-dimensional projections. To overcome this limitation, the authors propose a two-stage generative framework: first, Phys4View generates physically aware orthogonal foreground videos, which are then composited with background context by VideoSyn to produce complete scenes. Key innovations include a physics-aware attention mechanism, a geometry-enhanced cross-view attention module, and PhysMV—the first large-scale, multi-view synchronized video dataset comprising 160K sequences. Experimental results demonstrate that the proposed approach significantly outperforms current methods in terms of physical plausibility and spatiotemporal consistency.
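The second stage combines the stage-one foreground videos with background context. In the paper this interaction is learned by VideoSyn, but the basic idea of foreground-guided synthesis can be illustrated with a simple alpha-composite sketch (the function name, fixed compositing rule, and array layout here are assumptions for illustration, not the paper's learned model):

```python
import numpy as np

def composite_foreground(fg, alpha, bg):
    """Blend a generated foreground video over background frames.

    fg, bg: (T, H, W, 3) float frames in [0, 1]
    alpha:  (T, H, W, 1) per-pixel foreground opacity

    A learned model like VideoSyn would condition on `fg` rather than
    apply this fixed rule; this only shows the compositing geometry.
    """
    return alpha * fg + (1.0 - alpha) * bg
```

In the actual framework, the foreground videos act as guidance signals so the model can also synthesize foreground-background interactions (shadows, occlusions) that a fixed blend cannot produce.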

📝 Abstract
Recent progress in video generation has led to substantial improvements in visual fidelity, yet ensuring physically consistent motion remains a fundamental challenge. Intuitively, this limitation can be attributed to the fact that real-world object motion unfolds in three-dimensional space, while video observations provide only partial, view-dependent projections of such dynamics. To address this issue, we propose PhysVideo, a two-stage framework that first generates physics-aware orthogonal foreground videos and then synthesizes full videos with background. In the first stage, Phys4View leverages physics-aware attention to capture the influence of physical attributes on motion dynamics, and enhances spatio-temporal consistency by incorporating geometry-enhanced cross-view attention and temporal attention. In the second stage, VideoSyn uses the generated foreground videos as guidance and learns the interactions between foreground dynamics and background context for controllable video synthesis. To support training, we construct PhysMV, a dataset containing 40K scenes, each captured from four orthogonal viewpoints, for a total of 160K video sequences. Extensive experiments demonstrate that PhysVideo significantly improves physical realism and spatio-temporal coherence over existing video generation methods. Home page: https://anonymous.4open.science/w/Phys4D/.
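The geometry-enhanced cross-view attention described above lets tokens in one viewpoint attend to tokens in the other orthogonal views, with a geometric bias encouraging attention between spatially corresponding positions. A minimal sketch of that pattern (the function names, the additive-bias form, and the averaging over views are assumptions for illustration; the paper's exact formulation may differ):

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_view_attention(q, kv_views, geo_bias):
    """Attend from one view's tokens to the tokens of the other views.

    q:        (N, d) tokens of the current view
    kv_views: list of V arrays, each (N, d), tokens of the other views
    geo_bias: (V, N, N) additive logits biasing attention toward
              geometrically corresponding token pairs (assumed form)

    Returns the cross-view context averaged over the other views.
    """
    d = q.shape[-1]
    out = np.zeros_like(q)
    for kv, bias in zip(kv_views, geo_bias):
        # Scaled dot-product logits plus the geometric bias term.
        attn = softmax(q @ kv.T / np.sqrt(d) + bias, axis=-1)
        out += attn @ kv
    return out / len(kv_views)
```

With zero bias this reduces to ordinary cross-attention; a strong bias concentrates each token's attention on its geometrically matched positions in the other views, which is what enforces cross-view consistency.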
Problem

Research questions and friction points this paper is trying to address.

physically consistent motion
video generation
3D motion dynamics
view-dependent projection
spatio-temporal coherence
Innovation

Methods, ideas, or system contributions that make the work stand out.

physically plausible video generation
cross-view geometry
physics-aware attention
spatio-temporal consistency
multi-view video synthesis