Physical Simulator In-the-Loop Video Generation

📅 2026-03-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses a common failing of AI-generated videos: motion that violates physical laws, producing temporal inconsistencies. To improve physical plausibility and spatio-temporal coherence, the authors propose a framework that integrates a physics simulator into a diffusion-based video generation pipeline. The method first reconstructs the 4D scene and foreground object meshes, then runs physics simulations to obtain physically valid trajectories, which guide the generation process in a closed loop. In addition, a Test-Time Texture Consistency Optimization (TTCO) technique leverages pixel-wise correspondences to adjust features and preserve visual continuity as objects move. Experiments show that the approach substantially improves physical realism and temporal consistency while preserving output quality and generative diversity, outperforming existing state-of-the-art methods.
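The closed loop described above can be sketched in miniature. The snippet below is a hypothetical simplification, not the paper's implementation: a gravity-only point simulator stands in for the full rigid-body simulator, and the "generator" is corrected by one gradient step of an MSE guidance loss toward the simulated, physically valid trajectory. All function names are illustrative.

```python
import numpy as np

def simulate_trajectory(p0, v0, n_frames, dt=1 / 24, g=9.81):
    """Physically valid 2D trajectory under gravity: a hypothetical
    stand-in for the paper's physics simulator. Returns (n_frames, 2)
    object positions."""
    t = np.arange(n_frames) * dt
    accel = np.array([0.0, -g])
    return p0 + v0 * t[:, None] + 0.5 * accel * t[:, None] ** 2

def guidance_loss(generated_pos, simulated_pos):
    """Mean squared deviation of generated motion from the simulated,
    physically consistent trajectory."""
    return float(np.mean((generated_pos - simulated_pos) ** 2))

def guided_update(generated_pos, simulated_pos, step=0.5):
    """One closed-loop correction: nudge generated object positions
    toward the simulated trajectory (negative gradient of the loss)."""
    return generated_pos + step * (simulated_pos - generated_pos)

# Toy usage: a noisy "generated" trajectory is pulled toward physics.
target = simulate_trajectory(np.array([0.0, 2.0]), np.array([1.0, 0.0]), 8)
rng = np.random.default_rng(0)
generated = target + rng.normal(scale=0.1, size=target.shape)
corrected = guided_update(generated, target)
```

In the actual method this guidance acts on the diffusion sampling process rather than directly on positions, but the structure is the same: simulate, measure deviation, correct, repeat.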

📝 Abstract
Recent advances in diffusion-based video generation have achieved remarkable visual realism but still struggle to obey basic physical laws such as gravity, inertia, and collision. Generated objects often move inconsistently across frames, exhibit implausible dynamics, or violate physical constraints, limiting the realism and reliability of AI-generated videos. We address this gap by introducing Physical Simulator In-the-loop Video Generation (PSIVG), a novel framework that integrates a physical simulator into the video diffusion process. Starting from a template video generated by a pre-trained diffusion model, PSIVG reconstructs the 4D scene and foreground object meshes, initializes them within a physical simulator, and generates physically consistent trajectories. These simulated trajectories are then used to guide the video generator toward spatio-temporally physically coherent motion. To further improve texture consistency during object movement, we propose a Test-Time Texture Consistency Optimization (TTCO) technique that adapts text and feature embeddings based on pixel correspondences from the simulator. Comprehensive experiments demonstrate that PSIVG produces videos that better adhere to real-world physics while preserving visual quality and diversity. Project Page: https://vcai.mpi-inf.mpg.de/projects/PSIVG/
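The TTCO idea from the abstract, adapting features so a surface point keeps the same appearance as it moves, can be illustrated with a single gradient step. This is a hypothetical simplification of the technique: the paper optimizes text and feature embeddings inside the diffusion model, whereas here plain per-pixel feature maps are pulled toward the features of their simulator-given corresponding pixels in the previous frame.

```python
import numpy as np

def ttco_step(feat_a, feat_b, corr, lr=0.2):
    """One texture-consistency optimization step (illustrative).

    feat_a, feat_b : (H, W, C) feature maps of two consecutive frames
    corr           : (H, W, 2) integer map from the simulator;
                     corr[y, x] = (y', x') is the frame-A pixel that
                     moved to (y, x) in frame B
    Pulls frame-B features toward the frame-A features of their
    corresponding pixels via a gradient step on 0.5 * ||diff||^2.
    """
    ys, xs = corr[..., 0], corr[..., 1]
    target = feat_a[ys, xs]        # frame-A features warped into frame B
    grad = feat_b - target         # gradient of the consistency loss
    return feat_b - lr * grad

# Toy usage with an identity correspondence map.
rng = np.random.default_rng(1)
H, W, C = 4, 4, 3
feat_a = rng.normal(size=(H, W, C))
yy, xx = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
corr = np.stack([yy, xx], axis=-1)
feat_b = feat_a + rng.normal(scale=0.5, size=(H, W, C))
feat_b_opt = ttco_step(feat_a, feat_b, corr)
```

Iterating such steps at test time shrinks the appearance drift along correspondences, which is the continuity property TTCO targets.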
Problem

Research questions and friction points this paper is trying to address.

physical consistency
video generation
diffusion models
physics violation
spatio-temporal coherence
Innovation

Methods, ideas, or system contributions that make the work stand out.

Physical Simulation
Diffusion-based Video Generation
4D Scene Reconstruction
Texture Consistency Optimization
Physics-aware Generation