AI Summary
Existing video diffusion models often violate fundamental physical intuitions due to a lack of physical consistency, limiting their practical applicability. To address this, this work proposes the first unified physical latent space that integrates explicit 3D physical annotations with priors from video foundation models. By constructing a synthetic data pipeline grounded in rigid-body simulation and incorporating 3D geometric constraints alongside Gram matrix-driven spatiotemporal alignment, the approach effectively mitigates the scarcity of physically annotated video data. The resulting method significantly outperforms current approaches in complex physical reasoning and temporal stability while preserving high-quality zero-shot visual generation capabilities.
Abstract
Video Diffusion Models (VDMs) offer a promising approach for simulating dynamic scenes and environments, with broad applications in robotics and media generation. However, existing models often generate temporally incoherent content that violates basic physical intuition, significantly limiting their practical applicability. We propose PhysAlign, an efficient framework for physics-coherent image-to-video (I2V) generation that explicitly addresses this limitation. To overcome the critical scarcity of physics-annotated videos, we first construct a fully controllable synthetic data generation pipeline based on rigid-body simulation, yielding a highly curated dataset with accurate, fine-grained physics and 3D annotations. Leveraging this data, PhysAlign constructs a unified physical latent space by coupling explicit 3D geometry constraints with a Gram-based spatio-temporal relational alignment that extracts kinematic priors from video foundation models. Extensive experiments demonstrate that PhysAlign significantly outperforms existing VDMs on tasks requiring complex physical reasoning and temporal stability, without compromising zero-shot visual quality. PhysAlign shows the potential to bridge the gap between raw visual synthesis and rigid-body kinematics, establishing a practical paradigm for genuinely physics-grounded video generation. The project page is available at https://physalign.github.io/PhysAlign.
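The abstract's "Gram-based spatio-temporal relational alignment" is not spelled out here, but the general idea of a Gram-matrix alignment objective can be sketched: compute channel-wise Gram matrices over the spatio-temporal positions of a feature volume for both the model being trained and a frozen video foundation model, then penalize their difference. This is a minimal NumPy sketch under that assumption; the function names, shapes, and the MSE-style loss are illustrative, not the paper's actual formulation.

```python
import numpy as np

def gram_matrix(feats):
    """Channel-wise Gram matrix of a feature map.

    feats: (C, N) array of C channels over N spatio-temporal
    positions (e.g. N = T*H*W for a flattened video feature volume).
    Returns a (C, C) matrix of channel correlations, normalized by N.
    """
    c, n = feats.shape
    return feats @ feats.T / n

def gram_alignment_loss(student_feats, teacher_feats):
    """Mean squared difference between the two Gram matrices.

    A standard relational-alignment objective; hypothetical here,
    since the exact loss used by PhysAlign is not given in the abstract.
    """
    g_s = gram_matrix(student_feats)
    g_t = gram_matrix(teacher_feats)
    return float(np.mean((g_s - g_t) ** 2))

# Toy check: identical features align perfectly (zero loss),
# while independent random features do not.
rng = np.random.default_rng(0)
f_teacher = rng.standard_normal((8, 64))   # 8 channels, 64 positions
f_student = rng.standard_normal((8, 64))

zero_loss = gram_alignment_loss(f_teacher, f_teacher)
some_loss = gram_alignment_loss(f_student, f_teacher)
```

Because the Gram matrix discards the spatial ordering of positions and keeps only pairwise channel correlations, such a loss aligns relational structure (how feature channels co-vary over space and time) rather than forcing per-pixel feature matching.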