PhysAlign: Physics-Coherent Image-to-Video Generation through Feature and 3D Representation Alignment

📅 2026-03-14
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Existing video diffusion models often violate fundamental physical intuitions due to a lack of physical consistency, limiting their practical applicability. To address this, this work proposes the first unified physical latent space that integrates explicit 3D physical annotations with priors from video foundation models. By constructing a synthetic data pipeline grounded in rigid-body simulation and incorporating 3D geometric constraints alongside Gram matrix–driven spatiotemporal alignment, the approach effectively mitigates the scarcity of physically annotated video data. The resulting method significantly outperforms current approaches in complex physical reasoning and temporal stability while preserving high-quality zero-shot visual generation capabilities.
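The summary mentions a synthetic data pipeline grounded in rigid-body simulation that exports fine-grained physics annotations. As a rough illustration of what per-frame annotations from such a pipeline might look like, here is a minimal toy simulation of a ball bouncing under gravity; the function name and annotation fields are hypothetical and not taken from the paper.

```python
import numpy as np

def simulate_bouncing_ball(steps=60, dt=1/30, g=9.8, restitution=0.8):
    # Toy rigid-body trajectory: a ball falling under gravity and bouncing
    # on the ground plane y = 0. Each frame record carries the kind of
    # fine-grained physics annotation a synthetic pipeline could export
    # alongside rendered video frames. (Illustrative only.)
    y, vy = 1.0, 0.0
    frames = []
    for t in range(steps):
        vy -= g * dt                 # apply gravity
        y += vy * dt                 # integrate position
        if y < 0.0:                  # ground contact: reflect with energy loss
            y = 0.0
            vy = -vy * restitution
        frames.append({'t': t * dt, 'position_y': y, 'velocity_y': vy})
    return frames
```

Each record pairs a timestamp with ground-truth kinematic state, which is the sort of supervision that real video data rarely provides.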

📝 Abstract
Video Diffusion Models (VDMs) offer a promising approach for simulating dynamic scenes and environments, with broad applications in robotics and media generation. However, existing models often generate temporally incoherent content that violates basic physical intuition, significantly limiting their practical applicability. We propose PhysAlign, an efficient framework for physics-coherent image-to-video (I2V) generation that explicitly addresses this limitation. To overcome the critical scarcity of physics-annotated videos, we first construct a fully controllable synthetic data generation pipeline based on rigid-body simulation, yielding a highly-curated dataset with accurate, fine-grained physics and 3D annotations. Leveraging this data, PhysAlign constructs a unified physical latent space by coupling explicit 3D geometry constraints with a Gram-based spatio-temporal relational alignment that extracts kinematic priors from video foundation models. Extensive experiments demonstrate that PhysAlign significantly outperforms existing VDMs on tasks requiring complex physical reasoning and temporal stability, without compromising zero-shot visual quality. PhysAlign shows the potential to bridge the gap between raw visual synthesis and rigid-body kinematics, establishing a practical paradigm for genuinely physics-grounded video generation. The project page is available at https://physalign.github.io/PhysAlign.
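The abstract describes a Gram-based spatio-temporal relational alignment that distills kinematic priors from a video foundation model. The paper does not give the exact formulation, but the standard Gram-matrix construction (pairwise channel correlations of feature maps, as in style-transfer losses) suggests a sketch like the following; the function names and the mean-squared loss are assumptions, not the authors' definition.

```python
import numpy as np

def gram_matrix(features):
    # features: (T, C, H, W) spatio-temporal feature maps.
    # Returns per-frame channel-correlation (Gram) matrices of shape (T, C, C),
    # normalized by the number of entries so scale is comparable across layers.
    T, C, H, W = features.shape
    f = features.reshape(T, C, H * W)
    return np.einsum('tcn,tdn->tcd', f, f) / (C * H * W)

def gram_alignment_loss(student_feats, teacher_feats):
    # Mean squared difference between the student's and the frozen
    # foundation model's Gram matrices: aligning relational structure
    # rather than raw feature values. (Hypothetical loss form.)
    gs = gram_matrix(student_feats)
    gt = gram_matrix(teacher_feats)
    return float(np.mean((gs - gt) ** 2))
```

Matching Gram matrices rather than features directly constrains only the relational (second-order) statistics, which is one plausible way to transfer motion structure without forcing the generator to copy the teacher's representation.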
Problem

Research questions and friction points this paper is trying to address.

temporal coherence
physical plausibility
video diffusion models
physics-coherent generation
image-to-video
Innovation

Methods, ideas, or system contributions that make the work stand out.

Physics-Coherent Video Generation
3D Representation Alignment
Video Diffusion Models
Synthetic Physics Dataset
Spatio-Temporal Relational Alignment