🤖 AI Summary
Existing video generation methods predominantly operate in the 2D pixel space without explicit 3D structural constraints, leading to temporally inconsistent geometry, physically implausible motion, and structural artifacts. To address this, we propose a latent-space geometry-aware video generation framework built upon latent diffusion models. Our method integrates a frame-wise depth prediction module and introduces a multi-view geometric loss that aligns predicted depth maps across frames within a shared 3D coordinate system, enabling joint optimization of appearance synthesis and 3D structural modeling. Leveraging a diffusion Transformer architecture, we unify a depth prediction network with an image-level latent encoder and impose latent-space depth regularization. Extensive experiments demonstrate that our approach significantly improves the geometric consistency, temporal stability, and physical plausibility of generated videos across multiple benchmarks, outperforming current state-of-the-art methods.
📄 Abstract
Recent advances in video generation have enabled the synthesis of high-quality and visually realistic clips using diffusion transformer models. However, most existing approaches operate purely in the 2D pixel space and lack explicit mechanisms for modeling 3D structure, often resulting in temporally inconsistent geometry, implausible motion, and structural artifacts. In this work, we introduce geometric regularization losses into video generation by augmenting latent diffusion models with per-frame depth prediction. We adopt depth as the geometric representation because of recent progress in monocular depth prediction and its compatibility with image-based latent encoders. Specifically, to enforce structural consistency over time, we propose a multi-view geometric loss that aligns the predicted depth maps across frames within a shared 3D coordinate system. Our method bridges the gap between appearance generation and 3D structure modeling, leading to improved spatio-temporal coherence, shape consistency, and physical plausibility. Experiments across multiple datasets show that our approach produces significantly more stable and geometrically consistent results than existing baselines.
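To make the multi-view geometric loss concrete, the sketch below unprojects each frame's predicted depth map into a shared 3D coordinate system using known camera intrinsics and poses, then penalizes the squared distance to a reference frame's point cloud. This is only an illustration under strong simplifying assumptions, not the paper's exact formulation: all function and variable names (`unproject`, `multiview_depth_loss`, `K`, `pose`) are hypothetical, and it assumes a static scene with per-pixel correspondence across frames.

```python
import numpy as np

def unproject(depth, K, pose):
    """Lift a (H, W) depth map into world-space 3D points.

    K: (3, 3) camera intrinsics; pose: (4, 4) camera-to-world transform.
    Hypothetical helper -- the paper does not specify this exact interface.
    """
    H, W = depth.shape
    v, u = np.mgrid[0:H, 0:W]
    # Homogeneous pixel coordinates (u, v, 1) for every pixel.
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).astype(float)
    rays = pix @ np.linalg.inv(K).T            # back-project to camera-space rays
    pts_cam = rays * depth.reshape(-1, 1)      # scale each ray by its depth
    pts_h = np.concatenate([pts_cam, np.ones((pts_cam.shape[0], 1))], axis=1)
    return (pts_h @ pose.T)[:, :3]             # transform into world coordinates

def multiview_depth_loss(depths, Ks, poses):
    """Mean squared distance between each frame's point cloud and the
    first frame's, assuming static geometry so that the same pixel index
    corresponds across frames (a simplification for illustration)."""
    ref = unproject(depths[0], Ks[0], poses[0])
    loss = 0.0
    for d, K, T in zip(depths[1:], Ks[1:], poses[1:]):
        pts = unproject(d, K, T)
        loss += np.mean(np.sum((pts - ref) ** 2, axis=-1))
    return loss / max(len(depths) - 1, 1)
```

In training, a differentiable version of this loss (e.g. in an autodiff framework) would be added to the diffusion objective so that gradients flow into the depth prediction module and the generator jointly; moving scenes would further require correspondence from estimated flow or tracked points rather than shared pixel indices.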