Geometry-as-context: Modulating Explicit 3D in Scene-consistent Video Generation to Geometry Context

📅 2026-02-25
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Existing methods for scene-consistent video generation often suffer from error accumulation because they rely on external memory, non-differentiable operations, or decoupled multi-model architectures, which degrades consistency. This work proposes a "Geometry-as-Context" framework that integrates geometric information as dynamic context within an autoregressive video generation process, enabling end-to-end training by alternating between estimating the current view's geometry and rendering novel views. Key innovations include a camera-gated attention mechanism to enhance pose awareness, interleaved training of geometry and RGB sequences, and stochastic dropping of geometric context during training to support pure-RGB inference. Experiments demonstrate that the proposed method significantly outperforms existing approaches under both one-way and round-trip camera trajectories, with substantial improvements in scene consistency and camera-control accuracy.
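The alternating loop the summary describes — estimate geometry for the current view, optionally keep it as context, then restore the novel-view frame — can be sketched as below. This is a minimal illustration, not the paper's implementation: the `estimate_geometry` and `restore_novel_view` callables, the context representation, and the `p_drop` rate are all hypothetical stand-ins for the autoregressive model's components.

```python
import random

def generate_scene_video(estimate_geometry, restore_novel_view, trajectory,
                         p_drop=0.3, training=True, seed=0):
    """Sketch of the alternating geometry/RGB loop (hypothetical interfaces).

    estimate_geometry(context, pose)  -> geometry for the current view (step 1)
    restore_novel_view(context, pose) -> RGB frame for the novel view (step 2)
    """
    rng = random.Random(seed)
    context = []  # interleaved image/geometry context (text context omitted for brevity)
    frames = []
    for pose in trajectory:
        geometry = estimate_geometry(context, pose)
        # Stochastically drop geometry from the context during training so the
        # model also learns to generate from RGB-only context at inference time.
        if not training or rng.random() >= p_drop:
            context.append(("geometry", geometry))
        frame = restore_novel_view(context, pose)
        context.append(("rgb", frame))
        frames.append(frame)
    return frames
```

At inference, passing `training=False` keeps every estimated geometry in context, while a model trained with dropping can also run with geometry omitted entirely.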

๐Ÿ“ Abstract
Scene-consistent video generation aims to create videos that explore 3D scenes based on a camera trajectory. Previous methods rely on video generation models with external memory for consistency, or on iterative 3D reconstruction and inpainting, which accumulate errors during inference due to incorrect intermediary outputs, non-differentiable processes, and separate models. To overcome these limitations, we introduce "geometry-as-context". It iteratively completes the following steps using an autoregressive camera-controlled video generation model: (1) estimates the geometry of the current view necessary for 3D reconstruction, and (2) simulates and restores novel-view images rendered by the 3D scene. Under this multi-task framework, we develop the camera-gated attention module to enhance the model's capability to effectively leverage camera poses. During the training phase, text contexts are utilized to ascertain whether geometric or RGB images should be generated. To ensure that the model can generate RGB-only outputs during inference, the geometry context is randomly dropped from the interleaved text-image-geometry training sequence. The method has been tested on scene video generation with one-direction and forth-and-back trajectories. The results show its superiority over previous approaches in maintaining scene consistency and camera control.
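The abstract names a camera-gated attention module but gives no details. One plausible reading, sketched below, is standard scaled dot-product attention whose output is modulated by a sigmoid gate computed from camera-pose features; the linear gate projection (`gate_weights`) and the 1-D vector interfaces are assumptions for illustration, not the paper's design.

```python
import math

def camera_gate(pose_embedding, gate_weights):
    # Sigmoid gate from a (hypothetical) learned linear projection of pose features.
    s = sum(p * w for p, w in zip(pose_embedding, gate_weights))
    return 1.0 / (1.0 + math.exp(-s))

def camera_gated_attention(query, keys, values, pose_embedding, gate_weights):
    """Scaled dot-product attention over 1-D vectors, gated by camera pose."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d) for key in keys]
    # Numerically stable softmax over the attention scores.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    attn = [e / total for e in exps]
    out = [sum(a * v[i] for a, v in zip(attn, values)) for i in range(len(values[0]))]
    # Modulate the attention output by the pose-derived gate in (0, 1).
    g = camera_gate(pose_embedding, gate_weights)
    return [g * o for o in out]
```

The gate lets pose information scale how strongly attended context flows into the output, which is one simple way such a module could sharpen pose awareness.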
Problem

Research questions and friction points this paper is trying to address.

scene-consistent video generation
3D reconstruction
camera trajectory
geometry context
video generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

geometry-as-context
scene-consistent video generation
camera-controlled generation
autoregressive 3D video model
camera gated attention