3D Scene Prompting for Scene-Consistent Camera-Controllable Video Generation

📅 2025-10-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses three key challenges in long-video generation: weak scene consistency, imprecise camera control, and erroneous persistence of dynamic elements across temporal boundaries. To this end, we propose a video generation framework supporting arbitrary-length input sequences. Our method introduces (1) a 3D scene memory mechanism that jointly leverages dynamic SLAM and adaptive dynamic masking to explicitly decouple static geometry from dynamic content; and (2) dual spatiotemporal conditioning, which fuses spatiotemporal features from adjacent frames and incorporates static scene geometry via geometric projection—thereby ensuring long-term spatial coherence and controllable free-viewpoint rendering. Experiments demonstrate that our approach significantly outperforms state-of-the-art methods in scene consistency, camera motion accuracy, and visual quality, while maintaining computational efficiency and realistic motion dynamics.

📝 Abstract
We present 3DScenePrompt, a framework that generates the next video chunk from arbitrary-length input while enabling precise camera control and preserving scene consistency. Unlike methods conditioned on a single image or a short clip, we employ dual spatio-temporal conditioning that reformulates context-view referencing across the input video. Our approach conditions on both temporally adjacent frames for motion continuity and spatially adjacent content for scene consistency. However, when generating beyond temporal boundaries, directly using spatially adjacent frames would incorrectly preserve dynamic elements from the past. We address this by introducing a 3D scene memory that represents exclusively the static geometry extracted from the entire input video. To construct this memory, we leverage dynamic SLAM with our newly introduced dynamic masking strategy that explicitly separates static scene geometry from moving elements. The static scene representation can then be projected to any target viewpoint, providing geometrically consistent warped views that serve as strong 3D spatial prompts while allowing dynamic regions to evolve naturally from temporal context. This enables our model to maintain long-range spatial coherence and precise camera control without sacrificing computational efficiency or motion realism. Extensive experiments demonstrate that our framework significantly outperforms existing methods in scene consistency, camera controllability, and generation quality. Project page: https://cvlab-kaist.github.io/3DScenePrompt/
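The abstract's key mechanism is projecting the static 3D scene memory into an arbitrary target camera to produce a geometrically consistent warped view that conditions the generator. A minimal sketch of that projection step is below, assuming a colored point cloud with a precomputed static/dynamic mask and standard pinhole intrinsics; all function and variable names are illustrative, not from the paper's code.

```python
import numpy as np

def project_static_memory(points, colors, static_mask, K, R, t, hw):
    """Warp the static scene memory into a target camera view.

    points:      (N, 3) world-space 3D points from the scene memory
    colors:      (N, 3) per-point colors
    static_mask: (N,) boolean, True for static points (dynamic ones dropped)
    K:           (3, 3) pinhole intrinsics; R, t: world-to-camera pose
    hw:          (height, width) of the rendered conditioning image
    """
    h, w = hw
    pts = points[static_mask]
    cols = colors[static_mask]
    cam = (R @ pts.T + t[:, None]).T          # world -> camera frame
    front = cam[:, 2] > 1e-6                  # keep points in front of camera
    cam, cols = cam[front], cols[front]
    uvw = (K @ cam.T).T
    uv = uvw[:, :2] / uvw[:, 2:3]             # perspective divide
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    image = np.zeros((h, w, 3))
    # naive nearest-pixel splat; a real renderer would z-buffer and fill holes
    image[v[valid], u[valid]] = cols[valid]
    return image
```

Unprojected (dynamic or out-of-view) pixels stay empty, which is consistent with the paper's idea that dynamic regions are left to evolve from the temporal context rather than being copied from the past.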
Problem

Research questions and friction points this paper is trying to address.

Generating consistent long videos with precise camera control
Separating static geometry from dynamic elements in videos
Maintaining spatial coherence across arbitrary-length video generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic SLAM with masking separates static geometry
3D scene memory stores static elements from entire video
Dual spatio-temporal conditioning enables camera control
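The first bullet hinges on deciding which pixels belong to moving objects so they can be excluded from the static memory. A common way to do this, which the paper's masking strategy may or may not follow exactly, is to compare observed optical flow against the flow induced by camera ego-motion and label large residuals as dynamic; the sketch below assumes both flow fields are given and the threshold is a free parameter.

```python
import numpy as np

def dynamic_mask(flow_obs, flow_cam, thresh=1.0):
    """Label pixels as dynamic when their observed flow deviates from
    the camera-induced (ego-motion) flow. Illustrative stand-in for a
    dynamic-masking step; inputs are (H, W, 2) flow fields in pixels.

    Returns an (H, W) boolean mask, True = dynamic (excluded from memory).
    """
    residual = np.linalg.norm(flow_obs - flow_cam, axis=-1)
    return residual > thresh
```

In a SLAM pipeline this mask would gate which pixels contribute geometry to the 3D scene memory, so only static structure is accumulated across the input video.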