DualCamCtrl: Dual-Branch Diffusion Model for Geometry-Aware Camera-Controlled Video Generation

📅 2025-11-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing camera-control video generation methods rely on ray-based pose modeling but lack deep geometric scene understanding, resulting in substantial camera motion errors and poor visual-geometric consistency. To address this, we propose a dual-branch diffusion model that separately models RGB appearance and depth geometry, coupled with a Semantic-guided Inter-branch Geometric Mutual Alignment (SIGMA) mechanism to enable decoupled yet cooperative optimization. Furthermore, we uncover the complementary roles of depth and camera pose during staged denoising, introducing semantic-guided RGB-depth fusion and ray-based pose conditioning. Experiments demonstrate that our method reduces camera motion error by over 40%, while significantly improving both visual fidelity and geometric accuracy of generated videos.

📝 Abstract
This paper presents DualCamCtrl, a novel end-to-end diffusion model for camera-controlled video generation. Recent works have advanced this field by representing camera poses as ray-based conditions, yet they often lack sufficient scene understanding and geometric awareness. DualCamCtrl specifically targets this limitation by introducing a dual-branch framework that mutually generates camera-consistent RGB and depth sequences. To harmonize these two modalities, we further propose the Semantic Guided Mutual Alignment (SIGMA) mechanism, which performs RGB-depth fusion in a semantics-guided and mutually reinforced manner. These designs collectively enable DualCamCtrl to better disentangle appearance and geometry modeling, generating videos that more faithfully adhere to the specified camera trajectories. Additionally, we analyze and reveal the distinct influence of depth and camera poses across denoising stages and further demonstrate that early and late stages play complementary roles in forming global structure and refining local details. Extensive experiments demonstrate that DualCamCtrl achieves more consistent camera-controlled video generation, with over 40% reduction in camera motion errors compared with prior methods. Our project page: https://soyouthinkyoucantell.github.io/dualcamctrl-page/
Problem

Research questions and friction points this paper is trying to address.

Addressing insufficient scene understanding in camera-controlled video generation
Improving geometric awareness through dual-branch RGB and depth generation
Reducing camera motion errors by disentangling appearance and geometry modeling
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dual-branch framework jointly generates camera-consistent RGB and depth sequences
Semantic Guided Mutual Alignment (SIGMA) fuses the RGB and depth modalities in a semantics-guided, mutually reinforced manner
Early and late denoising stages play complementary roles: forming global structure and refining local details
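The fusion idea behind SIGMA can be sketched at a very high level: a per-location gate derived from semantic features decides how much each branch borrows from the other. The sketch below is purely illustrative; the shapes, the sigmoid gate, and the residual mixing are assumptions for exposition, not the paper's actual SIGMA implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def semantic_guided_fusion(rgb_feat, depth_feat, sem_feat, w_gate):
    """Illustrative sketch of semantics-guided RGB-depth fusion.

    A gating weight per spatial location is predicted from a semantic
    feature map, then used to blend the RGB and depth branch features.
    All shapes and the gating form are hypothetical, chosen only to
    show the decoupled-yet-cooperative structure.
    """
    # sem_feat: (H*W, C) semantic features; w_gate: (C,) learned projection
    gate = sigmoid(sem_feat @ w_gate)   # (H*W,) values in (0, 1)
    gate = gate[:, None]                # broadcast over channels
    # Mutually reinforced fusion: each branch receives a semantically
    # weighted residual contribution from the other branch.
    rgb_out = rgb_feat + gate * depth_feat
    depth_out = depth_feat + (1.0 - gate) * rgb_feat
    return rgb_out, depth_out

rng = np.random.default_rng(0)
H_W, C = 16, 8
rgb, depth, sem = (rng.standard_normal((H_W, C)) for _ in range(3))
w = rng.standard_normal(C)
rgb_out, depth_out = semantic_guided_fusion(rgb, depth, sem, w)
print(rgb_out.shape, depth_out.shape)  # (16, 8) (16, 8)
```

In a real dual-branch diffusion model this gating would sit inside the denoising network and be learned end-to-end; here it only illustrates how appearance and geometry features can be blended under semantic guidance while remaining separate streams.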