RealCam-I2V: Real-World Image-to-Video Generation with Interactive Complex Camera Control

📅 2025-02-14
🤖 AI Summary
Addressing the difficulty of precise camera motion control and the absence of depth and scale priors in realistic image-to-video generation, this paper introduces the first framework to enable interactive camera-trajectory editing in an absolute-scale 3D scene. Methodologically, it (1) proposes monocular metric-depth-guided, absolute-scale 3D pre-reconstruction, yielding physically interpretable scene geometry; and (2) designs a scene-constrained noise shaping mechanism that jointly optimizes camera-pose accuracy and spatiotemporal video consistency across high- and low-noise diffusion timesteps. The approach integrates metric depth estimation, normalized camera-parameter modeling, and hierarchical noise-space regulation. Evaluated on RealEstate10K and cross-domain images, the method significantly improves controllability and video fidelity, and enables camera-driven looping video synthesis and generative frame interpolation.
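The relative-to-absolute scaling the summary describes can be pictured with a minimal numpy sketch. Everything here is an illustrative assumption, not the paper's exact procedure: the function name is hypothetical, and the scale factor is taken as the median ratio between a metric depth map (e.g. from a monocular metric-depth estimator) and the relative depth in which the trajectory was expressed, in the style of SfM scale alignment.

```python
import numpy as np

def absolute_scale_trajectory(rel_translations, rel_depth, metric_depth):
    """Rescale relative-scale camera translations to metric (absolute) scale.

    Hypothetical sketch: align the relative depth used by the trajectory to a
    metric depth map via a robust (median) per-pixel ratio, then apply that
    single scale factor to every camera translation.
    """
    rel_depth = np.clip(np.asarray(rel_depth, dtype=float), 1e-6, None)
    scale = np.median(np.asarray(metric_depth, dtype=float) / rel_depth)
    return np.asarray(rel_translations, dtype=float) * scale
```

Rotations are scale-invariant, so only the translation components need rescaling; the median keeps the estimate robust to depth outliers.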

📝 Abstract
Recent advancements in camera-trajectory-guided image-to-video generation offer higher precision and better support for complex camera control than text-based approaches. However, they also introduce significant usability challenges: users often struggle to provide precise camera parameters for arbitrary real-world images without knowledge of their depth or scene scale. To address these real-world application issues, we propose RealCam-I2V, a novel diffusion-based video generation framework that integrates monocular metric depth estimation to establish a 3D scene reconstruction in a preprocessing step. During training, the reconstructed 3D scene enables scaling camera parameters from relative to absolute values, ensuring compatibility and scale consistency across diverse real-world images. At inference, RealCam-I2V offers an intuitive interface where users can precisely draw camera trajectories by dragging within the 3D scene. To further enhance precise camera control and scene consistency, we propose scene-constrained noise shaping, which shapes high-level noise while allowing the framework to maintain dynamic, coherent video generation in lower-noise stages. RealCam-I2V achieves significant improvements in controllability and video quality on RealEstate10K and out-of-domain images. We further enable applications such as camera-controlled looping video generation and generative frame interpolation. We will release our absolute-scale annotations, code, and all checkpoints. Please see dynamic results at https://zgctroy.github.io/RealCam-I2V.
Problem

Research questions and friction points this paper is trying to address.

Enhances image-to-video generation precision
Simplifies complex camera control usability
Improves video quality and controllability
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates monocular metric depth estimation
Implements scene-constrained noise shaping
Offers intuitive 3D camera trajectory drawing
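The scene-constrained noise shaping idea above can be sketched in a few lines. This is a hedged illustration, not the paper's implementation: the function name, threshold, and blending weight are assumptions, and `render_latent` stands in for a latent rendered from the pre-reconstructed 3D scene along the user-drawn trajectory.

```python
import numpy as np

def shape_noise(latent, render_latent, t, t_thresh=0.6, alpha=0.8):
    """Hypothetical sketch of scene-constrained noise shaping.

    At high-noise timesteps (t >= t_thresh on a 0..1 scale), pull the
    diffusion latent toward a latent rendered from the reconstructed 3D
    scene, constraining camera pose early; at low-noise timesteps the latent
    is left untouched so the model can synthesize scene dynamics freely.
    """
    if t < t_thresh:
        return latent
    return alpha * np.asarray(render_latent) + (1.0 - alpha) * np.asarray(latent)
```

Applying the constraint only at high-noise steps mirrors the summary's "hierarchical noise-space regulation": coarse geometry is fixed early, while fine appearance and motion remain unconstrained late in sampling.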
Teng Li
College of Computer Science & Technology, Zhejiang University
Guangcong Zheng
College of Computer Science and Technology, Zhejiang University, Hangzhou 310027, China
Controllable Video/Image Synthesis · Diffusion Model · Personalization Generation · Multi-Modal · BEV
Rui Jiang
College of Computer Science & Technology, Zhejiang University
Shuigen Zhan
College of Computer Science & Technology, Zhejiang University
Tao Wu
College of Computer Science & Technology, Zhejiang University
Yehao Lu
Zhejiang University
Autonomous Driving · 3D Reconstruction · Swarm Robot
Yining Lin
College of Computer Science & Technology, Zhejiang University
Xi Li
College of Computer Science & Technology, Zhejiang University