Efficient Camera-Controlled Video Generation of Static Scenes via Sparse Diffusion and 3D Rendering

📅 2026-01-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the high computational cost of existing diffusion models in video generation, which hinders real-time interactive applications. The authors propose an efficient method for synthesizing videos of static scenes by first generating sparse keyframes with a diffusion model, then leveraging 3D reconstruction and differentiable rendering to interpolate a full video sequence. A novel camera trajectory-aware adaptive keyframe scheduling mechanism dynamically adjusts keyframe density to preserve geometric consistency throughout the sequence. The approach achieves high visual fidelity and temporal stability while accelerating 20-second video generation by over 40× compared to baseline diffusion models.

📝 Abstract
Modern video generative models based on diffusion models can produce very realistic clips, but they are computationally inefficient, often requiring minutes of GPU time for just a few seconds of video. This inefficiency poses a critical barrier to deploying generative video in applications that require real-time interactions, such as embodied AI and VR/AR. This paper explores a new strategy for camera-conditioned video generation of static scenes: using diffusion-based generative models to generate a sparse set of keyframes, and then synthesizing the full video through 3D reconstruction and rendering. By lifting keyframes into a 3D representation and rendering intermediate views, our approach amortizes the generation cost across hundreds of frames while enforcing geometric consistency. We further introduce a model that predicts the optimal number of keyframes for a given camera trajectory, allowing the system to adaptively allocate computation. Our final method, SRENDER, uses very sparse keyframes for simple trajectories and denser ones for complex camera motion. This results in video generation that is more than 40 times faster than the diffusion-based baseline in generating 20 seconds of video, while maintaining high visual fidelity and temporal stability, offering a practical path toward efficient and controllable video synthesis.
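The abstract's key efficiency idea is allocating the keyframe budget per camera trajectory: sparse keyframes for simple paths, denser ones for complex motion. The paper learns this with a predictive model; the sketch below instead uses a hand-crafted heuristic (path length plus total turning angle) purely to illustrate the scheduling concept. All function names and thresholds here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def trajectory_complexity(positions: np.ndarray) -> float:
    """Heuristic complexity score for a camera path given as (T, 3) positions.

    Combines total path length with total turning angle, so a straight
    dolly move scores low and a winding orbit scores high. (Illustrative
    stand-in for the paper's learned keyframe predictor.)
    """
    deltas = np.diff(positions, axis=0)            # per-step camera motion
    seg_len = np.linalg.norm(deltas, axis=1)
    path_length = seg_len.sum()

    # Turning angle between consecutive motion directions.
    dirs = deltas / np.maximum(seg_len[:, None], 1e-8)
    cos_turn = np.clip((dirs[:-1] * dirs[1:]).sum(axis=1), -1.0, 1.0)
    turning = np.arccos(cos_turn).sum()

    return path_length + turning

def schedule_keyframes(positions: np.ndarray,
                       k_min: int = 2,
                       k_max: int = 16,
                       scale: float = 4.0) -> int:
    """Map trajectory complexity to a keyframe budget in [k_min, k_max]."""
    k = int(round(k_min + scale * trajectory_complexity(positions)))
    return max(k_min, min(k_max, k))

# A straight path should get few keyframes; a zig-zag path should get more.
straight = np.stack([np.linspace(0, 1, 20), np.zeros(20), np.zeros(20)], axis=1)
zigzag = straight.copy()
zigzag[:, 1] = 0.2 * np.sin(np.linspace(0, 4 * np.pi, 20))
```

Only the selected keyframes would be generated by the diffusion model; the remaining frames come from rendering the reconstructed 3D scene, which is where the claimed 40× speedup is amortized.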
Problem

Research questions and friction points this paper is trying to address.

video generation
computational efficiency
camera-controlled
real-time interaction
static scenes
Innovation

Methods, ideas, or system contributions that make the work stand out.

sparse diffusion
3D rendering
camera-conditioned video generation
keyframe prediction
efficient video synthesis
Jieying Chen
University of Cambridge
Jeffrey Hu
University of Cambridge
Joan Lasenby
University of Cambridge
Ayush Tewari
University of Cambridge