🤖 AI Summary
This work proposes an implicit 3D-aware controllable video generation method to address the challenges of entangled scene and subject representation, limited camera motion control, and high costs of physical set construction in cinematic-quality video synthesis. By leveraging a VGGT encoder to extract static scene features and introducing an implicit 3D contextual conditioning mechanism combined with an input image shuffling strategy, the approach effectively decouples and injects scene information into a pre-trained text-to-video model. The method enables user-specified large-scale camera trajectories while preserving scene consistency, facilitating high-fidelity synthesis of dynamic subjects within diverse environments. Evaluated on a scene-disentangled dataset built with Unreal Engine 5, the proposed approach demonstrates state-of-the-art performance in cross-environment cinematic video generation.
📝 Abstract
Cinematic video production requires control over scene-subject composition and camera movement, but live-action shooting remains costly due to the need to construct physical sets. To address this, we introduce the task of cinematic video generation with decoupled scene context: given multiple images of a static environment, the goal is to synthesize high-quality videos featuring dynamic subjects while preserving the underlying scene consistency and following a user-specified camera trajectory. We present CineScene, a framework that leverages an implicit 3D-aware scene representation for cinematic video generation. Our key innovation is a novel context conditioning mechanism that injects 3D-aware features implicitly: by encoding scene images into visual representations through VGGT, CineScene injects spatial priors into a pretrained text-to-video generation model via additional context concatenation, enabling camera-controlled video synthesis with consistent scenes and dynamic subjects. To further enhance the model's robustness, we introduce a simple yet effective random-shuffling strategy for the input scene images during training. To address the lack of training data, we construct a scene-decoupled dataset with Unreal Engine 5, containing paired videos of scenes with and without dynamic subjects, panoramic images representing the underlying static scene, and their camera trajectories. Experiments show that CineScene achieves state-of-the-art performance in scene-consistent cinematic video generation, handling large camera movements and generalizing across diverse environments.
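The two mechanisms described above, encoding scene images into tokens and injecting them as extra context, plus random shuffling of the input views during training, can be illustrated with a minimal toy sketch. This is not the authors' implementation: `encode_scene` is a hypothetical stand-in for the VGGT encoder, and all shapes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_scene(images):
    # Hypothetical stand-in for the VGGT encoder: maps each scene image
    # to a set of feature tokens (here a toy linear projection).
    n, h, w, c = images.shape
    pooled = images.reshape(n, h * w, c).mean(axis=2, keepdims=True)  # (n, h*w, 1)
    proj = rng.standard_normal((1, 64))
    return (pooled @ proj).reshape(n * h * w, 64)  # (n*h*w, 64) scene tokens

def condition_on_scene(video_tokens, scene_images, training=True):
    # Random-shuffling strategy: permute the order of the input scene
    # images during training so the model cannot depend on a fixed order.
    if training:
        scene_images = scene_images[rng.permutation(len(scene_images))]
    scene_tokens = encode_scene(scene_images)
    # Implicit context conditioning: concatenate scene tokens with the
    # video latent tokens along the sequence dimension, so the backbone
    # attends to the static-scene context without explicit 3D geometry.
    return np.concatenate([scene_tokens, video_tokens], axis=0)

video_tokens = rng.standard_normal((128, 64))     # toy video latent tokens
scene_images = rng.standard_normal((4, 8, 8, 3))  # four toy scene views
out = condition_on_scene(video_tokens, scene_images)
print(out.shape)  # (4*8*8 + 128, 64) = (384, 64)
```

The key design choice sketched here is that the scene enters only as extra context tokens, leaving the pretrained text-to-video backbone's architecture unchanged.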