🤖 AI Summary
Existing video generation models struggle to disentangle and precisely control key scene factors such as illumination, layout, and camera trajectory, which limits their use in control-critical settings like cinematic production. This work proposes LiVER, a framework that achieves explicit, disentangled control over these factors for the first time. LiVER renders control signals from a unified 3D scene representation and integrates them into a video diffusion model through lightweight conditioning modules trained with a progressive strategy, enabling high-fidelity image-to-video and video-to-video synthesis. The method is further supported by a new large-scale dataset with dense 3D annotations and by a scene agent that automatically translates natural-language instructions into 3D control signals. LiVER substantially improves controllability and practicality while maintaining state-of-the-art photorealism and temporal consistency.
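To make the "lightweight conditioning module" idea concrete, below is a minimal, hypothetical PyTorch sketch of a ControlNet-style adapter: it encodes rendered control maps (e.g., depth, shading, or layout buffers produced from the unified 3D scene) and injects them as residuals into a frozen video diffusion backbone's features. The paper does not publish this code; the `ControlAdapter` name, the tensor shapes, and the zero-initialization detail are all illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only (not the authors' code): a lightweight adapter
# that conditions a frozen video diffusion backbone on rendered control maps.
import torch
import torch.nn as nn

class ControlAdapter(nn.Module):
    """Encodes per-frame rendered control signals into residual features."""

    def __init__(self, control_channels: int, feature_channels: int):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(control_channels, feature_channels, 3, padding=1),
            nn.SiLU(),
            nn.Conv2d(feature_channels, feature_channels, 3, padding=1),
        )
        # Assumption: a zero-initialized projection so the adapter starts as
        # an identity mapping of the frozen backbone, a common trick for
        # stable convergence when adding new conditioning branches.
        self.zero_proj = nn.Conv2d(feature_channels, feature_channels, 1)
        nn.init.zeros_(self.zero_proj.weight)
        nn.init.zeros_(self.zero_proj.bias)

    def forward(self, backbone_feat: torch.Tensor, control_maps: torch.Tensor) -> torch.Tensor:
        # backbone_feat: (B*T, C, H, W) features from the diffusion backbone
        # control_maps:  (B*T, K, H, W) rendered layout/lighting/camera buffers
        return backbone_feat + self.zero_proj(self.encoder(control_maps))

# Usage with made-up shapes: 9 control channels (e.g., depth + normals +
# shading) injected into 320-channel backbone features.
adapter = ControlAdapter(control_channels=9, feature_channels=320)
feat = torch.randn(4, 320, 32, 32)   # flattened (batch * frames) features
ctrl = torch.randn(4, 9, 32, 32)     # rendered control maps, same resolution
out = adapter(feat, ctrl)            # same shape as feat
```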
📝 Abstract
Diffusion models have achieved remarkable progress in video generation, but their controllability remains a major limitation. Key scene factors such as layout, lighting, and camera trajectory are often entangled or only weakly modeled, restricting applicability in domains like filmmaking and virtual production where explicit scene control is essential. We present LiVER, a diffusion-based framework for scene-controllable video generation that conditions synthesis on explicit 3D scene properties, supported by a new large-scale dataset with dense annotations of object layout, lighting, and camera parameters. Our method disentangles these properties by rendering control signals from a unified 3D representation. We propose a lightweight conditioning module and a progressive training strategy to integrate these signals into a foundation video diffusion model, ensuring stable convergence and high fidelity. The framework supports a wide range of applications, including image-to-video and video-to-video synthesis in which the underlying 3D scene is fully editable. To further enhance usability, we develop a scene agent that automatically translates high-level user instructions into the required 3D control signals. Experiments show that LiVER achieves state-of-the-art photorealism and temporal consistency while enabling precise, disentangled control over scene factors, setting a new standard for controllable video generation.
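For intuition about what "3D control signals" could look like as a scene agent's output, here is a hypothetical Python schema: structured camera keyframes, light specifications, and object placements that a renderer could turn into per-frame conditioning maps. The class and field names (`SceneControl`, `CameraKeyframe`, etc.) are assumptions for illustration, not the paper's actual interface.

```python
# Hypothetical schema for disentangled 3D scene control; not from the paper.
from dataclasses import dataclass, field

@dataclass
class CameraKeyframe:
    frame: int
    position: tuple[float, float, float]
    look_at: tuple[float, float, float]
    focal_length_mm: float = 35.0

@dataclass
class LightSpec:
    kind: str                            # e.g., "point", "directional", "area"
    position: tuple[float, float, float]
    color_rgb: tuple[float, float, float]
    intensity: float

@dataclass
class ObjectPlacement:
    name: str
    position: tuple[float, float, float]
    rotation_euler: tuple[float, float, float]
    scale: float = 1.0

@dataclass
class SceneControl:
    """One editable 3D scene: camera path, lighting, and object layout."""
    camera_path: list[CameraKeyframe] = field(default_factory=list)
    lights: list[LightSpec] = field(default_factory=list)
    layout: list[ObjectPlacement] = field(default_factory=list)

# A scene agent might map "slow dolly-in under warm sunset light" to:
dolly_in = SceneControl(
    camera_path=[
        CameraKeyframe(frame=0, position=(0.0, 1.6, 5.0), look_at=(0.0, 1.0, 0.0)),
        CameraKeyframe(frame=48, position=(0.0, 1.6, 2.0), look_at=(0.0, 1.0, 0.0)),
    ],
    lights=[LightSpec("directional", (-3.0, 5.0, 2.0), (1.0, 0.7, 0.4), 3.0)],
)
```

Rendering such a structure from the unified 3D representation would yield the per-frame control maps that the conditioning modules consume, which is what lets each factor (layout, lighting, camera) be edited independently of the others.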