🤖 AI Summary
This work addresses the challenge of generating photorealistic 3D urban scenes while simultaneously preserving fine-grained appearance details and enabling precise camera control—a trade-off that existing methods struggle to balance. Pure 3D diffusion models often lose visual fidelity, whereas purely 2D approaches compromise viewpoint controllability. To overcome this limitation, we propose ScenDi, the first framework that cascades 3D Gaussian representations with 2D video diffusion models. Our method first generates a low-resolution 3D Gaussian scene via 3D latent diffusion and then enhances geometric and textural details using a 2D video diffusion model while strictly adhering to the prescribed camera trajectory. ScenDi supports multimodal conditioning—including text prompts, road maps, and 3D bounding boxes—and demonstrates superior performance in both visual fidelity and camera control on the Waymo and KITTI-360 benchmarks.
📝 Abstract
Recent advances in 3D object generation with diffusion models have achieved remarkable success, but generating realistic 3D urban scenes remains challenging. Existing methods that rely solely on 3D diffusion models tend to suffer degraded appearance details, while those using only 2D diffusion models typically compromise camera controllability. To overcome this limitation, we propose ScenDi, a method for urban scene generation that integrates both 3D and 2D diffusion models. We first train a 3D latent diffusion model to generate 3D Gaussians, enabling the rendering of images at a relatively low resolution. To enable controllable synthesis, this 3D Gaussian splatting (3DGS) generation process can optionally be conditioned on inputs such as 3D bounding boxes, road maps, or text prompts. We then train a 2D video diffusion model to enhance appearance details, conditioned on images rendered from the 3D Gaussians. By leveraging the coarse 3D scene as guidance for the 2D video diffusion, ScenDi generates scenes that match the input conditions and faithfully adhere to the specified camera trajectories. Experiments on two challenging real-world datasets, Waymo and KITTI-360, demonstrate the effectiveness of our approach.
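The cascade described above can be sketched as a two-stage data flow: a coarse renderer produces low-resolution frames along a fixed camera trajectory, and a refinement model upsamples them while the trajectory stays untouched. The sketch below is a minimal illustration of that interface only; the function names are hypothetical, and the stubs (random splats, nearest-neighbor upsampling) stand in for the paper's learned 3D latent diffusion and 2D video diffusion models.

```python
import numpy as np

def render_coarse_scene(gaussians: np.ndarray, trajectory: np.ndarray,
                        h: int = 64, w: int = 64) -> np.ndarray:
    """Stage-1 stand-in: render coarse 3D Gaussians into one low-res
    frame per camera pose (placeholder pixels, fixed seed)."""
    rng = np.random.default_rng(0)
    return rng.random((len(trajectory), h, w, 3))

def refine_with_video_diffusion(frames: np.ndarray, scale: int = 4) -> np.ndarray:
    """Stage-2 stand-in: a 2D video diffusion model conditioned on the
    coarse renders (here: nearest-neighbor upsampling). Crucially, it
    refines per-frame appearance without altering the frame count, so
    the prescribed camera trajectory is preserved."""
    return frames.repeat(scale, axis=1).repeat(scale, axis=2)

# Toy trajectory of 8 camera poses (4x4 extrinsics, identity here).
trajectory = np.stack([np.eye(4)] * 8)
coarse = render_coarse_scene(gaussians=np.zeros((1000, 14)), trajectory=trajectory)
video = refine_with_video_diffusion(coarse)
print(coarse.shape, video.shape)  # (8, 64, 64, 3) (8, 256, 256, 3)
```

Note the design point the shapes make explicit: refinement changes spatial resolution (64 → 256) but never the number or order of frames, which is why camera control survives the 2D stage.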