🤖 AI Summary
To address the scarcity of real-world multi-view data that limits learning-based 3D reconstruction, this paper proposes a self-distillation framework that requires no real multi-view supervision—marking the first approach to extract the implicit 3D priors in video diffusion models and explicitly convert them into 3D Gaussian Splatting (3DGS) representations. The method leverages video diffusion models to synthesize high-fidelity multi-view sequences, then jointly trains an RGB decoder and a 3DGS decoder via self-supervised distillation. It supports both static and dynamic scene reconstruction from either a single image or a text prompt. Evaluated on multiple benchmarks, the method achieves state-of-the-art performance. The resulting 3DGS scenes enable real-time rendering and interactive manipulation, establishing a data-efficient, end-to-end differentiable paradigm for 3D environment modeling—particularly beneficial for applications such as robotic simulation.
📝 Abstract
The ability to generate virtual environments is crucial for applications ranging from gaming to physical AI domains such as robotics, autonomous driving, and industrial AI. Current learning-based 3D reconstruction methods rely on captured real-world multi-view data, which is not always readily available. Recent advances in video diffusion models have shown remarkable imagination capabilities, yet their 2D nature limits their applicability to simulation settings where a robot needs to navigate and interact with the environment. In this paper, we propose a self-distillation framework that distills the implicit 3D knowledge in video diffusion models into an explicit 3D Gaussian Splatting (3DGS) representation, eliminating the need for multi-view training data. Specifically, we augment the typical RGB decoder with a 3DGS decoder, which is supervised by the output of the RGB decoder. With this design, the 3DGS decoder can be trained purely on synthetic data generated by video diffusion models. At inference time, our model can synthesize 3D scenes from either a text prompt or a single image for real-time rendering. Our framework further extends to dynamic 3D scene generation from a monocular input video. Experimental results show that our framework achieves state-of-the-art performance in static and dynamic 3D scene generation.
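To make the self-distillation idea concrete, the sketch below outlines one possible training loop under our own assumptions: a frozen RGB decoder acts as the teacher, a 3DGS decoder predicts per-pixel Gaussian parameters from the same video latent, and the rendered Gaussians are regressed toward the teacher's decoded frames. All class names, tensor shapes, and the trivial `render_gaussians` stand-in (a real setup would use a differentiable 3DGS rasterizer such as gsplat) are hypothetical and not the authors' released code.

```python
import torch
import torch.nn as nn

# Illustrative stand-ins only; shapes and modules are assumptions, not the paper's implementation.

class RGBDecoder(nn.Module):
    """Frozen RGB decoder of the video diffusion model (teacher)."""
    def __init__(self, latent_dim=16, out_ch=3):
        super().__init__()
        self.net = nn.Conv2d(latent_dim, out_ch, kernel_size=3, padding=1)

    def forward(self, z):                  # z: (B, C, H, W) video latent
        return self.net(z)                 # (B, 3, H, W) decoded RGB frame


class GSDecoder(nn.Module):
    """Student decoder predicting per-pixel 3D Gaussian parameters."""
    def __init__(self, latent_dim=16, params_per_gaussian=14):
        super().__init__()
        # 14 = 3 (mean) + 3 (scale) + 4 (rotation) + 1 (opacity) + 3 (color)
        self.net = nn.Conv2d(latent_dim, params_per_gaussian, 3, padding=1)

    def forward(self, z):
        return self.net(z)                 # (B, 14, H, W) Gaussian parameters


def render_gaussians(gs_params, camera):
    """Placeholder for a differentiable 3DGS rasterizer.
    Returns the predicted color channels so the sketch runs end to end."""
    return gs_params[:, -3:]               # (B, 3, H, W)


rgb_decoder = RGBDecoder().eval()          # teacher: kept frozen
gs_decoder = GSDecoder()                   # student: trained by distillation
opt = torch.optim.Adam(gs_decoder.parameters(), lr=1e-4)

for step in range(100):
    # Synthetic video latents from the diffusion model (random tensors here).
    z = torch.randn(2, 16, 64, 64)
    camera = None                          # camera poses would follow the generated trajectory

    with torch.no_grad():
        target_rgb = rgb_decoder(z)        # teacher signal: decoded frames

    pred_rgb = render_gaussians(gs_decoder(z), camera)
    loss = nn.functional.l1_loss(pred_rgb, target_rgb)   # self-distillation loss

    opt.zero_grad()
    loss.backward()
    opt.step()
```

Because the supervision comes entirely from the RGB decoder's outputs on generated sequences, no captured multi-view data enters this loop; at inference the trained 3DGS decoder emits an explicit scene that can be rendered in real time.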