Lyra: Generative 3D Scene Reconstruction via Video Diffusion Model Self-Distillation

📅 2025-09-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the data bottleneck in 3D reconstruction caused by scarce real-world multi-view captures, this paper proposes a self-distillation framework that requires no real multi-view supervision, marking the first approach to extract the implicit 3D priors of video diffusion models and convert them explicitly into 3D Gaussian Splatting (3DGS) representations. The method uses video diffusion models to synthesize high-fidelity multi-view sequences, then jointly trains an RGB decoder and a 3DGS decoder via self-supervised distillation. It supports both static and dynamic scene reconstruction from either a single image or a text prompt, and achieves state-of-the-art performance on multiple benchmarks. The resulting 3DGS scenes support real-time rendering and interactive manipulation, establishing a data-efficient, end-to-end differentiable paradigm for 3D environment modeling that is particularly useful for applications such as robotic simulation.

📝 Abstract
The ability to generate virtual environments is crucial for applications ranging from gaming to physical AI domains such as robotics, autonomous driving, and industrial AI. Current learning-based 3D reconstruction methods rely on captured real-world multi-view data, which is not always readily available. Recent advances in video diffusion models have shown remarkable imagination capabilities, yet their 2D nature limits their applicability to simulation, where a robot must navigate and interact with the environment. In this paper, we propose a self-distillation framework that distills the implicit 3D knowledge in video diffusion models into an explicit 3D Gaussian Splatting (3DGS) representation, eliminating the need for multi-view training data. Specifically, we augment the typical RGB decoder with a 3DGS decoder, which is supervised by the output of the RGB decoder. With this approach, the 3DGS decoder can be trained purely on synthetic data generated by video diffusion models. At inference time, our model can synthesize 3D scenes from either a text prompt or a single image for real-time rendering. Our framework further extends to dynamic 3D scene generation from a monocular input video. Experimental results show that our framework achieves state-of-the-art performance in static and dynamic 3D scene generation.
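The core supervision signal described in the abstract, a 3DGS decoder pulled toward the RGB decoder's synthesized frames, can be illustrated with a minimal toy sketch. Everything here is hypothetical: `rgb_frames` stands in for the RGB decoder's output, `gaussian_render` for a differentiable 3DGS rasterizer, and `splat_params` for the 3DGS decoder's parameters; the paper's actual architecture, renderer, and losses are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins (not the paper's implementation):
# rgb_frames  ~ frames synthesized by the video diffusion model's RGB decoder
# splat_params ~ parameters produced by the 3DGS decoder
H, W = 8, 8
rgb_frames = rng.uniform(size=(4, H, W, 3))    # pseudo ground truth, no real multi-view data
splat_params = rng.uniform(size=(4, H, W, 3))  # toy 3DGS state to be optimized

def gaussian_render(params):
    # Toy differentiable "renderer": identity mapping, for illustration only.
    return params

def distill_loss(params, targets):
    # Photometric self-distillation: 3DGS renders are supervised by the
    # RGB decoder's frames instead of captured multi-view images.
    return float(np.mean((gaussian_render(params) - targets) ** 2))

lr = 0.5
losses = []
for _ in range(20):
    # Gradient of the mean-squared photometric loss w.r.t. the toy params.
    grad = 2.0 * (gaussian_render(splat_params) - rgb_frames) / rgb_frames.size
    splat_params -= lr * grad
    losses.append(distill_loss(splat_params, rgb_frames))
```

The loss decreases monotonically as the toy 3DGS state converges toward the RGB decoder's frames, which is the sense in which the 3DGS decoder "can be trained purely on synthetic data" in the abstract.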
Problem

Research questions and friction points this paper is trying to address.

Generating 3D scenes without requiring real-world multi-view training data
Distilling 2D video diffusion models into explicit 3D Gaussian representations
Enabling real-time 3D scene synthesis from text or single image inputs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Self-distillation framework that transfers video diffusion priors into explicit 3D representations
3D Gaussian Splatting decoder supervised by the RGB decoder's output, trained purely on synthetic data
3D scene synthesis from a text prompt or a single image