A Recipe for Generating 3D Worlds From a Single Image

📅 2025-03-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper introduces a novel paradigm for immersive 3D world generation from a single image—without requiring large-scale training. Addressing the single-image-to-3D-environment reconstruction task, the method proceeds in two stages: first, leveraging a pre-trained diffusion model to synthesize geometrically coherent panoramic images; second, elevating these to 3D via a metric depth estimator and performing 2D inpainting of occluded regions conditioned on rendered point clouds. The core contribution lies in reformulating single-image 3D generation as an in-context learning problem—explicitly modeling 3D structure while bypassing error accumulation inherent in video-synthesis-based approaches. Evaluated on both synthetic and real-world images, the framework produces VR-ready, high-fidelity 3D environments, consistently outperforming state-of-the-art video-synthesis methods across standard metrics including FID, LPIPS, and SSIM.
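The second stage lifts the generated panorama into 3D with a metric depth estimator. As a minimal illustration of that lifting step (not the paper's implementation), the sketch below back-projects an equirectangular depth map into a point cloud with numpy; the function name and angle conventions are assumptions for this example:

```python
import numpy as np

def lift_panorama_to_points(depth: np.ndarray) -> np.ndarray:
    """Back-project an equirectangular metric depth map to a 3D point cloud.

    depth: (H, W) array of per-pixel metric depths for the panorama.
    Returns: (H*W, 3) array of 3D points in camera coordinates.
    """
    h, w = depth.shape
    # Pixel centers -> spherical angles: azimuth in [-pi, pi), elevation in [-pi/2, pi/2].
    u = (np.arange(w) + 0.5) / w          # horizontal coordinate in [0, 1)
    v = (np.arange(h) + 0.5) / h          # vertical coordinate in [0, 1)
    azimuth = u * 2.0 * np.pi - np.pi
    elevation = np.pi / 2.0 - v * np.pi
    az, el = np.meshgrid(azimuth, elevation)
    # Unit view directions on the sphere, one per pixel.
    dirs = np.stack([np.cos(el) * np.sin(az),
                     np.sin(el),
                     np.cos(el) * np.cos(az)], axis=-1)   # (H, W, 3)
    # Scale each direction by its metric depth to get a 3D point.
    points = depth[..., None] * dirs
    return points.reshape(-1, 3)
```

For a constant depth map, every returned point lies on a sphere of that radius around the camera, which is a quick sanity check for the angle conventions.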

📝 Abstract
We introduce a recipe for generating immersive 3D worlds from a single image by framing the task as an in-context learning problem for 2D inpainting models. This approach requires minimal training and uses existing generative models. Our process involves two steps: generating coherent panoramas using a pre-trained diffusion model and lifting these into 3D with a metric depth estimator. We then fill unobserved regions by conditioning the inpainting model on rendered point clouds, requiring minimal fine-tuning. Tested on both synthetic and real images, our method produces high-quality 3D environments suitable for VR display. By explicitly modeling the 3D structure of the generated environment from the start, our approach consistently outperforms state-of-the-art, video synthesis-based methods along multiple quantitative image quality metrics. Project Page: https://katjaschwarz.github.io/worlds/
Problem

Research questions and friction points this paper is trying to address.

Generating 3D worlds from single images
Using 2D inpainting models with minimal training
Outperforming video synthesis-based methods in quality
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generates 3D worlds from single images
Uses pre-trained diffusion and depth models
Fills unobserved regions with inpainting
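Conditioning the inpainting model on rendered point clouds implies identifying which pixels of a novel view the point cloud does not cover. A minimal sketch of that masking step, assuming a pinhole target camera and simple nearest-pixel splatting (the function name and camera setup are hypothetical, not from the paper):

```python
import numpy as np

def render_hole_mask(points, K, R, t, h, w):
    """Splat a point cloud into a target view; return the hole mask to inpaint.

    points: (N, 3) world-space points; K: 3x3 intrinsics; R, t: world-to-camera pose.
    Returns a boolean (h, w) mask, True where no point lands (unobserved region).
    """
    cam = points @ R.T + t                      # world -> camera coordinates
    cam = cam[cam[:, 2] > 1e-6]                 # keep points in front of the camera
    proj = cam @ K.T                            # pinhole projection
    px = np.round(proj[:, 0] / proj[:, 2]).astype(int)
    py = np.round(proj[:, 1] / proj[:, 2]).astype(int)
    valid = (px >= 0) & (px < w) & (py >= 0) & (py < h)
    covered = np.zeros((h, w), dtype=bool)
    covered[py[valid], px[valid]] = True        # mark pixels hit by any point
    return ~covered                             # True = hole, needs inpainting
```

In a full pipeline, the rendered colors plus this mask would condition the 2D inpainting model; here the sketch only shows how the unobserved regions arise from the geometry.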
Katja Schwarz
Meta Reality Labs Zurich, Switzerland
Denys Rozumnyi
Researcher (Computer Vision, 3D Reconstruction)
S. R. Bulò
Meta Reality Labs Zurich, Switzerland
L. Porzi
Meta Reality Labs Zurich, Switzerland
P. Kontschieder
Meta Reality Labs Zurich, Switzerland