LangScene-X: Reconstruct Generalizable 3D Language-Embedded Scenes with TriMap Video Diffusion

📅 2025-07-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address severe rendering artifacts, implausible semantic synthesis, and weak open-vocabulary understanding in sparse-view 3D scene reconstruction, this paper proposes LangScene-X, a generative framework for building generalizable 3D language-embedded scenes from only sparse views. A TriMap video diffusion model jointly generates appearance (RGB), geometry (surface normals), and semantics (segmentation maps) through progressive knowledge integration, while a Language Quantized Compressor (LQC), trained on large-scale image datasets, efficiently encodes language embeddings so they transfer across scenes without per-scene retraining. Language information is then aligned onto the reconstructed 3D surfaces to form language surface fields that support open-ended text queries. Unlike prior approaches that require per-scene optimization over calibrated dense views, LangScene-X generalizes to new scenes from sparse inputs. Experiments on real-world data demonstrate significant improvements over state-of-the-art methods in reconstruction quality and open-vocabulary scene understanding.
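To make the joint multi-modality generation concrete, below is a minimal PyTorch sketch of the core idea behind a TriMap-style denoising step: RGB, normal, and segmentation frames are stacked into one multi-channel tensor and denoised by a single shared network, which is what keeps the three modalities mutually consistent. The `TriMapDenoiser` module, channel layout, and DDPM step here are illustrative assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class TriMapDenoiser(nn.Module):
    """Toy stand-in for a TriMap-style diffusion backbone (hypothetical).

    One network predicts noise for all three modalities at once
    (3 RGB + 3 normal + 3 segmentation channels = 9 channels), so the
    generated maps share a single denoising trajectory.
    """

    def __init__(self, channels: int = 9, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels + 1, hidden, 3, padding=1),  # +1 channel for the timestep
            nn.SiLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1),
            nn.SiLU(),
            nn.Conv2d(hidden, channels, 3, padding=1),
        )

    def forward(self, x: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        # Broadcast the normalized timestep as an extra spatial channel.
        t_map = t.view(-1, 1, 1, 1).expand(-1, 1, *x.shape[-2:])
        return self.net(torch.cat([x, t_map], dim=1))

@torch.no_grad()
def ddpm_step(model, x_t, t, alpha_t, alpha_bar_t):
    """One standard DDPM denoising step on the stacked tri-modal tensor."""
    eps = model(x_t, t)
    mean = (x_t - (1 - alpha_t) / (1 - alpha_bar_t).sqrt() * eps) / alpha_t.sqrt()
    return mean + (1 - alpha_t).sqrt() * torch.randn_like(x_t)

# Usage: denoise a batch of 2 stacked RGB+normal+segmentation frames at 64x64.
model = TriMapDenoiser()
x = torch.randn(2, 9, 64, 64)
t = torch.full((2,), 0.5)  # normalized timestep
x = ddpm_step(model, x, t, torch.tensor(0.99), torch.tensor(0.5))
```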

📝 Abstract
Recovering 3D structures with open-vocabulary scene understanding from 2D images is a fundamental but daunting task. Recent developments have achieved this by performing per-scene optimization with embedded language information. However, they heavily rely on the calibrated dense-view reconstruction paradigm, thereby suffering from severe rendering artifacts and implausible semantic synthesis when limited views are available. In this paper, we introduce a novel generative framework, coined LangScene-X, to unify and generate 3D consistent multi-modality information for reconstruction and understanding. Powered by the generative capability of creating more consistent novel observations, we can build generalizable 3D language-embedded scenes from only sparse views. Specifically, we first train a TriMap video diffusion model that can generate appearance (RGBs), geometry (normals), and semantics (segmentation maps) from sparse inputs through progressive knowledge integration. Furthermore, we propose a Language Quantized Compressor (LQC), trained on large-scale image datasets, to efficiently encode language embeddings, enabling cross-scene generalization without per-scene retraining. Finally, we reconstruct the language surface fields by aligning language information onto the surface of 3D scenes, enabling open-ended language queries. Extensive experiments on real-world data demonstrate the superiority of our LangScene-X over state-of-the-art methods in terms of quality and generalizability. Project Page: https://liuff19.github.io/LangScene-X.
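The Language Quantized Compressor can be pictured as a small VQ-style autoencoder over per-pixel language embeddings: high-dimensional CLIP-like features are mapped to low-dimensional codes via a learned codebook, making them cheap to store and render while remaining reusable across scenes. The sketch below illustrates that mechanism under assumptions of our own (the `LanguageQuantizedCompressor` class, codebook size, and dimensions are all hypothetical); it is not the paper's exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LanguageQuantizedCompressor(nn.Module):
    """Hypothetical VQ-style compressor for language embeddings.

    Encodes D-dim language features (e.g. 512-d CLIP embeddings) into
    low-dim latents, snaps them to the nearest codebook entry, and
    decodes back. Once trained on large image collections, it can
    quantize features for any scene without per-scene retraining.
    """

    def __init__(self, feat_dim: int = 512, code_dim: int = 8, num_codes: int = 1024):
        super().__init__()
        self.encoder = nn.Linear(feat_dim, code_dim)
        self.decoder = nn.Linear(code_dim, feat_dim)
        self.codebook = nn.Embedding(num_codes, code_dim)

    def forward(self, feats: torch.Tensor):
        z = self.encoder(feats)                      # (N, code_dim)
        dists = torch.cdist(z, self.codebook.weight)  # (N, num_codes)
        idx = dists.argmin(dim=-1)                    # nearest code per feature
        z_q = self.codebook(idx)
        z_q = z + (z_q - z).detach()                  # straight-through gradient
        recon = self.decoder(z_q)
        commit = F.mse_loss(z, z_q.detach())          # commitment loss term
        return recon, idx, commit

# Usage: compress 4096 CLIP-like embeddings down to 10-bit codebook indices.
lqc = LanguageQuantizedCompressor()
feats = torch.randn(4096, 512)
recon, idx, commit = lqc(feats)
loss = F.mse_loss(recon, feats) + 0.25 * commit
```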
Problem

Research questions and friction points this paper is trying to address.

Reconstruct 3D scenes from sparse 2D views
Generate consistent multi-modality 3D information
Enable open-vocabulary language queries in 3D
Innovation

Methods, ideas, or system contributions that make the work stand out.

TriMap video diffusion for multi-modality generation
Language Quantized Compressor for cross-scene generalization
Language surface fields for open-ended queries (query flow sketched after this list)
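To illustrate how open-ended queries against a language surface field might work, here is a minimal sketch: each reconstructed surface point carries a language feature, and a text query is answered by cosine similarity against those features. The `encode_text` stub stands in for a real text encoder such as CLIP's; every name and threshold here is an assumed interface, not the paper's code.

```python
import torch
import torch.nn.functional as F

def encode_text(query: str, dim: int = 512) -> torch.Tensor:
    """Stub for a text encoder; returns a deterministic unit-norm embedding.

    In a real pipeline this would be the same encoder that produced the
    language features embedded in the scene (e.g. CLIP's text tower).
    """
    g = torch.Generator().manual_seed(hash(query) % (2**31))
    return F.normalize(torch.randn(dim, generator=g), dim=0)

def query_surface_field(point_feats: torch.Tensor, query: str, thresh: float = 0.25):
    """Return per-point relevance scores and a boolean mask for a text query.

    point_feats: (N, D) language features attached to 3D surface points.
    """
    q = encode_text(query, point_feats.shape[-1])
    scores = F.normalize(point_feats, dim=-1) @ q  # cosine similarity per point
    return scores, scores > thresh

# Usage: find surface points relevant to "wooden chair" in a toy field.
point_feats = torch.randn(10_000, 512)  # stand-in surface language field
scores, mask = query_surface_field(point_feats, "wooden chair")
print(f"{mask.sum().item()} of {len(scores)} points matched")
```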