Scene4U: Hierarchical Layered 3D Scene Reconstruction from Single Panoramic Image for Your Immerse Exploration

📅 2025-04-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address pervasive visual discontinuities, scene voids, and dynamic object interference in single-image panoramic 3D reconstruction, this work proposes a hierarchical disentanglement and inpainting framework. First, it introduces a novel open-vocabulary segmentation–large language model (LLM) co-driven hierarchical scene decomposition paradigm. Second, it designs a diffusion-based hierarchical inpainting module to achieve semantically consistent filling of occluded regions. Third, it establishes a semantic-structure-aligned hierarchical optimization pipeline for 3D Gaussian Splatting. Quantitatively, the method achieves state-of-the-art performance, reducing LPIPS by 24.24% and BRISQUE by 24.40% over prior art, while also attaining the fastest training speed among existing approaches. Furthermore, we release WorldVista3D—the first large-scale panoramic 3D reconstruction dataset covering global landmarks—establishing a new benchmark for immersive scene reconstruction.

📝 Abstract
The reconstruction of immersive and realistic 3D scenes holds significant practical importance in various fields of computer vision and computer graphics. Typically, immersive and realistic scenes should be free from obstructions by dynamic objects, maintain global texture consistency, and allow for unrestricted exploration. Current mainstream methods for image-driven scene construction iteratively refine an initial image with a moving virtual camera to generate the scene. However, previous methods struggle with visual discontinuities due to global texture inconsistencies under varying camera poses, and they frequently exhibit scene voids caused by foreground-background occlusions. To this end, we propose Scene4U, a novel layered 3D scene reconstruction framework that operates on a single panoramic image. Specifically, Scene4U integrates an open-vocabulary segmentation model with a large language model to decompose a real panorama into multiple layers. We then employ a layered repair module based on a diffusion model to restore occluded regions using visual cues and depth information, generating a hierarchical representation of the scene. The multi-layer panorama is then initialized as a 3D Gaussian Splatting representation and refined through layered optimization, ultimately producing an immersive 3D scene with semantic and structural consistency that supports free exploration. Scene4U outperforms state-of-the-art methods, improving LPIPS by 24.24% and BRISQUE by 24.40%, while also achieving the fastest training speed. Additionally, to demonstrate the robustness of Scene4U and allow users to experience immersive scenes at various landmarks, we build the WorldVista3D dataset for 3D scene reconstruction, which contains panoramic images of globally renowned sites. The implementation code and dataset will be released at https://github.com/LongHZ140516/Scene4U .
Problem

Research questions and friction points this paper is trying to address.

Reconstructs 3D scenes from single panoramic images
Addresses visual discontinuities and scene voids
Ensures semantic and structural consistency for free exploration
Innovation

Methods, ideas, or system contributions that make the work stand out.

Layered 3D reconstruction from panoramic image
Open-vocabulary segmentation with large language model
Layered repair using diffusion model and depth
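The layered decompose → inpaint → composite data flow described above can be sketched in a few lines of NumPy. This is a minimal illustration under strong simplifying assumptions, not the authors' implementation: it assumes a single binary foreground mask (the paper derives masks from open-vocabulary segmentation grouped by an LLM), and the "inpainting" stand-in just fills occluded pixels with the mean visible background colour, where the paper uses a depth-guided diffusion model. All function names are hypothetical.

```python
import numpy as np

def split_layers(panorama, fg_mask):
    """Separate a panorama (H, W, 3) into background and foreground
    layers using a binary foreground mask (H, W)."""
    fg = np.where(fg_mask[..., None], panorama, 0.0)
    bg = np.where(fg_mask[..., None], 0.0, panorama)
    return bg, fg

def inpaint_background(background, fg_mask):
    """Stand-in for the diffusion-based layered repair module: fill
    pixels hidden behind the foreground with the mean colour of the
    visible background. Only illustrates the data flow."""
    out = background.copy()
    visible = background[~fg_mask]          # (N, 3) visible pixels
    out[fg_mask] = visible.mean(axis=0)     # constant-colour fill
    return out

def composite(background, foreground, fg_mask):
    """Recombine the layers front-over-back; wherever the foreground
    is kept, the result matches the input panorama."""
    return np.where(fg_mask[..., None], foreground, background)
```

After repair, each layer would be lifted to its own set of 3D Gaussians (using per-layer depth) and optimized back to front, which is what lets the reconstructed scene stay hole-free when the camera moves behind foreground objects.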