Free360: Layered Gaussian Splatting for Unbounded 360-Degree View Synthesis from Extremely Sparse and Unposed Views

📅 2025-03-31
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the challenge of high-fidelity novel-view synthesis and 3D reconstruction from extremely sparse, unposed images of unbounded 360° scenes, this paper proposes layered Gaussian splatting, an explicit representation that decouples the scene into distinct spatial layers to resolve near-field/far-field ambiguity. The method combines a layer-specific bootstrap optimization guided by dense stereo reconstruction with an iterative reconstruction-generation fusion framework and an uncertainty-aware training mechanism, so that reconstruction and generation mutually condition and improve each other. The approach preserves unbounded scene modeling capability while significantly improving geometric fidelity and rendering quality under sparse-view conditions. Experiments demonstrate consistent state-of-the-art performance across diverse unbounded scenes, with superior results in PSNR, SSIM, LPIPS, and surface reconstruction error (Chamfer Distance). This work points toward open-world 3D understanding from sparse, uncalibrated imagery.

📝 Abstract
Neural rendering has demonstrated remarkable success in high-quality 3D neural reconstruction and novel view synthesis with dense input views and accurate poses. However, applying it to extremely sparse, unposed views in unbounded 360° scenes remains a challenging problem. In this paper, we propose a novel neural rendering framework to accomplish the unposed and extremely sparse-view 3D reconstruction in unbounded 360° scenes. To resolve the spatial ambiguity inherent in unbounded scenes with sparse input views, we propose a layered Gaussian-based representation to effectively model the scene with distinct spatial layers. By employing a dense stereo reconstruction model to recover coarse geometry, we introduce a layer-specific bootstrap optimization to refine the noise and fill occluded regions in the reconstruction. Furthermore, we propose an iterative fusion of reconstruction and generation alongside an uncertainty-aware training approach to facilitate mutual conditioning and enhancement between these two processes. Comprehensive experiments show that our approach outperforms existing state-of-the-art methods in terms of rendering quality and surface reconstruction accuracy. Project page: https://zju3dv.github.io/free360/
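As a rough illustration of the layered representation described above, the sketch below partitions reconstructed 3D points into near-field and far-field layers by distance from the camera. The quantile-based threshold and the function name are hypothetical; the abstract does not specify how Free360 actually assigns points to layers.

```python
import numpy as np

def split_layers(points, cam_center, near_ratio=0.75):
    # Distance of each 3D point from the camera center.
    d = np.linalg.norm(points - cam_center, axis=1)
    # Hypothetical heuristic: points within the given distance quantile
    # form the near-field (foreground) layer; the rest are far-field.
    thresh = np.quantile(d, near_ratio)
    near = points[d <= thresh]
    far = points[d > thresh]
    return near, far
```

A layered split like this lets the optimizer treat the well-constrained foreground separately from the weakly constrained, distant background, which is where the spatial ambiguity of unbounded scenes concentrates.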
Problem

Research questions and friction points this paper is trying to address.

Synthesizing 360-degree views from sparse, unposed images
Modeling unbounded scenes with layered Gaussian representation
Improving reconstruction via iterative fusion and uncertainty-aware training
Innovation

Methods, ideas, or system contributions that make the work stand out.

Layered Gaussian-based representation to resolve spatial ambiguity in unbounded scenes
Layer-specific bootstrap optimization to refine noisy geometry and fill occluded regions
Iterative reconstruction-generation fusion with uncertainty-aware training
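The uncertainty-aware training listed above down-weights unreliable pixels (e.g. generated content) during optimization. A minimal sketch of one common formulation, the heteroscedastic loss of Kendall and Gal, is shown below; the paper's exact weighting scheme is not given in this summary, so this is an assumed stand-in.

```python
import numpy as np

def uncertainty_weighted_l1(pred, target, log_sigma):
    # Heteroscedastic weighting: pixels with high predicted uncertainty
    # (large log_sigma) contribute less to the residual term, while the
    # log-sigma regularizer discourages inflating uncertainty everywhere.
    residual = np.abs(pred - target)
    return np.mean(residual * np.exp(-log_sigma) + log_sigma)
```

With zero uncertainty everywhere the loss reduces to a plain L1 loss, and raising `log_sigma` on a pixel shrinks that pixel's residual contribution at the cost of the regularizer term.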