SuperGS: Consistent and Detailed 3D Super-Resolution Scene Reconstruction via Gaussian Splatting

📅 2025-05-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the coarse primitives and detail loss that arise in high-resolution novel view synthesis (HRNVS) when 3D Gaussian Splatting (3DGS) is trained on low-resolution inputs, this paper proposes a two-stage coarse-to-fine training framework. It introduces an error-map-guided densification strategy that back-projects high-resolution depth maps and validates the resulting candidates with a multi-view voting mechanism. It further models prediction uncertainty through variational feature learning and uses it to dynamically weight pseudo-label supervision, jointly optimizing multi-view consistency and detail fidelity. The method combines 3D Gaussian rasterization, a latent feature field, depth-based back-projection, and multi-view geometric constraints. Evaluated on forward-facing and 360-degree datasets, the approach outperforms existing HRNVS methods, producing more detailed and consistent reconstructions while retaining real-time rendering.
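
The depth-based back-projection step can be pictured as lifting high-error pixels into world space through the rendered high-resolution depth map. The following is a minimal sketch under assumed pinhole conventions; the function name, tensor layouts, and error threshold are illustrative and not taken from the paper.

```python
import torch

def backproject_high_error(depth: torch.Tensor,      # (H, W) rendered high-resolution depth
                           error: torch.Tensor,      # (H, W) rendering error vs. the pseudo label
                           K: torch.Tensor,          # (3, 3) camera intrinsics
                           c2w: torch.Tensor,        # (4, 4) camera-to-world transform
                           error_thresh: float = 0.05) -> torch.Tensor:
    """Lift pixels with high rendering error into world space as candidate Gaussian centers."""
    ys, xs = torch.where(error > error_thresh)                               # pixels that need refinement
    z = depth[ys, xs]                                                        # (M,) depths at those pixels
    pix = torch.stack([xs.float(), ys.float(), torch.ones_like(z)], dim=0)   # (3, M) homogeneous pixels
    cam = torch.linalg.inv(K) @ pix * z                                      # (3, M) camera-space points
    cam_h = torch.cat([cam, torch.ones_like(z).unsqueeze(0)], dim=0)         # (4, M) homogeneous points
    world = (c2w @ cam_h)[:3].T                                              # (M, 3) candidate centers
    return world
```

These candidate centers would then be screened by the multi-view voting step before any densification of the underlying Scaffold-GS anchors.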

📝 Abstract
Recently, 3D Gaussian Splatting (3DGS) has excelled in novel view synthesis (NVS) with its real-time rendering capabilities and superior quality. However, it encounters challenges for high-resolution novel view synthesis (HRNVS) due to the coarse nature of primitives derived from low-resolution input views. To address this issue, we propose SuperGS, an expansion of Scaffold-GS designed with a two-stage coarse-to-fine training framework. In the low-resolution stage, we introduce a latent feature field to represent the low-resolution scene, which serves as both the initialization and foundational information for super-resolution optimization. In the high-resolution stage, we propose a multi-view consistent densification strategy that backprojects high-resolution depth maps based on error maps and employs a multi-view voting mechanism, mitigating ambiguities caused by multi-view inconsistencies in the pseudo labels provided by 2D prior models while avoiding Gaussian redundancy. Furthermore, we model uncertainty through variational feature learning and use it to guide further scene representation refinement and adjust the supervisory effect of pseudo-labels, ensuring consistent and detailed scene reconstruction. Extensive experiments demonstrate that SuperGS outperforms state-of-the-art HRNVS methods on both forward-facing and 360-degree datasets.
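
The multi-view voting mechanism from the abstract can be approximated as follows: a back-projected candidate is accepted only if enough views report high rendering error at its projected location. The sketch below assumes per-view 3x4 projection matrices and dense error maps; `multiview_vote`, `error_thresh`, and `min_votes` are hypothetical names and values, not the authors' implementation.

```python
import torch

def multiview_vote(candidates: torch.Tensor,     # (N, 3) world-space candidate centers
                   projections: torch.Tensor,    # (V, 3, 4) per-view projection matrices K[R|t]
                   error_maps: torch.Tensor,     # (V, H, W) per-view rendering error
                   error_thresh: float = 0.05,
                   min_votes: int = 3) -> torch.Tensor:
    """Keep only candidates that enough views agree are under-reconstructed."""
    V, H, W = error_maps.shape
    homo = torch.cat([candidates, torch.ones(len(candidates), 1)], dim=1)    # (N, 4)
    votes = torch.zeros(len(candidates), dtype=torch.long)
    for v in range(V):
        cam = (projections[v] @ homo.T).T                                    # (N, 3) projected points
        uv = cam[:, :2] / cam[:, 2:].clamp(min=1e-6)                         # perspective divide
        in_view = ((cam[:, 2] > 0) & (uv[:, 0] >= 0) & (uv[:, 0] < W)
                   & (uv[:, 1] >= 0) & (uv[:, 1] < H))                       # visible in this view
        u = uv[:, 0].long().clamp(0, W - 1)
        vpix = uv[:, 1].long().clamp(0, H - 1)
        high_error = error_maps[v, vpix, u] > error_thresh                   # sample the error map
        votes += (in_view & high_error).long()
    return candidates[votes >= min_votes]                                    # densify only consistent points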
Problem

Research questions and friction points this paper is trying to address.

Reconstructing high-resolution 3D scenes from only low-resolution input views
Multi-view inconsistencies in the pseudo-labels provided by 2D prior models, which undermine accurate depth-based densification
Recovering fine scene detail while accounting for prediction uncertainty during super-resolution optimization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Two-stage coarse-to-fine training framework
Multi-view consistent densification strategy
Variational feature learning for uncertainty modeling (see the loss sketch below)
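
One way to picture the uncertainty-guided supervision is a per-pixel log-variance predicted from the variational features that down-weights the pseudo-label loss where the model is unsure. The sketch below is an assumed formulation (the exp(-logvar) weighting, the 0.5·logvar regularizer, and the tensor shapes are illustrative), not the paper's exact loss.

```python
import torch

def uncertainty_weighted_loss(rendered: torch.Tensor,      # (B, 3, H, W) high-resolution rendering
                              pseudo_label: torch.Tensor,  # (B, 3, H, W) output of a 2D SR prior
                              logvar: torch.Tensor         # (B, 1, H, W) predicted log-variance
                              ) -> torch.Tensor:
    """Down-weight pseudo-label supervision where the variational features are uncertain."""
    weight = torch.exp(-logvar)                      # confident pixels get full supervision
    data_term = (weight * (rendered - pseudo_label).abs()).mean()
    reg_term = 0.5 * logvar.mean()                   # discourages predicting high variance everywhere
    return data_term + reg_term

# Shape-only usage example with random tensors:
loss = uncertainty_weighted_loss(torch.rand(1, 3, 256, 256),
                                 torch.rand(1, 3, 256, 256),
                                 torch.zeros(1, 1, 256, 256))
```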