CoMapGS: Covisibility Map-based Gaussian Splatting for Sparse Novel View Synthesis

📅 2025-03-25
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the degraded geometric and appearance reconstruction in novel view synthesis under sparse-view settings, particularly in regions with insufficient co-visibility. To this end, we propose a co-visibility map-guided 3D Gaussian splatting framework. Our method introduces a co-visibility-aware adaptive weighting scheme for the rendering loss, an uncertainty-driven proximity classifier, and a COLMAP point cloud–enhanced Gaussian initialization strategy. Crucially, it is the first to explicitly model co-visibility as a geometric prior, enabling joint optimization across high- and low-uncertainty regions. Extensive experiments demonstrate state-of-the-art performance on the Mip-NeRF 360 and LLFF benchmarks. Moreover, our approach exhibits strong robustness and generalization across varying degrees of input sparsity, outperforming existing methods consistently in both quantitative metrics and visual quality.

Technology Category

Application Category

📝 Abstract
We propose Covisibility Map-based Gaussian Splatting (CoMapGS), designed to recover underrepresented sparse regions in sparse novel view synthesis. CoMapGS addresses both high- and low-uncertainty regions by constructing covisibility maps, enhancing initial point clouds, and applying uncertainty-aware weighted supervision using a proximity classifier. Our contributions are threefold: (1) CoMapGS reframes novel view synthesis by leveraging covisibility maps as a core component to address region-specific uncertainty; (2) Enhanced initial point clouds for both low- and high-uncertainty regions compensate for sparse COLMAP-derived point clouds, improving reconstruction quality and benefiting few-shot 3DGS methods; (3) Adaptive supervision with covisibility-score-based weighting and proximity classification achieves consistent performance gains across scenes with varying sparsity scores derived from covisibility maps. Experimental results demonstrate that CoMapGS outperforms state-of-the-art methods on datasets including Mip-NeRF 360 and LLFF.
Problem

Research questions and friction points this paper is trying to address.

Recover underrepresented sparse regions in novel view synthesis
Address high- and low-uncertainty regions using covisibility maps
Enhance sparse point clouds for improved reconstruction quality
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses covisibility maps for uncertainty-aware synthesis
Enhances sparse point clouds for better reconstruction
Applies adaptive weighted supervision with proximity classification
🔎 Similar Papers
No similar papers found.