AI Summary
To address cross-view texture and geometry inconsistency in repaired regions for novel-view synthesis with 3D Gaussian Splatting (3DGS), this paper proposes a depth-guided cross-view consistency optimization framework. It leverages per-view rendered depth to generate dynamic visibility masks, enabling joint optimization of visible background pixels in the 3DGS representation. Additionally, a self-supervised geometric consistency loss enforces joint alignment of geometry and appearance across arbitrary views, without requiring external supervision or explicit surface reconstruction. The method introduces the first depth-driven multi-view mask propagation mechanism, which adaptively refines the Gaussian distribution based on depth-aware visibility. Evaluated on standard benchmarks, it achieves state-of-the-art performance, surpassing prior methods on all major metrics: PSNR, SSIM, and LPIPS. Qualitative results further demonstrate significant improvements in both cross-view consistency and visual fidelity.
Abstract
When performing 3D inpainting with novel-view rendering methods such as Neural Radiance Fields (NeRF) or 3D Gaussian Splatting (3DGS), achieving texture and geometry consistency across camera views remains a challenge. In this paper, we propose 3D Gaussian Inpainting with Depth-Guided Cross-View Consistency (3DGIC), a framework for cross-view consistent 3D inpainting. Guided by the depth information rendered from each training view, 3DGIC exploits background pixels visible across different views to update the inpainting mask, allowing us to refine the 3DGS representation for inpainting. Through extensive experiments on benchmark datasets, we confirm that 3DGIC outperforms current state-of-the-art 3D inpainting methods both quantitatively and qualitatively.
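The core idea of using rendered depth to decide which background pixels are visible in other views can be illustrated with a standard depth-reprojection check. The sketch below is not the paper's implementation: the function name, pinhole intrinsics `K`, source-to-reference transform `T_src_to_ref`, and tolerance `tol` are all illustrative assumptions. It back-projects each source pixel using its rendered depth, reprojects it into a reference view, and marks it visible only when the reprojected depth agrees with the reference view's rendered depth (i.e. the pixel is not occluded there):

```python
import numpy as np

def depth_guided_visibility(depth_src, depth_ref, K, T_src_to_ref,
                            mask_src, tol=0.05):
    """Illustrative depth-reprojection visibility check (not the paper's code).

    depth_src, depth_ref : (H, W) rendered depth maps of two views
    K                    : (3, 3) pinhole intrinsics, shared by both views
    T_src_to_ref         : (4, 4) rigid transform from source to reference camera
    mask_src             : (H, W) bool mask of candidate background pixels
    Returns a bool mask of source pixels whose background is visible in the
    reference view (reprojected depth matches within a relative tolerance).
    """
    H, W = depth_src.shape
    ys, xs = np.mgrid[0:H, 0:W]
    # Back-project source pixels to 3D camera coordinates using rendered depth.
    pix = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3).T
    cam = (np.linalg.inv(K) @ pix) * depth_src.reshape(1, -1)
    cam_h = np.vstack([cam, np.ones((1, cam.shape[1]))])
    # Transform into the reference camera and project back to pixels.
    ref = (T_src_to_ref @ cam_h)[:3]
    proj = K @ ref
    u = np.round(proj[0] / proj[2]).astype(int)
    v = np.round(proj[1] / proj[2]).astype(int)
    z = proj[2]
    vis = np.zeros(H * W, dtype=bool)
    inb = (u >= 0) & (u < W) & (v >= 0) & (v < H) & (z > 0)
    # Visible iff the reprojected depth matches the reference depth map.
    vis[inb] = np.abs(z[inb] - depth_ref[v[inb], u[inb]]) < tol * z[inb]
    return vis.reshape(H, W) & mask_src
```

Pixels passing this check supply reliable cross-view background evidence, which is the kind of signal the paper uses to update the inpainting mask; pixels failing it are occluded in the reference view and give no such evidence.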