SplatFill: 3D Scene Inpainting via Depth-Guided Gaussian Splatting

📅 2025-09-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the challenging problem of repairing missing regions in 3D Gaussian Splatting (3DGS) caused by occlusion or editing, which often leads to geometric distortion, texture blurring, and cross-view artifacts, this paper proposes a depth-guided, object-aware inpainting framework. Methodologically, it jointly leverages depth priors and object-mask supervision to enable consistency-aware, fine-grained reconstruction: Gaussian ellipsoids are placed precisely and local parameters are refined selectively. A depth-consistency loss and multi-view geometric constraints are further introduced to enhance 3D structural fidelity. Evaluated on the SPIn-NeRF dataset, the method achieves state-of-the-art visual quality, reduces training time by 24.5%, and significantly suppresses blurring and artifacts while improving geometric completeness and cross-view consistency.

📝 Abstract
3D Gaussian Splatting (3DGS) has enabled the creation of highly realistic 3D scene representations from sets of multi-view images. However, inpainting missing regions, whether due to occlusion or scene editing, remains a challenging task, often leading to blurry details, artifacts, and inconsistent geometry. In this work, we introduce SplatFill, a novel depth-guided approach for 3DGS scene inpainting that achieves state-of-the-art perceptual quality and improved efficiency. Our method combines two key ideas: (1) joint depth-based and object-based supervision to ensure inpainted Gaussians are accurately placed in 3D space and aligned with surrounding geometry, and (2) a consistency-aware refinement scheme that selectively identifies and corrects inconsistent regions without disrupting the rest of the scene. Evaluations on the SPIn-NeRF dataset demonstrate that SplatFill not only surpasses existing NeRF-based and 3DGS-based inpainting methods in visual fidelity but also reduces training time by 24.5%. Qualitative results show our method delivers sharper details, fewer artifacts, and greater coherence across challenging viewpoints.
Problem

Research questions and friction points this paper is trying to address.

Inpainting missing regions in 3D Gaussian Splatting scenes
Addressing blurry details and inconsistent geometry artifacts
Improving efficiency and perceptual quality in 3D reconstruction
Innovation

Methods, ideas, or system contributions that make the work stand out.

Depth-guided Gaussian Splatting for 3D inpainting
Joint depth and object supervision for geometry alignment
Consistency-aware refinement for selective region correction
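The depth-guided supervision above can be illustrated as a masked depth-consistency term that penalizes disagreement between the depth rendered from the Gaussians and a depth prior, restricted to the inpainted region. This is only a minimal sketch under assumed names (`depth_consistency_loss`, `inpaint_mask`); the paper's exact loss formulation is not reproduced here.

```python
import numpy as np

def depth_consistency_loss(rendered_depth, prior_depth, inpaint_mask):
    """Masked L1 depth-consistency term (illustrative sketch, not the
    paper's implementation). Compares the depth map rendered from the
    Gaussians against a monocular depth prior, but only inside the
    inpainted region selected by the object mask."""
    diff = np.abs(rendered_depth - prior_depth)
    mask = inpaint_mask.astype(np.float64)
    # Mean absolute depth error over the masked pixels only.
    return float((diff * mask).sum() / max(mask.sum(), 1.0))

# Toy example: 4x4 depth maps, inpainting the top-left 2x2 block.
rendered = np.full((4, 4), 2.0)
prior = np.full((4, 4), 2.5)
mask = np.zeros((4, 4), dtype=bool)
mask[:2, :2] = True
loss = depth_consistency_loss(rendered, prior, mask)
```

In practice such a term would be combined with the photometric loss and the multi-view geometric constraints, so that inpainted Gaussians are pulled toward depths consistent with the surrounding scene.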
Mahtab Dahaghin
Pattern Analysis and Computer Vision (PAVIS), Istituto Italiano di Tecnologia (IIT)
Milind G. Padalkar
Pattern Analysis and Computer Vision (PAVIS), Istituto Italiano di Tecnologia (IIT)
Matteo Toso
Pattern Analysis and Computer Vision (PAVIS), Istituto Italiano di Tecnologia (IIT)
Alessio Del Bue
Fondazione Istituto Italiano di Tecnologia (IIT)
Computer Vision · Artificial Intelligence