High-Fidelity 3D Gaussian Inpainting: Preserving Multi-View Consistency and Photorealistic Details

📅 2025-07-23
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
To address multi-view inconsistency and geometric detail distortion in 3D scene inpainting, this paper proposes a high-fidelity 3D Gaussian splatting (3DGS)-based inpainting framework that reconstructs complete scenes from sparse inpainted views. The method introduces two key innovations: (1) an automatic mask refinement strategy, in which Gaussian scene filtering and back-projection refine occlusion masks to improve localization accuracy and boundary naturalness; and (2) a region-wise uncertainty-guided optimization scheme that estimates the importance of each region across multi-view images during training, alleviating multi-view inconsistencies and sharpening fine details. Extensive experiments on standard benchmarks demonstrate superior visual realism, cross-view consistency, and geometric fidelity compared to state-of-the-art methods.
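The mask refinement step described above (Gaussian scene filtering followed by back-projection) can be sketched roughly as follows: project the centers of the filtered Gaussians into a target view and keep only the mask pixels they actually cover. This is a minimal illustrative sketch; the function name, arguments, and thresholds are assumptions, not the paper's implementation.

```python
import numpy as np

def refine_mask(gaussian_centers, K, w2c, mask, depth_thresh=0.05):
    """Back-project filtered Gaussian centers into one view to tighten a 2D
    inpainting mask (illustrative sketch, not the paper's actual method).

    gaussian_centers : (N, 3) world-space centers kept by scene filtering
    K                : (3, 3) camera intrinsics
    w2c              : (4, 4) world-to-camera extrinsics
    mask             : (H, W) initial boolean occlusion mask
    """
    H, W = mask.shape
    # Transform centers from world space to camera space.
    pts_h = np.concatenate(
        [gaussian_centers, np.ones((len(gaussian_centers), 1))], axis=1)
    cam = (w2c @ pts_h.T).T[:, :3]
    cam = cam[cam[:, 2] > depth_thresh]          # drop points behind the camera
    # Perspective projection to pixel coordinates.
    uv = (K @ cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]
    px = np.round(uv).astype(int)
    valid = (px[:, 0] >= 0) & (px[:, 0] < W) & (px[:, 1] >= 0) & (px[:, 1] < H)
    px = px[valid]
    # Keep only mask pixels actually covered by projected Gaussians.
    hit = np.zeros_like(mask)
    hit[px[:, 1], px[:, 0]] = True
    return mask & hit
```

In practice the paper also restores realistic boundaries after this localization step; the sketch covers only the geometric intersection of the mask with the projected scene content.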

πŸ“ Abstract
Recent advancements in multi-view 3D reconstruction and novel-view synthesis, particularly through Neural Radiance Fields (NeRF) and 3D Gaussian Splatting (3DGS), have greatly enhanced the fidelity and efficiency of 3D content creation. However, inpainting 3D scenes remains a challenging task due to the inherent irregularity of 3D structures and the critical need for maintaining multi-view consistency. In this work, we propose a novel 3D Gaussian inpainting framework that reconstructs complete 3D scenes by leveraging sparse inpainted views. Our framework incorporates an automatic Mask Refinement Process and region-wise Uncertainty-guided Optimization. Specifically, we refine the inpainting mask using a series of operations, including Gaussian scene filtering and back-projection, enabling more accurate localization of occluded regions and realistic boundary restoration. Furthermore, our Uncertainty-guided Fine-grained Optimization strategy, which estimates the importance of each region across multi-view images during training, alleviates multi-view inconsistencies and enhances the fidelity of fine details in the inpainted results. Comprehensive experiments conducted on diverse datasets demonstrate that our approach outperforms existing state-of-the-art methods in both visual quality and view consistency.
Problem

Research questions and friction points this paper is trying to address.

Achieving multi-view consistency in 3D scene inpainting
Preserving photorealistic details during 3D reconstruction
Handling irregular 3D structures for accurate inpainting
Innovation

Methods, ideas, or system contributions that make the work stand out.

3D Gaussian inpainting with sparse views
Mask Refinement Process for accurate localization
Uncertainty-guided Optimization for multi-view consistency
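One way to read the uncertainty-guided optimization is as a per-region weighting of the photometric loss: regions whose errors disagree strongly across views are treated as uncertain and down-weighted during training. The sketch below is a hypothetical rendering of that idea; all names and the specific uncertainty measure are assumptions, not taken from the paper.

```python
import numpy as np

def uncertainty_weighted_loss(renders, targets, region_masks):
    """Region-wise uncertainty weighting across views (illustrative sketch).

    renders, targets : (V, H, W) rendered views vs. inpainted references
    region_masks     : (R, H, W) boolean masks partitioning the image

    Regions whose mean error varies strongly from view to view are treated
    as uncertain and down-weighted, one plausible way to soften multi-view
    inconsistency in the inpainted content.
    """
    err = np.abs(renders - targets)              # (V, H, W) per-pixel L1 error
    total, weight_sum = 0.0, 0.0
    for m in region_masks:
        region_err = err[:, m]                   # (V, P) errors inside region
        # Cross-view disagreement -> higher uncertainty -> lower weight.
        uncertainty = region_err.mean(axis=1).std()
        w = 1.0 / (1.0 + uncertainty)
        total += w * region_err.mean()
        weight_sum += w
    return total / max(weight_sum, 1e-8)
```

A region that is rendered consistently across all views keeps full weight, while a region with view-dependent artifacts contributes less to the gradient, which matches the stated goal of alleviating multi-view inconsistency.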
Jun Zhou
School of Information Science and Technology, Dalian Maritime University, Dalian, China
Dinghao Li
School of Information Science and Technology, Dalian Maritime University, Dalian, China
Nannan Li
PhD at Boston University
Generative Models · Computer Vision
Mingjie Wang
School of Science, Zhejiang Sci-Tech University, Zhejiang, China