RI3D: Few-Shot Gaussian Splatting With Repair and Inpainting Diffusion Priors

📅 2025-03-13
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses high-fidelity novel-view synthesis from extremely sparse inputs. To tackle the inherent ambiguity, RI3D decouples view synthesis into two tasks, reconstructing visible regions and hallucinating missing regions, and handles each with a personalized diffusion prior: a 'repair' model and an 'inpainting' model. A two-stage optimization strategy first refines visible areas using repair-model predictions as pseudo ground truth, then fills unobserved regions with the inpainting model while enforcing coherence through further optimization. The optimization is supported by a Gaussian initialization that derives per-image depth by combining 3D-consistent, smooth depth with highly detailed relative depth. Under extremely sparse settings, the approach produces detailed textures in both visible and missing regions and outperforms state-of-the-art methods on a diverse set of scenes.

📝 Abstract
In this paper, we propose RI3D, a novel 3DGS-based approach that harnesses the power of diffusion models to reconstruct high-quality novel views given a sparse set of input images. Our key contribution is separating the view synthesis process into two tasks of reconstructing visible regions and hallucinating missing regions, and introducing two personalized diffusion models, each tailored to one of these tasks. Specifically, one model ('repair') takes a rendered image as input and predicts the corresponding high-quality image, which in turn is used as a pseudo ground truth image to constrain the optimization. The other model ('inpainting') primarily focuses on hallucinating details in unobserved areas. To integrate these models effectively, we introduce a two-stage optimization strategy: the first stage reconstructs visible areas using the repair model, and the second stage reconstructs missing regions with the inpainting model while ensuring coherence through further optimization. Moreover, we augment the optimization with a novel Gaussian initialization method that obtains per-image depth by combining 3D-consistent and smooth depth with highly detailed relative depth. We demonstrate that by separating the process into two tasks and addressing them with the repair and inpainting models, we produce results with detailed textures in both visible and missing regions that outperform state-of-the-art approaches on a diverse set of scenes with extremely sparse inputs.
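The two-stage strategy described above can be illustrated with a toy sketch. Everything here is a stand-in, not the paper's implementation: rendering is reduced to the identity (so the optimized parameters double as the rendered image), `repair` and `inpaint` are placeholder callables standing in for the diffusion models, and the update is plain gradient descent on an L2 loss against the pseudo ground truth.

```python
import numpy as np

def two_stage_optimize(params, repair, inpaint, missing_mask,
                       steps=(200, 200), lr=0.2):
    """Toy version of a two-stage loop: stage 1 fits visible regions to
    repair-model pseudo ground truth; stage 2 fills missing regions with
    inpainting-model predictions while continuing to optimize."""
    params = params.copy()
    visible = ~missing_mask

    # Stage 1: reconstruct visible areas against repair-model pseudo-GT.
    for _ in range(steps[0]):
        pseudo = repair(params)                    # pseudo ground truth image
        grad = 2.0 * (params - pseudo) * visible   # L2 gradient, visible pixels only
        params -= lr * grad

    # Stage 2: hallucinate missing regions with the inpainting model,
    # then keep optimizing the whole image for coherence.
    for _ in range(steps[1]):
        pseudo = inpaint(params, missing_mask)
        grad = 2.0 * (params - pseudo)
        params -= lr * grad

    return params
```

In the actual method the renderer is a 3D Gaussian splatting pipeline and the two priors are personalized diffusion models; the sketch only shows how pseudo ground truth from the two models drives the two optimization stages.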
Problem

Research questions and friction points this paper is trying to address.

Novel-view synthesis from extremely sparse inputs is severely ill-posed.
Renderings of visible regions from sparse-input 3DGS contain artifacts that need correction.
Unobserved regions must be hallucinated with detailed texture while staying coherent with the reconstruction.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Separates view synthesis into visible-region reconstruction and missing-region hallucination
Introduces two personalized diffusion priors: a 'repair' model that produces pseudo ground truth for rendered views, and an 'inpainting' model for unobserved areas
Proposes a two-stage optimization strategy plus a Gaussian initialization combining 3D-consistent smooth depth with detailed relative depth
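The abstract states only that the Gaussian initialization combines 3D-consistent, smooth depth with highly detailed relative depth. One standard way to realize such a combination (an illustration, not necessarily the paper's exact procedure) is a least-squares scale-and-shift alignment of the relative depth map onto the consistent depth, as commonly done when evaluating monocular depth predictors:

```python
import numpy as np

def align_relative_depth(relative, metric, mask=None):
    """Fit scale s and shift t so that s * relative + t best matches the
    3D-consistent depth in the least-squares sense, then apply them,
    keeping the fine detail of the relative map."""
    valid = np.ones(relative.shape, dtype=bool) if mask is None else mask
    r = relative[valid].ravel()
    m = metric[valid].ravel()
    A = np.stack([r, np.ones_like(r)], axis=1)   # columns: [relative, 1]
    (s, t), *_ = np.linalg.lstsq(A, m, rcond=None)
    return s * relative + t
```

The aligned map inherits the global scale and consistency of the smooth depth while preserving the fine structure of the relative depth, which is what the initialization needs before lifting pixels to Gaussians.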