Geometry-Aware Diffusion Models for Multiview Scene Inpainting

📅 2025-02-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses 3D scene inpainting: completing masked regions of multi-view images so that the results are geometrically consistent across views. The authors propose a geometry-aware conditional diffusion model that produces high-fidelity, cross-view consistent inpaintings. Unlike methods that rely on explicit or implicit radiance fields, or that require many input views, the approach fuses geometric and appearance cues directly in a learned latent space. A learnable implicit view-consistency constraint and geometry-guided cross-view feature alignment avoid the blurring artifacts common to conventional fusion strategies. The method also operates effectively in the few-view setting, substantially reducing data requirements. Evaluated on the SPIn-NeRF and NeRFiller benchmarks, it achieves state-of-the-art performance, excelling in geometric consistency and fine-grained detail preservation.

📝 Abstract
In this paper, we focus on 3D scene inpainting, where parts of an input image set, captured from different viewpoints, are masked out. The main challenge lies in generating plausible image completions that are geometrically consistent across views. Most recent work addresses this challenge by combining generative models with a 3D radiance field to fuse information across viewpoints. However, a major drawback of these methods is that they often produce blurry images due to the fusion of inconsistent cross-view images. To avoid blurry inpaintings, we eschew the use of an explicit or implicit radiance field altogether and instead fuse cross-view information in a learned space. In particular, we introduce a geometry-aware conditional generative model, capable of inpainting multi-view consistent images based on both geometric and appearance cues from reference images. A key advantage of our approach over existing methods is its unique ability to inpaint masked scenes with a limited number of views (i.e., few-view inpainting), whereas previous methods require relatively large image sets for their 3D model fitting step. Empirically, we evaluate and compare our scene-centric inpainting method on two datasets, SPIn-NeRF and NeRFiller, which contain images captured at narrow and wide baselines, respectively, and achieve state-of-the-art 3D inpainting performance on both. Additionally, we demonstrate the efficacy of our approach in the few-view setting compared to prior methods.
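To make the abstract's "geometry-guided cross-view feature alignment" concrete, the sketch below shows one common way such alignment can be realized: back-projecting target-view pixels to 3D with a depth map and reprojecting them into a reference view to gather latent features for conditioning. This is an illustrative sketch only, not the authors' implementation; the function name, the pinhole camera model, and the nearest-neighbour sampling are all assumptions.

```python
import numpy as np

def warp_reference_features(ref_feats, depth, K, R, t):
    """Warp reference-view latent features into the target view (hypothetical
    helper illustrating geometry-guided cross-view alignment).

    ref_feats: (H, W, C) latent features of the reference view.
    depth:     (H, W) depth map of the target view.
    K:         (3, 3) camera intrinsics (shared by both views for simplicity).
    R, t:      rotation (3, 3) and translation (3,) from target to reference frame.
    Returns aligned features (H, W, C) and a validity mask (H, W).
    """
    H, W, C = ref_feats.shape
    ys, xs = np.mgrid[0:H, 0:W]
    # Homogeneous pixel coordinates of the target view, shape (3, H*W)
    pix = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3).T
    # Back-project target pixels to 3D camera-space points using the depth map
    cam_pts = np.linalg.inv(K) @ pix * depth.reshape(1, -1)
    # Transform into the reference camera frame and reproject to its image plane
    ref_pts = R @ cam_pts + t[:, None]
    proj = K @ ref_pts
    uv = proj[:2] / np.clip(proj[2:], 1e-6, None)
    # Nearest-neighbour sampling of the reference features
    u = np.round(uv[0]).astype(int).reshape(H, W)
    v = np.round(uv[1]).astype(int).reshape(H, W)
    valid = (u >= 0) & (u < W) & (v >= 0) & (v < H) & (ref_pts[2].reshape(H, W) > 0)
    aligned = np.zeros((H, W, C), dtype=ref_feats.dtype)
    aligned[valid] = ref_feats[v[valid], u[valid]]
    return aligned, valid
```

In a diffusion-based inpainter, features aligned this way could be concatenated with the noisy latent as conditioning, so that the denoiser sees geometrically corresponding appearance cues from the reference views rather than fusing inconsistent pixels after the fact.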
Problem

Research questions and friction points this paper is trying to address.

3D scene inpainting with geometric consistency
Avoiding blurry images in multiview inpainting
Few-view inpainting with limited image sets
Innovation

Methods, ideas, or system contributions that make the work stand out.

Learned space cross-view fusion
Geometry-aware conditional generative model
Few-view consistent scene inpainting
Authors

Ahmad Salimi
York University

Tristan Aumentado-Armstrong
Samsung AI Center (Toronto) | Postdoc from York Univ. | PhD from Univ. of Toronto
Research areas: Artificial Intelligence, Computer Vision, Machine Learning, Computational Biology

Marcus A. Brubaker
York University, Vector Institute for AI, Google DeepMind

K. Derpanis
York University, Vector Institute for AI, Samsung AI Centre Toronto