Splat and Replace: 3D Reconstruction with Repetitive Elements

📅 2025-06-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address poor reconstruction quality in occluded and undersampled regions under sparse-view settings for Neural Radiance Fields (NeRF) and 3D Gaussian Splatting (3DGS), this paper proposes the first novel view synthesis framework to leverage collaborative optimization of repetitive structures. The method employs unsupervised instance segmentation to identify repeated elements in the scene, followed by rigid and non-rigid cross-instance registration to achieve geometric alignment. It further introduces a cross-instance feature fusion mechanism that shares geometric priors across instances while preserving their distinct appearances. By jointly enforcing structural consistency and appearance diversity, the framework significantly improves geometric completeness and texture fidelity in occluded and sparsely observed regions. Extensive experiments on both synthetic and real-world datasets demonstrate consistent improvements in PSNR and SSIM, validating the effectiveness of explicit repetitive-structure modeling under low-coverage observation conditions.

📝 Abstract
We leverage repetitive elements in 3D scenes to improve novel view synthesis. Neural Radiance Fields (NeRF) and 3D Gaussian Splatting (3DGS) have greatly improved novel view synthesis but renderings of unseen and occluded parts remain low-quality if the training views are not exhaustive enough. Our key observation is that our environment is often full of repetitive elements. We propose to leverage those repetitions to improve the reconstruction of low-quality parts of the scene due to poor coverage and occlusions. We propose a method that segments each repeated instance in a 3DGS reconstruction, registers them together, and allows information to be shared among instances. Our method improves the geometry while also accounting for appearance variations across instances. We demonstrate our method on a variety of synthetic and real scenes with typical repetitive elements, leading to a substantial improvement in the quality of novel view synthesis.
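The abstract's pipeline segments repeated instances, registers them together, and shares information among them. The rigid part of that registration step can be sketched as a Kabsch alignment between the Gaussian centers of two instances. This is a minimal sketch assuming known point correspondences; the paper's method also performs non-rigid registration, which is not shown here.

```python
import numpy as np

def rigid_register(src, dst):
    """Best-fit rotation R and translation t mapping src points onto dst
    (Kabsch algorithm). src and dst are (N, 3) arrays with row-wise
    correspondence, e.g. Gaussian centers of two repeated instances."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    # Correct for a possible reflection so R is a proper rotation.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t
```

In practice, correspondences between real instances are unknown, so a step like this would typically sit inside an ICP-style loop that alternates nearest-neighbor matching and re-alignment.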
Problem

Research questions and friction points this paper is trying to address.

Improves novel view synthesis using repetitive elements
Addresses low-quality rendering of unseen and occluded parts
Enhances geometry and appearance in 3DGS reconstructions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverages repetitive elements for 3D reconstruction
Segments and registers repeated instances in 3DGS
Shares information among instances for improved quality
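The information-sharing idea in the bullets above can be illustrated with a toy fusion step: once instances are registered into a canonical frame, geometry is pooled across instances while appearance stays per-instance. This is a hypothetical sketch, not the paper's actual fusion mechanism; it assumes index-wise correspondence between the Gaussians of each instance, which the real method does not require.

```python
import numpy as np

def fuse_registered_instances(instances):
    """Share geometry across registered instances, keep appearance distinct.

    `instances` is a list of dicts (a hypothetical data layout):
      'xyz'   : (N, 3) Gaussian centers already mapped to a canonical frame,
                with index-wise correspondence across instances
      'color' : (N, 3) per-Gaussian appearance

    Returns one fused geometry (the cross-instance mean, filling in regions
    that any single instance observed poorly) and the untouched per-instance
    appearances.
    """
    fused_xyz = np.mean([inst["xyz"] for inst in instances], axis=0)
    colors = [inst["color"] for inst in instances]  # appearance diversity kept
    return fused_xyz, colors
```

Averaging is the crudest possible prior-sharing; it only conveys the core idea that repeated instances act as extra observations of the same underlying geometry.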
Authors

Nicolás Violante, Inria & Université Côte d’Azur, France, and Adobe, USA
Andreas Meuleman, Post-Doctoral Fellow, Inria (visual computing)
Alban Gauthier, Research Scientist at Solaya (Computer Graphics, Rendering, Appearance Modeling, Generative Models)
Frédo Durand, MIT, USA
Thibault Groueix, META (3D Generative Models, Computer Vision, Machine Learning)
G. Drettakis, Inria & Université Côte d’Azur, France