Training-Free Instance-Aware 3D Scene Reconstruction and Diffusion-Based View Synthesis from Sparse Images

📅 2026-03-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of high-fidelity, instance-aware 3D indoor scene reconstruction and novel view synthesis from sparse, pose-free RGB images by proposing a training-free reconstruction and rendering system. The method integrates point cloud reconstruction, propagation of 2D semantic segmentation into 3D, warp-guided outlier geometry removal, and an instance-enhancement mechanism, and further leverages a 3D-aware diffusion model for novel view rendering. Without scene-specific optimization or model retraining, the system enables high-quality 3D reconstruction, photorealistic view synthesis, and instance-level editing such as object removal, significantly improving geometric completeness, semantic consistency, and editability under sparse input conditions.
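The warp-guided outlier removal step described above can be illustrated with a minimal sketch: project each reconstructed 3D point into a reference view and keep only points whose projected depth agrees with that view's depth map. The function name, tolerance, and camera convention here are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def warp_filter_points(points, depth_ref, K, R, t, tol=0.05):
    """Hypothetical sketch of warp-guided outlier removal.

    points    : (N, 3) world-space points
    depth_ref : (H, W) depth map of a reference view
    K, R, t   : intrinsics and world-to-camera pose of that view
    tol       : relative depth-consistency tolerance (assumed value)
    """
    # Transform world points into the reference camera frame.
    cam = points @ R.T + t                      # (N, 3)
    z = cam[:, 2]
    uv = cam @ K.T                              # homogeneous pixel coords
    u = uv[:, 0] / z
    v = uv[:, 1] / z

    h, w = depth_ref.shape
    keep = np.zeros(len(points), dtype=bool)
    # Only points in front of the camera and inside the image can be checked.
    inside = (z > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    ui = u[inside].astype(int)
    vi = v[inside].astype(int)
    # A point survives if its depth matches the reference depth map.
    consistent = np.abs(depth_ref[vi, ui] - z[inside]) < tol * z[inside]
    keep[np.flatnonzero(inside)[consistent]] = True
    return points[keep]
```

In a real pipeline this check would be aggregated across several source views, discarding geometry that is inconsistent in most of them.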

📝 Abstract
We introduce a novel, training-free system for reconstructing, understanding, and rendering 3D indoor scenes from a sparse set of unposed RGB images. Unlike traditional radiance field approaches that require dense views and per-scene optimization, our pipeline achieves high-fidelity results without any training or pose preprocessing. The system integrates three key innovations: (1) A robust point cloud reconstruction module that filters unreliable geometry using a warping-based anomaly removal strategy; (2) A warping-guided 2D-to-3D instance lifting mechanism that propagates 2D segmentation masks into a consistent, instance-aware 3D representation; and (3) A novel rendering approach that projects the point cloud into new views and refines the renderings with a 3D-aware diffusion model. Our method leverages the generative power of diffusion to compensate for missing geometry and enhances realism, especially under sparse input conditions. We further demonstrate that object-level scene editing such as instance removal can be naturally supported in our pipeline by modifying only the point cloud, enabling the synthesis of consistent, edited views without retraining. Our results establish a new direction for efficient, editable 3D content generation without relying on scene-specific optimization. Project page: https://jiatongxia.github.io/TID3R/
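The abstract notes that object-level editing such as instance removal reduces to modifying the point cloud once 2D masks have been lifted to per-point instance labels. A minimal sketch of that idea, with hypothetical names and data layout not taken from the paper:

```python
import numpy as np

def remove_instance(points, colors, labels, target_id):
    """Hypothetical sketch of instance-level editing: after 2D-to-3D
    instance lifting, each point carries an instance label, so removing
    an object is just dropping its points before rendering new views.

    points : (N, 3) positions, colors : (N, 3) RGB, labels : (N,) ids
    """
    keep = labels != target_id
    return points[keep], colors[keep], labels[keep]
```

The edited point cloud is then projected into novel views and refined by the 3D-aware diffusion model as in the unedited case, so no retraining is needed.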
Problem

Research questions and friction points this paper is trying to address.

3D scene reconstruction
instance-aware
sparse images
view synthesis
training-free
Innovation

Methods, ideas, or system contributions that make the work stand out.

training-free
instance-aware 3D reconstruction
diffusion-based view synthesis
sparse view reconstruction
3D scene editing