Generative Densification: Learning to Densify Gaussians for High-Fidelity Generalizable 3D Reconstruction

📅 2024-12-09
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
Feed-forward Gaussian models struggle to reconstruct high-frequency details in sparse-view settings because the limited number of Gaussian primitives caps fidelity. To address this, the paper proposes a generative densification method: in a single forward pass, it up-samples the feature representations of a pre-trained feed-forward model, exploiting the implicit geometric prior they encode, and feeds them to a conditional Gaussian parameter generation network that directly outputs dense, fine-grained 3D Gaussians. This is the first work to apply generative feature up-sampling to Gaussian densification, replacing the iterative split-and-clone strategy of 3D-GS and thereby improving both generalization and detail fidelity. Experiments on object-level and scene-level reconstruction tasks show consistent gains over state-of-the-art methods in PSNR, SSIM, and LPIPS at comparable or smaller model sizes.

📝 Abstract
Generalized feed-forward Gaussian models have achieved significant progress in sparse-view 3D reconstruction by leveraging prior knowledge from large multi-view datasets. However, these models often struggle to represent high-frequency details due to the limited number of Gaussians. While the densification strategy used in per-scene 3D Gaussian splatting (3D-GS) optimization can be adapted to the feed-forward models, it may not be ideally suited for generalized scenarios. In this paper, we propose Generative Densification, an efficient and generalizable method to densify Gaussians generated by feed-forward models. Unlike the 3D-GS densification strategy, which iteratively splits and clones raw Gaussian parameters, our method up-samples feature representations from the feed-forward models and generates their corresponding fine Gaussians in a single forward pass, leveraging the embedded prior knowledge for enhanced generalization. Experimental results on both object-level and scene-level reconstruction tasks demonstrate that our method outperforms state-of-the-art approaches with comparable or smaller model sizes, achieving notable improvements in representing fine details.
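The core idea in the abstract — up-sample each coarse Gaussian's feature vector into several child features in one forward pass, then decode each child into fine Gaussian parameters — can be illustrated with a toy sketch. This is not the authors' implementation; the feature dimension, up-sampling factor `K`, the linear up-sampler, and the decoder layout (position offset, scale, opacity) are all illustrative assumptions.

```python
import math
import random

FEAT_DIM = 8  # assumed per-Gaussian feature size
K = 4         # assumed up-sampling factor: children per coarse Gaussian

def upsample(feature, weights):
    """Map one coarse feature to K child features via K toy linear maps
    (standing in for the learned feature up-sampler)."""
    return [
        [sum(f * w for f, w in zip(feature, row)) for row in weights[k]]
        for k in range(K)
    ]

def decode(child_feature):
    """Toy decoder: split a child feature into fine Gaussian parameters.
    Real models predict full covariance/rotation and color as well."""
    return {
        "offset": child_feature[:3],                       # position offset
        "scale": abs(child_feature[3]),                    # non-negative scale
        "opacity": 1.0 / (1.0 + math.exp(-child_feature[-1])),  # sigmoid
    }

random.seed(0)
# Random weights stand in for trained parameters.
weights = [
    [[random.gauss(0, 0.1) for _ in range(FEAT_DIM)] for _ in range(FEAT_DIM)]
    for _ in range(K)
]

coarse_features = [[random.gauss(0, 1) for _ in range(FEAT_DIM)] for _ in range(2)]

# Single forward pass: every coarse Gaussian spawns K fine Gaussians at once,
# with no iterative split/clone loop as in per-scene 3D-GS optimization.
fine_gaussians = [decode(c) for f in coarse_features for c in upsample(f, weights)]
print(len(fine_gaussians))  # 2 coarse Gaussians * K children each
```

The contrast with 3D-GS densification is the control flow: there is no optimization loop deciding when to split or clone; the learned up-sampler emits all fine Gaussians in one shot, so the prior baked into its weights does the work that per-scene gradients do in 3D-GS.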
Problem

Research questions and friction points this paper is trying to address.

Improves high-frequency detail representation in 3D reconstruction
Enhances generalization of Gaussian densification in feed-forward models
Achieves high-fidelity 3D reconstruction with smaller model sizes
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generative Densification for high-fidelity 3D reconstruction
Up-samples feature representations in single forward pass
Leverages prior knowledge for enhanced generalization