🤖 AI Summary
Feed-forward Gaussian models struggle to reconstruct high-frequency details in sparse-view settings because they produce only a limited number of Gaussian primitives. This paper proposes a generative densification method: in a single forward pass, it up-samples feature representations from a pre-trained feed-forward model—exploiting the geometric prior embedded in those features—and decodes the up-sampled features through a conditional Gaussian parameter-generation network, directly outputting dense, fine-grained 3D Gaussians. The work is the first to apply generative feature up-sampling to Gaussian densification, replacing the iterative splitting and cloning strategy of 3D-GS and thereby improving both generalization and detail fidelity. Extensive experiments on object-level and scene-level reconstruction tasks show consistent gains over state-of-the-art methods in PSNR, SSIM, and LPIPS, with comparable or smaller model size.
📝 Abstract
Generalized feed-forward Gaussian models have achieved significant progress in sparse-view 3D reconstruction by leveraging prior knowledge from large multi-view datasets. However, these models often struggle to represent high-frequency details due to the limited number of Gaussians. While the densification strategy used in per-scene 3D Gaussian splatting (3D-GS) optimization can be adapted to the feed-forward models, it may not be ideally suited for generalized scenarios. In this paper, we propose Generative Densification, an efficient and generalizable method to densify Gaussians generated by feed-forward models. Unlike the 3D-GS densification strategy, which iteratively splits and clones raw Gaussian parameters, our method up-samples feature representations from the feed-forward models and generates their corresponding fine Gaussians in a single forward pass, leveraging the embedded prior knowledge for enhanced generalization. Experimental results on both object-level and scene-level reconstruction tasks demonstrate that our method outperforms state-of-the-art approaches with comparable or smaller model sizes, achieving notable improvements in representing fine details.
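The core idea—up-sampling per-Gaussian features and decoding them into fine Gaussians in one forward pass—can be sketched as below. This is a minimal, hypothetical illustration, not the paper's architecture: the linear up-sampler `W_up` and parameter head `W_dec` stand in for learned networks, the parameter layout (xyz offset, scale, rotation, opacity, color) is an assumed convention, and the weights here are random rather than trained.

```python
import numpy as np

rng = np.random.default_rng(0)

def densify(coarse_feats, coarse_xyz, K=4, d_out=14):
    """Hypothetical sketch of generative densification:
    each coarse Gaussian's feature spawns K fine features via a
    (stand-in, untrained) linear up-sampler, and a parameter head
    decodes every fine feature into Gaussian parameters in a
    single forward pass -- no iterative split/clone loop."""
    N, d = coarse_feats.shape
    W_up = rng.standard_normal((d, K * d)) * 0.01    # stand-in for learned feature up-sampler
    fine = (coarse_feats @ W_up).reshape(N * K, d)   # (N*K, d) fine features
    W_dec = rng.standard_normal((d, d_out)) * 0.01   # stand-in for Gaussian parameter head
    params = fine @ W_dec                            # raw per-Gaussian parameters
    # assumed layout: first 3 channels are xyz offsets from the parent coarse position
    fine_xyz = np.repeat(coarse_xyz, K, axis=0) + params[:, :3]
    rest = params[:, 3:]                             # scale / rotation / opacity / color
    return fine_xyz, rest

# each of 100 coarse Gaussians is densified into K=4 fine Gaussians
feats = rng.standard_normal((100, 32))
xyz = rng.standard_normal((100, 3))
fine_xyz, fine_params = densify(feats, xyz, K=4)
```

The key contrast with 3D-GS densification is that the fine Gaussians are produced by a feed-forward decoder conditioned on features (which can carry prior knowledge from large-scale training), rather than by repeatedly perturbing raw Gaussian parameters during per-scene optimization.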