🤖 AI Summary
To address the slow optimization and poor geometric quality (particularly in textureless regions) of volumetric rendering methods for large-scale indoor 3D surface reconstruction, this paper proposes a learning-based framework for dense Gaussian initialization and dynamic optimization. The key contributions are: (1) the first data-driven Gaussian initialization method, replacing manual initialization; (2) a differentiable densification network that jointly predicts new Gaussian parameters and gradient-based updates, eliminating heuristic rules; and (3) a unified pipeline integrating 2D Gaussian splatting, neural rendering, and rendering-gradient-guided learning. Evaluated on standard indoor datasets, the method achieves an 8× speedup over state-of-the-art approaches while reducing depth error by up to 48%. It markedly improves geometric fidelity, especially for planar structures such as walls, demonstrating superior reconstruction accuracy and efficiency.
📝 Abstract
Surface reconstruction is fundamental to computer vision and graphics, enabling applications in 3D modeling, mixed reality, robotics, and more. Existing approaches based on volumetric rendering achieve promising results, but they optimize on a per-scene basis, leading to slow optimization that can struggle to model under-observed or textureless regions. We introduce QuickSplat, which learns data-driven priors to generate dense initializations for 2D Gaussian splatting optimization of large-scale indoor scenes. This provides a strong starting point for the reconstruction, accelerating the convergence of the optimization and improving the geometry of flat wall structures. We further learn to jointly estimate the densification and update of the scene parameters during each iteration; our proposed densifier network predicts new Gaussians based on the rendering gradients of existing ones, removing the need for heuristic densification rules. Extensive experiments on large-scale indoor scene reconstruction demonstrate the superiority of our data-driven optimization. Concretely, we accelerate runtime by 8×, while decreasing depth errors by up to 48% compared to state-of-the-art methods.
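For context, the heuristic densification that the learned densifier network replaces can be illustrated with a toy sketch. The snippet below is not the paper's method; it is a minimal NumPy illustration of the classic gradient-guided rule from Gaussian splatting pipelines, where Gaussians with large accumulated view-space gradients (signaling under-reconstructed regions) are split into children. The function name, threshold `tau`, and shrink factor are all illustrative assumptions.

```python
import numpy as np

def densify_by_gradient(positions, scales, grad_norms, tau=0.01, rng=None):
    """Toy gradient-guided densification (the heuristic a learned
    densifier would replace): each Gaussian whose accumulated gradient
    norm exceeds tau spawns one child near its parent.

    positions: (N, 3) Gaussian centers
    scales:    (N, 3) per-axis extents
    grad_norms: (N,) accumulated rendering-gradient magnitudes
    """
    rng = np.random.default_rng(0) if rng is None else rng
    hot = grad_norms > tau                      # flag high-gradient Gaussians
    offsets = rng.normal(size=(hot.sum(), 3)) * scales[hot]
    new_positions = positions[hot] + offsets    # children sampled near parents
    new_scales = scales[hot] / 1.6              # shrink children (split-style)
    return (np.concatenate([positions, new_positions]),
            np.concatenate([scales, new_scales]))
```

QuickSplat's contribution is to learn this decision instead: a network consumes the rendering gradients of existing Gaussians and predicts both new Gaussians and parameter updates, so no hand-tuned threshold like `tau` is needed.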