🤖 AI Summary
Feedforward 3D Gaussian Splatting (3DGS) suffers from coarse primitive localization, low efficiency, and severe rendering artifacts due to its reliance on a fixed pixel grid for primitive placement.
Method: We propose an adaptive architecture that detects Gaussian primitives at sub-pixel precision. A keypoint-inspired multi-resolution decoder enables pose-agnostic, end-to-end self-supervised learning of sparse Gaussian primitive distributions, integrating multi-scale feature decoding, self-supervised 3D reconstruction, and differentiable rasterization. Notably, we find empirically that optimizing Gaussian rendering also improves camera pose estimation accuracy.
Results: Our method achieves state-of-the-art performance among real-time feedforward 3DGS models: novel-view synthesis completes in seconds, the primitive count is reduced by over 50%, rendering artifacts are significantly suppressed, and geometric detail fidelity is markedly improved, demonstrating superior efficiency, accuracy, and visual quality.
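To make the core idea concrete, here is a minimal sketch of sub-pixel ("off the grid") primitive placement: instead of anchoring one Gaussian per pixel, a decoder emits per-patch offset logits that are squashed into continuous coordinates inside each patch. All names, the patch size, and the single-primitive-per-patch simplification are illustrative assumptions, not the paper's actual decoder, which predicts multi-resolution, multi-primitive distributions.

```python
import numpy as np

def offgrid_centers(patch_logits, patch_size=8):
    """Convert per-patch offset logits into continuous (sub-pixel)
    2D Gaussian center coordinates.

    patch_logits: array of shape (H_p, W_p, 2), one raw (dx, dy)
    logit pair per image patch. Hypothetical simplification: the
    paper's multi-resolution decoder predicts several primitives
    per patch; this sketch keeps exactly one.
    """
    Hp, Wp, _ = patch_logits.shape
    # Sigmoid squashes raw offsets into [0, 1] within each patch.
    offsets = 1.0 / (1.0 + np.exp(-patch_logits))
    # Patch-grid origins in pixel coordinates.
    ys, xs = np.meshgrid(np.arange(Hp), np.arange(Wp), indexing="ij")
    origins = np.stack([xs, ys], axis=-1) * patch_size
    # Continuous centers: primitives are no longer snapped to the grid.
    return origins + offsets * patch_size

# Zero logits place each primitive at its patch center (offset 0.5).
centers = offgrid_centers(np.zeros((4, 4, 2)))
```

Because the sigmoid is differentiable, the photometric rendering loss can move each primitive smoothly within its patch, which is what lets training allocate fewer primitives where the image is flat and denser ones near fine detail.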
📝 Abstract
Feed-forward 3D Gaussian Splatting (3DGS) models enable real-time scene generation but are hindered by suboptimal pixel-aligned primitive placement, which relies on a dense, rigid grid and limits both quality and efficiency. We introduce a new feed-forward architecture that detects 3D Gaussian primitives at a sub-pixel level, replacing the pixel grid with an adaptive, "Off The Grid" distribution. Inspired by keypoint detection, our multi-resolution decoder learns to distribute primitives across image patches. This module is trained end-to-end with a 3D reconstruction backbone using self-supervised learning. Our resulting pose-free model generates photorealistic scenes in seconds, achieving state-of-the-art novel view synthesis for feed-forward models. It outperforms competitors while using far fewer primitives, demonstrating a more accurate and efficient allocation that captures fine details and reduces artifacts. Moreover, we observe that by learning to render 3D Gaussians, our 3D reconstruction backbone improves camera pose estimation, suggesting opportunities to train these foundational models without labels.