SPFSplatV2: Efficient Self-Supervised Pose-Free 3D Gaussian Splatting from Sparse Views

📅 2025-09-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses 3D Gaussian Splatting reconstruction from sparse multi-view images without ground-truth camera-pose supervision. The authors propose a self-supervised, end-to-end framework that jointly predicts 3D Gaussian primitives and camera poses from a shared-weight feature backbone; introduce a mask-guided attention mechanism to enhance cross-view feature alignment; and design a reprojection loss with pixel-level alignment constraints so that geometry and poses are co-estimated in a canonical space. The method requires no pose labels, is compatible with diverse reconstruction architectures, and improves cross-domain generalization and data scalability. It achieves state-of-the-art performance on both in-domain and out-of-domain novel view synthesis, excelling under extreme viewpoint variations and low image overlap, and outperforms existing geometry-supervised approaches by a notable margin.

📝 Abstract
We introduce SPFSplatV2, an efficient feed-forward framework for 3D Gaussian splatting from sparse multi-view images, requiring no ground-truth poses during training and inference. It employs a shared feature extraction backbone, enabling simultaneous prediction of 3D Gaussian primitives and camera poses in a canonical space from unposed inputs. A masked attention mechanism is introduced to efficiently estimate target poses during training, while a reprojection loss enforces pixel-aligned Gaussian primitives, providing stronger geometric constraints. We further demonstrate the compatibility of our training framework with different reconstruction architectures, resulting in two model variants. Remarkably, despite the absence of pose supervision, our method achieves state-of-the-art performance in both in-domain and out-of-domain novel view synthesis, even under extreme viewpoint changes and limited image overlap, and surpasses recent methods that rely on geometric supervision for relative pose estimation. By eliminating dependence on ground-truth poses, our method offers the scalability to leverage larger and more diverse datasets. Code and pretrained models will be available on our project page: https://ranrhuang.github.io/spfsplatv2/.
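The masked attention used for target-pose estimation can be illustrated with a minimal sketch. The mask layout below is an assumption about one plausible arrangement (target-pose tokens read context-view features one-way, so context tokens never see target tokens and the reconstruction branch is unaffected), not the paper's exact design; `masked_attention` and `pose_token_mask` are illustrative names.

```python
import numpy as np

def masked_attention(Q, K, V, mask):
    """Scaled dot-product attention with a boolean mask.

    mask[i, j] is True where query token i may attend to key token j.
    """
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    scores = np.where(mask, scores, -1e9)           # block disallowed pairs
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)              # softmax over allowed keys
    return w @ V

def pose_token_mask(n_ctx, n_tgt):
    """Assumed token layout: [context tokens | target-pose tokens].

    Context tokens attend only among themselves; target-pose tokens
    attend to context features, letting target poses be estimated
    during training without leaking into the reconstruction path.
    """
    n = n_ctx + n_tgt
    mask = np.zeros((n, n), dtype=bool)
    mask[:n_ctx, :n_ctx] = True    # context <-> context
    mask[n_ctx:, :n_ctx] = True    # target -> context (one-way)
    return mask
```

Because no token attends to the target columns, perturbing a target token's value leaves every output row for the context tokens unchanged, which is the efficiency point: target views can be dropped at inference with no change to the reconstruction.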
Problem

Research questions and friction points this paper is trying to address.

Reconstructing 3D scenes from sparse multi-view images without pose supervision
Simultaneously predicting 3D Gaussian primitives and camera poses from unposed inputs
Achieving state-of-the-art novel view synthesis under extreme viewpoint changes
Innovation

Methods, ideas, or system contributions that make the work stand out.

Feed-forward framework predicting 3D Gaussians and camera poses simultaneously
Masked attention mechanism for efficient target pose estimation
Reprojection loss enforcing pixel-aligned Gaussian primitives with geometric constraints
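The reprojection constraint in the last bullet can be sketched as follows: each Gaussian is predicted from a specific pixel, so projecting its 3D center through the predicted camera pose should land back on that pixel. This is a minimal illustration under assumed conventions (known intrinsics `K`, a 4x4 world-to-camera pose, an L1 penalty); the function name and exact penalty are illustrative, not the paper's formulation.

```python
import numpy as np

def reprojection_loss(centers, K, T_cam_from_world, pixels):
    """Mean L1 error between projected Gaussian centers and their anchor pixels.

    centers:           (N, 3) predicted Gaussian means in canonical/world space
    K:                 (3, 3) camera intrinsics
    T_cam_from_world:  (4, 4) predicted camera pose (world -> camera)
    pixels:            (N, 2) pixel each Gaussian was predicted from
    """
    N = centers.shape[0]
    homo = np.concatenate([centers, np.ones((N, 1))], axis=1)  # homogeneous coords
    cam = (T_cam_from_world @ homo.T).T[:, :3]                 # into camera frame
    proj = (K @ cam.T).T
    uv = proj[:, :2] / np.clip(proj[:, 2:3], 1e-6, None)       # perspective divide
    return np.abs(uv - pixels).mean()                          # L1 reprojection error
```

Driving this loss to zero ties the predicted geometry and the predicted poses together: neither can drift independently, which is what supplies the geometric constraint without ground-truth poses.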