PFGS: Pose-Fused 3D Gaussian Splatting for Complete Multi-Pose Object Reconstruction

📅 2025-10-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing 3D Gaussian Splatting (3DGS) methods rely on images of a single static pose, leading to incomplete geometry, particularly in occluded regions. This work proposes a multi-pose, image-based framework for complete object reconstruction. The method introduces: (1) a pose-aware global-local registration and fusion strategy that jointly optimizes cross-view geometric consistency and semantic alignment; (2) foundation-model-driven cross-pose feature matching that suppresses background interference and reduces memory overhead; and (3) per-view pose refinement coupled with iterative fusion of the Gaussian representation. Quantitatively, the approach outperforms state-of-the-art methods, including GSplat and Tetra-3D, across multiple benchmarks. Qualitatively, it significantly improves occlusion completion and surface fidelity. To the authors' knowledge, this is the first method to enable high-completeness, high-fidelity 3DGS reconstruction from multi-pose inputs.
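The summary's registration step is described only at a high level. As a hedged illustration of the rigid part of global registration, matched Gaussian centers from two poses can be aligned with the standard Kabsch/Procrustes algorithm; the function and variable names below are illustrative, not taken from the paper:

```python
import numpy as np

def kabsch_align(src, dst):
    """Estimate the rigid transform (R, t) mapping src points onto dst.

    src, dst: (N, 3) arrays of corresponding 3D points, e.g. matched
    Gaussian centers from an auxiliary pose and the main pose.
    """
    src_c = src.mean(axis=0)
    dst_c = dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)       # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t
```

In practice the paper pairs a global alignment like this with local refinement and semantic cues; this sketch covers only the closed-form rigid fit.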

📝 Abstract
Recent advances in 3D Gaussian Splatting (3DGS) have enabled high-quality, real-time novel-view synthesis from multi-view images. However, most existing methods assume the object is captured in a single, static pose, resulting in incomplete reconstructions that miss occluded or self-occluded regions. We introduce PFGS, a pose-aware 3DGS framework that addresses the practical challenge of reconstructing complete objects from multi-pose image captures. Given images of an object in one main pose and several auxiliary poses, PFGS iteratively fuses each auxiliary set into a unified 3DGS representation of the main pose. Our pose-aware fusion strategy combines global and local registration to merge views effectively and refine the 3DGS model. While recent advances in 3D foundation models have improved registration robustness and efficiency, they remain limited by high memory demands and suboptimal accuracy. PFGS overcomes these challenges by incorporating foundation models more selectively into the registration process: background features drive per-pose camera pose estimation, while foundation models handle cross-pose registration. This design captures the strengths of both approaches while resolving background-inconsistency issues. Experimental results demonstrate that PFGS consistently outperforms strong baselines in both qualitative and quantitative evaluations, producing more complete reconstructions and higher-fidelity 3DGS models.
Problem

Research questions and friction points this paper is trying to address.

Single static-pose captures miss occluded and self-occluded regions
Auxiliary-pose image sets must be registered and fused into one consistent 3DGS model
Foundation-model registration alone suffers from high memory demands and suboptimal accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Pose-aware 3DGS framework for multi-pose reconstruction
Iterative fusion of auxiliary poses into unified representation
Leverages foundation models for cross-pose registration
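The iterative fusion named above can be sketched as a loop that registers each auxiliary-pose set against the main-pose model and merges it in. This is a minimal sketch assuming centers-only Gaussians and precomputed correspondences (e.g. from cross-pose feature matching); all names are hypothetical, not from the paper:

```python
import numpy as np

def fuse_pose_sets(main_centers, aux_pose_sets, register):
    """Iteratively fold auxiliary-pose Gaussians into the main-pose model.

    main_centers:  (N, 3) Gaussian centers in the main-pose frame.
    aux_pose_sets: list of (aux_centers, aux_corr, main_corr) tuples, where
                   aux_corr/main_corr are matched (M, 3) point pairs.
    register:      callable (src, dst) -> (R, t) estimating the rigid
                   transform that maps src points onto dst.
    """
    fused = np.asarray(main_centers, dtype=float)
    for aux_centers, aux_corr, main_corr in aux_pose_sets:
        R, t = register(aux_corr, main_corr)        # align via correspondences
        moved = np.asarray(aux_centers) @ R.T + t   # bring into main-pose frame
        fused = np.vstack([fused, moved])           # merge into unified model
    return fused
```

The actual method additionally refines per-view camera poses and re-optimizes the Gaussian representation after each merge; this sketch shows only the register-then-merge skeleton.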