🤖 AI Summary
Point cloud completion faces significant challenges under severe occlusion, which leads to structural incompleteness and topological inconsistency. To address this, we propose a novel Completion-by-Correction paradigm that departs from conventional inpainting-based generative completion. Instead of synthesizing missing structures from scratch, we leverage pre-trained image-to-3D generative models to provide topology-complete priors and perform progressive structural correction of the partial observation in feature space. Our method, PGNet, comprises a multi-stage dual-encoder architecture, a hierarchical feature correction module, and a cross-modal RGB-point cloud alignment mechanism. On ShapeNetViPC, it achieves a 23.5% reduction in Chamfer Distance and a 7.1% improvement in F-score over state-of-the-art methods. This work marks a shift from unconstrained generation to topology-aware refinement, significantly improving both the topological plausibility and geometric fidelity of completed point clouds.
📝 Abstract
Point cloud completion aims to reconstruct complete 3D shapes from partial observations, which is a challenging problem due to severe occlusions and missing geometry. Despite recent advances in multimodal techniques that leverage complementary RGB images to compensate for missing geometry, most methods still follow a Completion-by-Inpainting paradigm, synthesizing missing structures from fused latent features. We empirically show that this paradigm often results in structural inconsistencies and topological artifacts due to limited geometric and semantic constraints. To address this, we rethink the task and propose a more robust paradigm, termed Completion-by-Correction, which begins with a topologically complete shape prior generated by a pretrained image-to-3D model and performs feature-space correction to align it with the partial observation. This paradigm shifts completion from unconstrained synthesis to guided refinement, enabling structurally consistent and observation-aligned reconstruction. Building upon this paradigm, we introduce PGNet, a multi-stage framework that conducts dual-feature encoding to ground the generative prior, synthesizes a coarse yet structurally aligned scaffold, and progressively refines geometric details via hierarchical correction. Experiments on the ShapeNetViPC dataset demonstrate the superiority of PGNet over state-of-the-art baselines in terms of average Chamfer Distance (-23.5%) and F-score (+7.1%).
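For reference, the two evaluation metrics reported above can be sketched as follows. This is a minimal NumPy sketch, not the paper's evaluation code; conventions vary across benchmarks (squared vs. unsquared distances, per-direction averaging, and the F-score threshold `tau`, which is an assumed parameter here).

```python
import numpy as np

def chamfer_distance(p: np.ndarray, q: np.ndarray) -> float:
    """Symmetric Chamfer Distance between point sets p (N, 3) and q (M, 3).

    Uses squared nearest-neighbor distances, averaged in both directions.
    """
    # Pairwise squared distances via broadcasting: shape (N, M).
    d2 = np.sum((p[:, None, :] - q[None, :, :]) ** 2, axis=-1)
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

def f_score(p: np.ndarray, q: np.ndarray, tau: float = 0.01) -> float:
    """F-score at distance threshold tau (harmonic mean of precision/recall)."""
    d2 = np.sum((p[:, None, :] - q[None, :, :]) ** 2, axis=-1)
    # Precision: fraction of predicted points within tau of the ground truth.
    precision = (np.sqrt(d2.min(axis=1)) < tau).mean()
    # Recall: fraction of ground-truth points within tau of the prediction.
    recall = (np.sqrt(d2.min(axis=0)) < tau).mean()
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

Note that the brute-force pairwise distance matrix is O(N·M) in memory; evaluation pipelines typically use a KD-tree for nearest-neighbor queries on dense point clouds.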