🤖 AI Summary
Existing unpaired point cloud completion methods rely on class-specific training and generalize poorly. This paper proposes RefComp, the first unified framework supporting both class-aware and class-agnostic training for unpaired completion, formulating completion as a shape translation task in latent space. Key contributions: (1) retrieved partial-complete point cloud pairs serve as reference guidance; (2) a shared-weight dual-branch network coupled with a Latent Shape Fusion Module (LSFM) enables cross-category knowledge transfer; and (3) high-fidelity completion is achieved solely via latent feature space manipulation, without supervision from complete point clouds of the target shape. Experiments demonstrate state-of-the-art performance in the class-aware setting on both synthetic and real-world datasets, and competitive results in the class-agnostic setting, highlighting strong generalization capability.
📝 Abstract
The unpaired point cloud completion task aims to complete a partial point cloud using models trained without paired ground truth. Existing unpaired point cloud completion methods are class-aware, i.e., a separate model is needed for each object class. Since they have limited generalization capabilities, these methods perform poorly in real-world scenarios when confronted with a wide range of point clouds of generic 3D objects. In this paper, we propose a novel unpaired point cloud completion framework, namely the Reference-guided Completion (RefComp) framework, which attains strong performance in both the class-aware and class-agnostic training settings. The RefComp framework transforms the unpaired completion problem into a shape translation problem, which is solved in the latent feature space of the partial point clouds. To this end, we introduce the use of partial-complete point cloud pairs, which are retrieved by using the partial point cloud to be completed as a template. These point cloud pairs are used as reference data to guide the completion process. Our RefComp framework uses a reference branch and a target branch with shared parameters for shape fusion and shape translation via a Latent Shape Fusion Module (LSFM) to enhance the structural features along the completion pipeline. Extensive experiments demonstrate that the RefComp framework achieves not only state-of-the-art performance in the class-aware training setting but also competitive results in the class-agnostic training setting on both virtual scans and real-world datasets.
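The abstract describes a dual-branch pipeline: a shared-parameter encoder processes both the partial target and a retrieved reference pair, and an LSFM fuses their latent codes before decoding. The sketch below illustrates only this data flow with NumPy; the dimensions, the max-pool encoder, the gated fusion, and the linear decoder are all illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not specified in the abstract).
N_PTS, D_IN, D_LAT = 256, 3, 64

# Shared weights: the SAME linear map encodes both branches,
# standing in for the paper's shared-parameter dual-branch network.
W_enc = rng.standard_normal((D_IN, D_LAT)) * 0.1

def encode(pts):
    # Per-point projection followed by max-pooling over points
    # yields a single latent shape code (an assumed design).
    return (pts @ W_enc).max(axis=0)

def lsfm(z_target, z_ref, gate=0.5):
    # Stand-in for the Latent Shape Fusion Module: blend the target
    # latent with the retrieved reference latent (fixed gate here).
    return gate * z_target + (1.0 - gate) * z_ref

# Inputs: the partial shape to complete and a retrieved complete
# reference shape (random stand-ins for real point clouds).
partial_target = rng.standard_normal((N_PTS, D_IN))
retrieved_reference = rng.standard_normal((N_PTS, D_IN))

z_t = encode(partial_target)       # target branch
z_r = encode(retrieved_reference)  # reference branch (same weights)
z_fused = lsfm(z_t, z_r)

# Hypothetical decoder: latent code -> completed point cloud.
W_dec = rng.standard_normal((D_LAT, N_PTS * D_IN)) * 0.1
completed = (z_fused @ W_dec).reshape(N_PTS, D_IN)
print(completed.shape)  # (256, 3)
```

In a real implementation the encoder, LSFM, and decoder would be trained networks; the key structural point mirrored here is that both branches share one set of encoder weights and interact only through latent-space fusion.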