🤖 AI Summary
To address the incompleteness of single-view point clouds from real-world scans caused by occlusion, this paper proposes a self-supervised 3D reconstruction method that requires no complete ground-truth annotations. The method introduces three key contributions: (1) a novel pattern retrieval mechanism that jointly leverages region-level and category-level geometric similarity to strengthen prior modeling of missing regions; (2) a density-aware anisotropic radius estimation strategy to improve implicit surface rendering; and (3) the first multi-view adversarial learning framework grounded in single-view depth maps, augmented with self-supervised geometric consistency constraints to enhance reconstruction robustness. Evaluated on multiple benchmarks, our approach significantly outperforms existing self-supervised methods and achieves performance competitive with certain unpaired supervised approaches. The source code is publicly available.
📝 Abstract
In real-world scenarios, scanned point clouds are often incomplete due to occlusion. Self-supervised and weakly-supervised point cloud completion aim to reconstruct the missing regions of these incomplete objects without supervision from complete ground truth. Current methods either rely on multiple views of partial observations for supervision or overlook the intrinsic geometric similarity that can be identified in and exploited from the given partial point clouds. In this paper, we propose MAL-UPC, a framework that effectively leverages both region-level and category-specific geometric similarities to complete missing structures. MAL-UPC requires no complete 3D supervision, only single-view partial observations in the training set. Specifically, we first introduce a Pattern Retrieval Network to retrieve similar position and curvature patterns between the partial input and the predicted shape, and then leverage these similarities to densify and refine the reconstructed results. Additionally, we render the reconstructed complete shape into multi-view depth maps and design an adversarial learning module that learns the geometry of the target shape from category-specific single-view depth images of the partial point clouds in the training set. To achieve anisotropic rendering, we design a density-aware radius estimation algorithm that improves the quality of the rendered images. MAL-UPC outperforms current state-of-the-art self-supervised methods and even some unpaired approaches. The source code will be made publicly available at https://github.com/ltwu6/malspc.
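The abstract does not give details of the density-aware anisotropic radius estimation, but the general idea can be sketched as follows: each point's splat radius is derived from its local neighbourhood, with denser regions receiving smaller radii and the anisotropy taken from the neighbourhood's principal axes. This is an illustrative NumPy sketch under those assumptions, not the paper's actual algorithm; the function name and all parameters are hypothetical.

```python
import numpy as np

def density_aware_radii(points, k=8):
    """Illustrative per-point anisotropic radius estimate.

    For each point, its k nearest neighbours define a local
    neighbourhood; the eigenvalues of that neighbourhood's covariance
    give anisotropic axis extents, which are scaled by the mean
    neighbour distance so that denser regions get smaller radii.
    Hypothetical sketch -- not the paper's published method.
    """
    n = points.shape[0]
    # Brute-force pairwise distances (use a KD-tree for large clouds).
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    idx = np.argsort(d2, axis=1)[:, 1:k + 1]   # skip self at index 0
    radii = np.empty((n, 3))
    for i in range(n):
        nbrs = points[idx[i]] - points[i]      # centred neighbourhood
        cov = nbrs.T @ nbrs / k                # local covariance
        evals = np.linalg.eigvalsh(cov)        # ascending eigenvalues
        scale = np.sqrt(d2[i, idx[i]]).mean()  # local density proxy
        # Largest axis gets the full scale; minor axes shrink with it.
        radii[i] = scale * np.sqrt(np.maximum(evals, 1e-12) / evals.max())
    return radii
```

Because the radii are tied to local neighbour distances, uniformly rescaling the cloud rescales the radii by the same factor, which keeps the rendered splats consistent across object sizes.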