Unsupervised 3D Point Cloud Completion via Multi-view Adversarial Learning

📅 2024-07-13
📈 Citations: 3
Influential: 1
📄 PDF
🤖 AI Summary
To address the incompleteness of single-view point clouds from real-world scans caused by occlusion, this paper proposes a self-supervised point cloud completion method that requires no complete ground-truth annotations. The method introduces three key contributions: (1) a novel pattern retrieval mechanism that jointly leverages region-level and category-level geometric similarity to strengthen prior modeling of missing regions; (2) a density-aware anisotropic radius estimation strategy that improves the quality of rendered depth maps; and (3) the first multi-view adversarial learning framework grounded in single-view depth maps, augmented with self-supervised geometric consistency constraints to enhance reconstruction robustness. Evaluated on multiple benchmarks, the approach significantly outperforms existing self-supervised methods and achieves performance competitive with certain unpaired supervised approaches. The source code is publicly available.

📝 Abstract
In real-world scenarios, scanned point clouds are often incomplete due to occlusion issues. The tasks of self-supervised and weakly-supervised point cloud completion involve reconstructing missing regions of these incomplete objects without the supervision of complete ground truth. Current methods either rely on multiple views of partial observations for supervision or overlook the intrinsic geometric similarity that can be identified and utilized from the given partial point clouds. In this paper, we propose MAL-UPC, a framework that effectively leverages both region-level and category-specific geometric similarities to complete missing structures. Our MAL-UPC does not require any complete 3D supervision and only necessitates single-view partial observations in the training set. Specifically, we first introduce a Pattern Retrieval Network to retrieve similar position and curvature patterns between the partial input and the predicted shape, then leverage these similarities to densify and refine the reconstructed results. Additionally, we render the reconstructed complete shape into multi-view depth maps and design an adversarial learning module to learn the geometry of the target shape from category-specific single-view depth images of the partial point clouds in the training set. To achieve anisotropic rendering, we design a density-aware radius estimation algorithm to improve the quality of the rendered images. Our MAL-UPC outperforms current state-of-the-art self-supervised methods and even some unpaired approaches. We will make the source code publicly available at https://github.com/ltwu6/malspc.
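The density-aware radius idea from the abstract can be illustrated with a minimal sketch. Note this version is a simplification: it is isotropic (one scalar radius per point, whereas the paper's strategy is anisotropic), and the function name and parameters `k` and `scale` are hypothetical, not the paper's actual settings. The intent it captures is that a point's rendering footprint should shrink in dense regions (keeping depth maps sharp) and grow in sparse regions (avoiding holes).

```python
import numpy as np

def density_aware_radii(points, k=8, scale=1.0):
    """Estimate a per-point splat radius from local point density.

    Illustrative sketch only: the radius is the mean distance to the
    k nearest neighbours, scaled by `scale`, so dense regions get
    small radii and sparse regions get large ones.
    """
    # Pairwise distances; fine for small clouds, use a KD-tree at scale.
    diff = points[:, None, :] - points[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    # Mean distance to the k nearest neighbours, excluding self (column 0).
    knn = np.sort(dist, axis=1)[:, 1:k + 1]
    return scale * knn.mean(axis=1)
```

On a cloud mixing a tight cluster and a spread-out one, the tight cluster's points receive smaller radii, which is the behavior an adaptive renderer needs.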
Problem

Research questions and friction points this paper is trying to address.

Reconstructs incomplete 3D point clouds without complete ground truth supervision
Leverages geometric similarities from partial point clouds for completion
Uses adversarial learning with multi-view depth maps for shape refinement
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-view adversarial learning for point cloud completion
Pattern Retrieval Network for geometric similarity
Density-aware radius estimation for anisotropic rendering
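The multi-view adversarial component can be sketched as a standard non-saturating GAN objective over discriminator logits. This is an illustrative stand-in, not the paper's actual loss or discriminator architecture: here `d_real` would be discriminator logits for single-view depth maps of real partial scans in the training set, and `d_fake` logits for depth maps rendered from the predicted complete shape.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def adversarial_losses(d_real, d_fake):
    """Non-saturating GAN losses on discriminator logits.

    The discriminator is pushed to score real depth maps high and
    rendered ones low; the generator (the completion network, through
    the differentiable renderer) is pushed to make rendered depth
    maps score high. Sketch only; not the paper's exact objective.
    """
    eps = 1e-7  # guard against log(0)
    d_loss = -np.mean(np.log(sigmoid(d_real) + eps)
                      + np.log(1.0 - sigmoid(d_fake) + eps))
    g_loss = -np.mean(np.log(sigmoid(d_fake) + eps))
    return d_loss, g_loss
```

With a confident discriminator (large positive logits on real maps, large negative on rendered ones) the discriminator loss is near zero while the generator loss is large, which is the gradient signal that drives the completion network to produce more plausible geometry.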
Lintai Wu
Department of Computer Science, City University of Hong Kong, Hong Kong SAR, and also with the Bio-Computing Research Center, Harbin Institute of Technology, Shenzhen, Shenzhen 518055, Guangdong, China
Xianjing Cheng
School of Computer Science and Technology, Harbin Institute of Technology, Shenzhen, Shenzhen 518055, Guangdong, China
Junhui Hou
Department of Computer Science, City University of Hong Kong
Neural Spatial Computing
Yong Xu
Bio-Computing Research Center, Harbin Institute of Technology, Shenzhen
Huanqiang Zeng
Huaqiao University, China
Image Processing, Video Coding, Computer Vision