PCDreamer: Point Cloud Completion Through Multi-view Diffusion Priors

📅 2024-11-28
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
Traditional point cloud completion methods suffer from limited local feature representation and a vast solution space, hindering accurate recovery of complex topological structures; meanwhile, multimodal approaches that rely on paired image–point cloud data face practical acquisition challenges. This paper proposes PCDreamer, the first method to leverage cross-view geometric priors from multi-view diffusion models, enabling consistent novel-view image generation for target shapes without requiring real paired images. The architecture features two core modules: (i) Shape Fusion, which jointly encodes and aligns point cloud and image features across modalities; and (ii) Shape Consolidation, which employs learnable point filtering to suppress noise induced by view inconsistency. Evaluated on multiple benchmarks, PCDreamer achieves state-of-the-art performance and markedly improves fine-grained structural reconstruction, particularly on shapes with sparse missing regions and high topological complexity, demonstrating superior robustness.

📝 Abstract
This paper presents PCDreamer, a novel method for point cloud completion. Traditional methods typically extract features from partial point clouds to predict missing regions, but the large solution space often leads to unsatisfactory results. More recent approaches have started to use images as extra guidance, effectively improving performance, but obtaining paired data of images and partial point clouds is challenging in practice. To overcome these limitations, we harness the relatively view-consistent multi-view diffusion priors within large models to generate novel views of the desired shape. The resulting image set encodes both global and local shape cues, which is especially beneficial for shape completion. To fully exploit the priors, we have designed a shape fusion module for producing an initial complete shape from multi-modality input (i.e., images and point clouds), and a follow-up shape consolidation module to obtain the final complete shape by discarding unreliable points introduced by the inconsistency of the diffusion priors. Extensive experimental results demonstrate our superior performance, especially in recovering fine details.
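The pipeline the abstract describes (generate multi-view images from diffusion priors, fuse them with the partial cloud, then consolidate) can be sketched at a high level. The following is a minimal, hypothetical illustration using only numpy, assuming each generated view comes with a depth map that can be back-projected into 3D; the function names, the pinhole camera model, and the naive concatenation-based fusion are all assumptions for illustration, not the paper's actual learned modules.

```python
import numpy as np

def backproject_depth(depth, K, cam_to_world):
    """Back-project a depth map into world-space 3D points (pinhole model).

    depth:        (H, W) depth values; zeros mark empty pixels
    K:            (3, 3) camera intrinsics
    cam_to_world: (4, 4) camera-to-world transform
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    z = depth.ravel()
    # Rays in camera coordinates: x = (u - cx) * z / fx, y = (v - cy) * z / fy
    x = (u.ravel() - cx) * z / fx
    y = (v.ravel() - cy) * z / fy
    pts_cam = np.stack([x, y, z, np.ones_like(z)], axis=0)  # (4, N) homogeneous
    pts_world = (cam_to_world @ pts_cam)[:3].T              # (N, 3)
    return pts_world[z > 0]                                  # drop empty pixels

def fuse_views(partial_points, depth_maps, Ks, poses):
    """Naive 'shape fusion' stand-in: merge the partial cloud with
    points back-projected from every generated view."""
    clouds = [partial_points]
    for depth, K, pose in zip(depth_maps, Ks, poses):
        clouds.append(backproject_depth(depth, K, pose))
    return np.concatenate(clouds, axis=0)
```

In the actual method, fusion is a learned cross-modal module rather than a geometric merge; the sketch only shows how multi-view images supply additional 3D evidence for the missing regions.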
Problem

Research questions and friction points this paper is trying to address.

Overcomes limitations in point cloud completion using multi-view diffusion priors.
Addresses challenges in obtaining paired image and point cloud data.
Improves recovery of fine details in shape completion tasks.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses multi-view diffusion priors for shape completion.
Integrates images and point clouds via a fusion module.
Refines the shape by discarding unreliable diffusion-induced points.
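The last point, discarding unreliable points, can be illustrated with a simple confidence-based filter. This is a hedged sketch, not the paper's learnable module: here the per-point reliability scores are given as input, whereas PCDreamer predicts them with a learned network, and the threshold value is likewise an assumption.

```python
import numpy as np

def consolidate(points, confidence, threshold=0.5):
    """Keep only points whose predicted reliability exceeds the threshold.

    points:     (N, 3) candidate points from the fused shape
    confidence: (N,) per-point reliability scores in [0, 1]
    """
    mask = confidence > threshold
    return points[mask]

# Toy usage: three reliable points plus two outliers standing in for noise
# introduced by view-inconsistent diffusion outputs (scores are made up).
pts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0],
                [9, 9, 9], [8, 8, 8]], dtype=float)
conf = np.array([0.9, 0.8, 0.7, 0.2, 0.1])
clean = consolidate(pts, conf)  # keeps the first three points
```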