PointDreamer: Zero-shot 3D Textured Mesh Reconstruction from Colored Point Cloud

📅 2024-06-22
🤖 AI Summary
This work addresses the challenging problem of zero-shot, high-fidelity textured mesh reconstruction from colored point clouds. We propose a novel "project-inpaint-unproject" paradigm that, for the first time, bridges 3D point clouds with pretrained 2D diffusion models, without any 3D supervision or additional training. Our method first projects the input point cloud into sparse multi-view images; it then leverages diffusion-based inpainting to restore missing or blurry regions; finally, it reconstructs geometry adaptively and unprojects textures via a Non-Border-First strategy to mitigate inter-view texture boundary inconsistencies. Evaluated on both synthetic and real-world scanned datasets, our approach achieves state-of-the-art performance: LPIPS improves by 30% (from 0.118 to 0.068), yielding sharp, consistent textures and strong robustness to sparse and noisy inputs.

📝 Abstract
Reconstructing textured meshes from colored point clouds is an important but challenging task. Most existing methods yield blurry-looking textures or rely on 3D training data that are hard to acquire. To address this, we propose PointDreamer, a novel framework for textured mesh reconstruction from colored point clouds via diffusion-based 2D inpainting. Specifically, we first reconstruct an untextured mesh. Next, we project the input point cloud into 2D space to generate sparse multi-view images, and then inpaint empty pixels using a pre-trained 2D diffusion model. After that, we unproject the colors of the inpainted dense images onto the untextured mesh, thus obtaining the final textured mesh. This project-inpaint-unproject pipeline bridges the gap between 3D point clouds and 2D diffusion models for the first time. Thanks to the powerful 2D diffusion model pre-trained on extensive 2D data, PointDreamer reconstructs clear, high-quality textures with high robustness to sparse or noisy input. It is also zero-shot, requiring no extra training. In addition, we design a Non-Border-First unprojection strategy to address the border-area inconsistency issue, which is little explored yet commonly occurs in methods that generate 3D textures from multi-view images. Extensive qualitative and quantitative experiments on various synthetic and real-scanned datasets show the SoTA performance of PointDreamer, significantly outperforming baseline methods with a 30% improvement in LPIPS score (from 0.118 to 0.068). Code at: https://github.com/YuQiao0303/PointDreamer.
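The project-inpaint-unproject idea in the abstract can be sketched in miniature. The snippet below is a toy illustration, not the authors' released code: it assumes a single orthographic view, and it substitutes a brute-force nearest-neighbor fill where PointDreamer uses a pretrained 2D diffusion inpainter. All function names here are hypothetical.

```python
import numpy as np

def project_points(points, colors, res=64):
    """Orthographically project colored 3D points (N, 3) onto the XY plane,
    producing a sparse (res, res, 3) image plus a mask of filled pixels."""
    img = np.zeros((res, res, 3), dtype=np.float32)
    mask = np.zeros((res, res), dtype=bool)
    xy = points[:, :2]
    lo, hi = xy.min(axis=0), xy.max(axis=0)
    # Normalize XY coordinates into pixel indices.
    uv = ((xy - lo) / np.maximum(hi - lo, 1e-8) * (res - 1)).astype(int)
    img[uv[:, 1], uv[:, 0]] = colors
    mask[uv[:, 1], uv[:, 0]] = True
    return img, mask

def naive_inpaint(img, mask):
    """Stand-in for the diffusion-based inpainting step: fill each empty
    pixel with the color of its nearest (Manhattan distance) filled pixel."""
    filled = np.argwhere(mask)           # (K, 2) coordinates of known pixels
    out = img.copy()
    for y, x in np.argwhere(~mask):
        d = np.abs(filled - [y, x]).sum(axis=1)
        ny, nx = filled[d.argmin()]
        out[y, x] = img[ny, nx]
    return out
```

In the actual pipeline, the dense inpainted images from several viewpoints would then be unprojected back onto the untextured mesh, with the Non-Border-First strategy deciding which view colors each surface point near view boundaries.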
Problem

Research questions and friction points this paper is trying to address.

3D Reconstruction
Color Consistency
Image Clarity
Innovation

Methods, ideas, or system contributions that make the work stand out.

Point-cloud-to-2D-image transformation
Texture consistency enhancement
Quality improvement under poor point clouds
Qiao Yu
Huazhong University of Science and Technology, Wuhan, China
Xianzhi Li
Huazhong University of Science and Technology
3D vision, geometry processing
Yuan Tang
Huazhong University of Science and Technology, Wuhan, China
Xu Han
Huazhong University of Science and Technology, Wuhan, China
Jinfeng Xu
Huazhong University of Science and Technology, Wuhan, China
Long Hu
Associate Professor of Computer Science, Huazhong University of Science and Technology
Edge Computing, Big Data, Affective Computing, Deep Reinforcement Learning
Min Chen
School of Computer Science and Engineering, South China University of Technology, Guangzhou, China, and Pazhou Laboratory, Guangzhou, China