Artist-Created Mesh Generation from Raw Observation

📅 2025-09-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
Real-world LiDAR or RGB-D scans often yield noisy, incomplete point clouds, making direct generation of artist-style 3D meshes challenging. Method: This paper proposes an end-to-end generative framework that reformulates 3D point cloud completion as a 2D image inpainting task. The pipeline comprises lightweight point cloud preprocessing, multi-view depth map encoding, diffusion-based image inpainting, and differentiable rendering coupled with mesh optimization, enabling reconstruction from raw observations to high-fidelity meshes ready for animation and texturing. Contribution/Results: Departing from conventional multi-stage pipelines, the method shows robustness to occlusion, sparsity, and noise in preliminary experiments on ShapeNet, improving the practicality of mesh modeling from real-world data and producing clean, stylized 3D geometry without manual intervention.
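The key to casting point cloud completion as image inpainting is rendering the partial cloud into per-view depth maps, where pixels that receive no point form the mask to be inpainted. A minimal sketch of that projection step, assuming a single orthographic front view and a normalized input cloud (illustrative choices, not the paper's actual camera model):

```python
import numpy as np

def depth_map_from_points(points, resolution=64):
    """Project a point cloud to a front-view orthographic depth map.

    points: (N, 3) array with coordinates in the [-1, 1]^3 box.
    Returns (depth, mask): depth holds the nearest z per pixel,
    mask marks empty pixels (the regions a 2D inpainter must fill).
    """
    depth = np.full((resolution, resolution), np.inf)
    # Map x, y in [-1, 1] to integer pixel indices.
    px = ((points[:, 0] + 1) / 2 * (resolution - 1)).round().astype(int)
    py = ((points[:, 1] + 1) / 2 * (resolution - 1)).round().astype(int)
    px = np.clip(px, 0, resolution - 1)
    py = np.clip(py, 0, resolution - 1)
    for x, y, z in zip(px, py, points[:, 2]):
        depth[y, x] = min(depth[y, x], z)  # keep the closest surface
    mask = ~np.isfinite(depth)             # holes: no observation
    depth[mask] = 0.0
    return depth, mask
```

In a full pipeline, several such views would be rendered from different directions and each (depth, mask) pair handed to a diffusion-based inpainting model; the sketch only covers the encoding step.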

📝 Abstract
We present an end-to-end framework for generating artist-style meshes from noisy or incomplete point clouds, such as those captured by real-world sensors like LiDAR or mobile RGB-D cameras. Artist-created meshes are crucial for commercial graphics pipelines due to their compatibility with animation and texturing tools and their efficiency in rendering. However, existing approaches often assume clean, complete inputs or rely on complex multi-stage pipelines, limiting their applicability in real-world scenarios. To address this, we propose an end-to-end method that refines the input point cloud and directly produces high-quality, artist-style meshes. At the core of our approach is a novel reformulation of 3D point cloud refinement as a 2D inpainting task, enabling the use of powerful generative models. Preliminary results on the ShapeNet dataset demonstrate the promise of our framework in producing clean, complete meshes.
Problem

Research questions and friction points this paper is trying to address.

Generating artist-style meshes from noisy point clouds
Refining incomplete 3D observations into clean meshes
Enabling direct mesh creation using 2D inpainting techniques
Innovation

Methods, ideas, or system contributions that make the work stand out.

End-to-end framework for artist-style mesh generation
Reformulates 3D refinement as 2D inpainting task
Directly produces clean meshes from noisy inputs
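To close the loop from inpainted 2D views back to 3D, the filled depth pixels can be back-projected into a densified point set that seeds the mesh optimization stage. A sketch under the same orthographic assumption as the projection above (hypothetical helper, not the paper's implementation):

```python
import numpy as np

def points_from_depth_map(depth, mask):
    """Back-project an (inpainted) orthographic depth map to 3D points.

    depth: (H, W) depth values; mask: True where a pixel is still
    empty and should be skipped. Inverts the [-1, 1] pixel mapping.
    """
    h, w = depth.shape
    ys, xs = np.nonzero(~mask)            # pixels that carry depth
    x = xs / (w - 1) * 2 - 1              # pixel index -> normalized x
    y = ys / (h - 1) * 2 - 1              # pixel index -> normalized y
    z = depth[ys, xs]
    return np.stack([x, y, z], axis=1)    # (M, 3) completed points
```

With multiple views, the per-view point sets would be transformed into a shared frame and merged before differentiable rendering refines the final mesh.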