CADDreamer: CAD object Generation from Single-view Images

📅 2025-02-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing 3D generative models struggle to produce CAD models that meet engineering requirements—namely, compactness, sharp edges, topological closure, and structured boundary representation (B-rep). This work introduces CADDreamer, the first end-to-end framework for generating semantically labeled B-rep CAD models from a single input image. Its core contributions are threefold: (1) a novel primitive-aware representation encoding CAD primitives as color channels, enabling explicit semantic grounding; (2) a multi-view diffusion model that jointly infers normal maps and semantic segmentation maps; and (3) a primitive-level geometric optimization module coupled with a topology-preserving B-rep extraction mechanism. Quantitative and qualitative evaluations demonstrate that CADDreamer generates structurally coherent, edge-sharp, and watertight CAD models, significantly outperforming state-of-the-art methods on single-image CAD reconstruction.
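The summary's first contribution, encoding CAD primitive semantics as color channels so a pre-trained image diffusion model can predict them, can be illustrated with a minimal sketch. The palette below is hypothetical (the paper's actual color assignment is not given here); it only shows the idea of mapping per-pixel primitive labels to an RGB semantic map.

```python
import numpy as np

# Hypothetical palette: one distinct color per primitive type.
# Illustrative only; not the paper's actual encoding.
PRIMITIVE_COLORS = {
    "plane":    (255, 0, 0),
    "cylinder": (0, 255, 0),
    "cone":     (0, 0, 255),
    "sphere":   (255, 255, 0),
    "torus":    (255, 0, 255),
}

def labels_to_semantic_map(label_image):
    """Convert an H x W array of primitive-type labels into an RGB semantic map."""
    h, w = label_image.shape
    out = np.zeros((h, w, 3), dtype=np.uint8)
    for name, color in PRIMITIVE_COLORS.items():
        out[label_image == name] = color  # boolean-mask assignment per label
    return out

# Toy label image: a plane region with a cylinder patch in one corner.
labels = np.full((4, 4), "plane", dtype=object)
labels[2:, 2:] = "cylinder"
semantic = labels_to_semantic_map(labels)
```

Such a color-coded map lives in the same domain as ordinary images, which is what lets the diffusion model's image priors be reused for semantic prediction.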

📝 Abstract
Diffusion-based 3D generation has made remarkable progress in recent years. However, existing 3D generative models often produce overly dense and unstructured meshes, which stand in stark contrast to the compact, structured, and sharply-edged Computer-Aided Design (CAD) models crafted by human designers. To address this gap, we introduce CADDreamer, a novel approach for generating boundary representations (B-rep) of CAD objects from a single image. CADDreamer employs a primitive-aware multi-view diffusion model that captures both local geometric details and high-level structural semantics during the generation process. By encoding primitive semantics into the color domain, the method leverages the strong priors of pre-trained diffusion models to align with well-defined primitives. This enables the inference of multi-view normal maps and semantic maps from a single image, facilitating the reconstruction of a mesh with primitive labels. Furthermore, we introduce geometric optimization techniques and topology-preserving extraction methods to mitigate noise and distortion in the generated primitives. These enhancements result in a complete and seamless B-rep of the CAD model. Experimental results demonstrate that our method effectively recovers high-quality CAD objects from single-view images. Compared to existing 3D generation techniques, the B-rep models produced by CADDreamer are compact in representation, clear in structure, sharp in edges, and watertight in topology.
Problem

Research questions and friction points this paper is trying to address.

Existing 3D generative models produce overly dense, unstructured meshes rather than compact, sharp-edged CAD models
Recovering a structured, watertight B-rep CAD model from a single-view image
Mitigating noise and distortion when extracting well-defined primitives from generated geometry
Innovation

Methods, ideas, or system contributions that make the work stand out.

Primitive-aware multi-view diffusion model
Geometric optimization and topology-preserving extraction
Single-image to B-rep CAD object generation
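The primitive-level geometric optimization above fits clean analytic surfaces to the noisy, primitive-labeled mesh regions. The paper's exact optimization is not reproduced here; as a generic sketch, the simplest case is a least-squares plane fit via SVD, where the plane normal is the singular vector of the centered points with the smallest singular value.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane fit: returns (unit normal, centroid).

    Generic sketch of primitive fitting, not the paper's exact method.
    `points` is an (N, 3) array of vertices labeled as one planar region.
    """
    centroid = points.mean(axis=0)
    # The normal is the direction of least variance of the centered points,
    # i.e. the right singular vector with the smallest singular value.
    _, _, vt = np.linalg.svd(points - centroid)
    return vt[-1], centroid

# Noisy samples from the plane z = 0.
rng = np.random.default_rng(0)
pts = np.column_stack([rng.uniform(-1, 1, (100, 2)),
                       rng.normal(0.0, 1e-3, 100)])
normal, centroid = fit_plane(pts)
```

Analogous fits for cylinders, cones, spheres, and tori, followed by intersecting the fitted surfaces, are what yield the sharp edges and watertight topology of the final B-rep.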
👥 Authors
Yuan Li, University of Texas at Dallas
Cheng Lin, The University of Hong Kong
Yuan Liu, Hong Kong University of Science and Technology
Xiaoxiao Long, Nanjing University
Chenxu Zhang, ByteDance Inc. (Computer Graphics, Computer Vision, AI)
Ningna Wang, Columbia University (Computer Graphics)
Xin Li, Texas A&M University
Wenping Wang, Texas A&M University (Computer Graphics, Geometric Computing)
Xiaohu Guo, University of Texas at Dallas (Computer Graphics, Computer Vision, Geometric Computing)