Img2CAD: Reverse Engineering 3D CAD Models from Images through VLM-Assisted Conditional Factorization

📅 2024-07-19
🏛️ arXiv.org
📈 Citations: 15
Influential: 0
📄 PDF
🤖 AI Summary
Image-to-CAD reverse engineering is highly challenging due to the representational gap between CAD’s procedural structure—comprising discrete commands and continuous geometric parameters—and the perceptual uncertainty inherent in images (e.g., lighting variations, noise). To address this, we propose a decoupled two-stage paradigm: first predicting the semantic command sequence, then regressing precise geometric parameters. We introduce the first Vision-Language Model (VLM)-guided conditional decomposition mechanism—leveraging GPT-4V—to steer structural parsing and design TrAssembler, a novel architecture enabling accurate mapping from discrete commands to continuous attributes. Furthermore, we construct ShapeNet-CAD, the first fine-grained, CAD-oriented annotated dataset for reverse engineering. Our method is the first to jointly generate editable CAD structures and parameters end-to-end from real-world images, achieving state-of-the-art geometric accuracy and editability. This work establishes a new industrial-grade paradigm for image-to-CAD modeling.

📝 Abstract
Reverse engineering 3D computer-aided design (CAD) models from images is an important task for many downstream applications including interactive editing, manufacturing, architecture, robotics, etc. The difficulty of the task lies in the vast representational disparity between the CAD output and the image input. CAD models are precise, programmatic constructs that involve sequential operations combining a discrete command structure with continuous attributes -- making them challenging to learn and optimize in an end-to-end fashion. Concurrently, input images introduce inherent challenges such as photometric variability and sensor noise, complicating the reverse-engineering process. In this work, we introduce a novel approach that conditionally factorizes the task into two sub-problems. First, we leverage large foundation models, particularly GPT-4V, to predict the global discrete base structure with semantic information. Second, we propose TrAssembler, which, conditioned on the discrete structure with semantics, predicts the continuous attribute values. To support the training of TrAssembler, we further construct an annotated CAD dataset of common objects from ShapeNet. Putting it all together, our approach and data demonstrate significant first steps towards CAD-ifying images in the wild. Our project page: https://anonymous123342.github.io/
Problem

Research questions and friction points this paper is trying to address.

Reverse engineering 3D CAD models from images
Bridging representational disparities between CAD and images
Overcoming photometric variability and sensor noise in images
Innovation

Methods, ideas, or system contributions that make the work stand out.

VLM-assisted conditional factorization
GPT-4V predicts the discrete command structure
TrAssembler predicts continuous attributes
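The conditional factorization above splits p(CAD | image) into p(commands | image) · p(parameters | commands, image). The sketch below illustrates this two-stage flow in Python; the function names, the fixed chair-like command sequence, and the placeholder parameter values are illustrative stand-ins for the paper's GPT-4V prompting and TrAssembler network, not the actual implementation.

```python
from dataclasses import dataclass
from typing import List

# Hedged sketch of the two-stage factorization:
#   p(CAD | image) = p(commands | image) * p(params | commands, image)
# Stage 1 (discrete) and stage 2 (continuous) are stubbed.

@dataclass
class CADCommand:
    op: str                 # discrete command, e.g. "sketch", "extrude"
    params: List[float]     # continuous attributes filled in by stage 2

def predict_structure(image_desc: str) -> List[str]:
    """Stage 1 stub: in the paper, a VLM (GPT-4V) maps the image to a
    semantic command sequence. Here we return a fixed chair-like program."""
    return ["sketch_seat", "extrude_seat", "sketch_leg", "extrude_leg"]

def regress_parameters(commands: List[str], image_desc: str) -> List[CADCommand]:
    """Stage 2 stub: TrAssembler, conditioned on the discrete structure,
    regresses continuous attributes. Zeros stand in for network output."""
    return [CADCommand(op=c, params=[0.0, 0.0, 0.0]) for c in commands]

def img2cad(image_desc: str) -> List[CADCommand]:
    commands = predict_structure(image_desc)         # discrete stage
    return regress_parameters(commands, image_desc)  # continuous stage

program = img2cad("photo of a wooden chair")
for cmd in program:
    print(cmd.op, cmd.params)
```

The key design choice mirrored here is that the second stage never revisits the discrete decision: command identity is fixed before any continuous regression, which sidesteps joint optimization over mixed discrete-continuous outputs.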
Yang You
Postdoc, Stanford University
3D vision · computer graphics · computational geometry
M. Uy
Stanford University
Jiaqi Han
Stanford University
R. Thomas
Stanford University
Haotong Zhang
Peking University
Suya You
USC
Computer Vision · Machine Learning · Computer Graphics · Human-Computer Interaction · Data Visualization
Leonidas J. Guibas
Stanford University