🤖 AI Summary
Image-to-CAD reverse engineering is highly challenging due to the representational gap between CAD’s procedural structure—comprising discrete commands and continuous geometric parameters—and the perceptual uncertainty inherent in images (e.g., lighting variations, noise). To address this, we propose a decoupled two-stage paradigm: first predicting the semantic command sequence, then regressing precise geometric parameters. We introduce the first Vision-Language Model (VLM)-guided conditional decomposition mechanism—leveraging GPT-4V—to steer structural parsing and design TrAssembler, a novel architecture enabling accurate mapping from discrete commands to continuous attributes. Furthermore, we construct ShapeNet-CAD, the first fine-grained, CAD-oriented annotated dataset for reverse engineering. Our method is the first to jointly generate editable CAD structures and parameters end-to-end from real-world images, achieving state-of-the-art geometric accuracy and editability. This work establishes a new industrial-grade paradigm for image-to-CAD modeling.
📝 Abstract
Reverse engineering 3D computer-aided design (CAD) models from images is an important task for many downstream applications, including interactive editing, manufacturing, architecture, and robotics. The difficulty of the task lies in the vast representational disparity between the CAD output and the image input. CAD models are precise, programmatic constructs that involve sequential operations combining a discrete command structure with continuous attributes, making them challenging to learn and optimize in an end-to-end fashion. Concurrently, input images introduce inherent challenges such as photometric variability and sensor noise, complicating the reverse engineering process. In this work, we introduce a novel approach that conditionally factorizes the task into two sub-problems. First, we leverage large foundation models, particularly GPT-4V, to predict the global discrete base structure along with its semantic information. Second, we propose TrAssembler, which, conditioned on the discrete structure and its semantics, predicts the continuous attribute values. To support the training of TrAssembler, we further construct an annotated CAD dataset of common objects from ShapeNet. Putting it all together, our approach and data demonstrate significant first steps towards CAD-ifying images in the wild. Our project page: https://anonymous123342.github.io/
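The two-stage factorization described above can be sketched in code. The snippet below is a minimal illustration only: the function names, command vocabulary, and parameter values are hypothetical stand-ins (a real system would query a VLM such as GPT-4V in stage one and run the learned TrAssembler network in stage two), but it shows the key idea of separating discrete structure prediction from continuous attribute regression.

```python
from dataclasses import dataclass

@dataclass
class CADCommand:
    op: str                # discrete command token, e.g. "sketch_circle", "extrude"
    params: tuple = ()     # continuous attributes, filled in by stage 2

def predict_structure(image_description: str) -> list[CADCommand]:
    """Stage 1 (stand-in for the VLM): map an image to a semantic
    command sequence with empty parameter slots."""
    # A real pipeline would prompt GPT-4V with the image; here we
    # hard-code a toy parse for a "mug"-like object.
    if "mug" in image_description:
        return [CADCommand("sketch_circle"), CADCommand("extrude"),
                CADCommand("sketch_handle"), CADCommand("extrude")]
    return [CADCommand("sketch_rect"), CADCommand("extrude")]

def regress_parameters(seq: list[CADCommand]) -> list[CADCommand]:
    """Stage 2 (stand-in for TrAssembler): condition on the discrete
    structure and predict continuous attribute values."""
    # Placeholder values where a learned model would regress parameters.
    defaults = {"sketch_circle": (0.0, 0.0, 1.0),       # cx, cy, radius
                "sketch_rect":   (0.0, 0.0, 2.0, 1.0),  # cx, cy, w, h
                "sketch_handle": (1.2, 0.5, 0.3),
                "extrude":       (1.5,)}                # depth
    return [CADCommand(c.op, defaults[c.op]) for c in seq]

program = regress_parameters(predict_structure("a photo of a mug"))
print([c.op for c in program])
```

Decoupling the stages this way means the hard discrete search (which commands, in what order) is handled by a model with strong semantic priors, while the downstream network only solves a conditional continuous regression problem.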