mrCAD: Multimodal Refinement of Computer-aided Designs

📅 2025-04-28
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses a key limitation of generative AI: iteratively refining CAD designs under natural-language guidance. It proposes the task of "language-guided design refinement" to bridge the gap between how humans and machines edit. Methodologically, the authors introduce the first multimodal dataset for multi-turn, human–human CAD collaboration (6,082 interaction sessions, 15,163 instruction turns), covering text-only, sketch-only, and hybrid instructions; design a data-collection protocol inspired by collaborative communication games; and establish a joint instruction–execution trajectory annotation scheme with a cross-modal evaluation framework. Key contributions: (1) the first formal definition and computational modeling of this task; (2) empirical evidence that generation and refinement instructions differ fundamentally in modality composition; and (3) release of the first benchmark supporting multi-turn, multimodal, goal-aligned CAD collaboration. Experiments show that current vision–language models (VLMs) perform significantly worse at refinement than at generation, exposing a critical capability gap.

📝 Abstract
A key feature of human collaboration is the ability to iteratively refine the concepts we have communicated. In contrast, while generative AI excels at the generation of content, it often struggles to make specific language-guided modifications of its prior outputs. To bridge the gap between how humans and machines perform edits, we present mrCAD, a dataset of multimodal instructions in a communication game. In each game, players created computer-aided designs (CADs) and refined them over several rounds to match specific target designs. Only one player, the Designer, could see the target, and they had to instruct the other player, the Maker, using text, drawing, or a combination of modalities. mrCAD consists of 6,082 communication games and 15,163 instruction-execution rounds, played between 1,092 pairs of human players. We analyze the dataset and find that generation and refinement instructions differ in their composition of drawing and text. Using the mrCAD task as a benchmark, we find that state-of-the-art VLMs are better at following generation instructions than refinement instructions. These results lay a foundation for analyzing and modeling a multimodal language of refinement that is not represented in previous datasets.
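The game structure described above (a Designer who sees the target, a Maker who executes, and multiple instruction-execution rounds per game with text, drawing, or hybrid instructions) can be sketched as a data schema. This is a minimal illustrative sketch; all class and field names are assumptions, not the dataset's actual release format.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Instruction:
    """One Designer instruction; text, drawing, or both (hypothetical fields)."""
    text: Optional[str] = None          # present for text-only or hybrid turns
    drawing_svg: Optional[str] = None   # present for sketch-only or hybrid turns

    @property
    def modality(self) -> str:
        # Classify the turn by which channels were used.
        if self.text and self.drawing_svg:
            return "text+drawing"
        if self.text:
            return "text"
        if self.drawing_svg:
            return "drawing"
        return "empty"

@dataclass
class Round:
    """One instruction-execution round within a game."""
    round_index: int        # 0 = initial generation, >0 = refinement rounds
    instruction: Instruction
    maker_cad: str          # Maker's CAD state after executing the instruction

    @property
    def is_refinement(self) -> bool:
        return self.round_index > 0

@dataclass
class Game:
    """One full communication game between a Designer/Maker pair."""
    game_id: str
    target_cad: str                         # visible only to the Designer
    rounds: list[Round] = field(default_factory=list)
```

Under a schema like this, grouping `instruction.modality` by `is_refinement` would reproduce the kind of generation-vs-refinement modality analysis the paper reports.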
Problem

Research questions and friction points this paper is trying to address.

Bridging human-machine gap in iterative design refinement
Analyzing multimodal instructions for CAD modifications
Benchmarking VLMs on generation vs refinement tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multimodal dataset for CAD refinement
Human–human communication game for data collection and analysis
VLM benchmark for refinement instructions