🤖 AI Summary
Current CAD systems lack general-purpose intelligent agents capable of understanding and executing design tasks through natural multimodal interaction.
Method: This paper proposes a multimodal CAD assistant built around a Vision and Large Language Model (VLLM) that serves as a high-level planner, tightly integrated with domain-specific toolchains such as the FreeCAD Python API, and accepting joint natural-language and image inputs. It combines tool-augmented reasoning, sandboxed Python execution, and geometric state awareness to generate CAD commands, execute them iteratively, and verify their effect on the evolving design.
Contribution/Results: The paper introduces the first VLLM-CAD co-design paradigm enabling adaptive, closed-loop editing across diverse tasks. Evaluated on multiple CAD benchmarks, the system performs sketch generation, parametric modeling, and assembly reasoning, demonstrating end-to-end capability on complex, real-world CAD workflows while substantially reducing manual intervention.
📝 Abstract
We propose CAD-Assistant, a general-purpose CAD agent for AI-assisted design. Our approach is based on a powerful Vision and Large Language Model (VLLM) as a planner and a tool-augmentation paradigm using CAD-specific modules. CAD-Assistant addresses multimodal user queries by generating actions that are iteratively executed on a Python interpreter equipped with the FreeCAD software, accessed via its Python API. Our framework assesses the impact of generated CAD commands on geometry and adapts subsequent actions based on the evolving state of the CAD design. We consider a wide range of CAD-specific tools, including Python libraries, modules of the FreeCAD Python API, helper routines, rendering functions, and other specialized modules. We evaluate our method on multiple CAD benchmarks and qualitatively demonstrate the potential of tool-augmented VLLMs as generic CAD task solvers across diverse CAD workflows.
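The closed-loop pattern described above (planner emits an action, a sandboxed interpreter executes it, and the resulting geometric state feeds back into planning) can be sketched in miniature. The snippet below is a hypothetical illustration, not the paper's implementation: `MockCADState` and `run_agent_step` are invented stand-ins for the FreeCAD document state and the Python-interpreter tool, and the "planner" actions are hard-coded strings rather than VLLM outputs.

```python
# Hypothetical sketch of a closed-loop, tool-augmented CAD agent step.
# MockCADState stands in for a FreeCAD document; real CAD geometry
# and the VLLM planner are out of scope here.

class MockCADState:
    """Tracks a toy 'geometric state': named primitives and their parameters."""
    def __init__(self):
        self.objects = {}

    def describe(self):
        # Snapshot returned to the planner after each executed action.
        return {name: dict(params) for name, params in self.objects.items()}


def run_agent_step(action_code, state):
    """Execute one planner-generated action in a restricted namespace,
    then return the updated state so subsequent actions can adapt."""
    sandbox = {"state": state, "__builtins__": {}}  # minimal sandboxing
    exec(action_code, sandbox)
    return state.describe()


state = MockCADState()

# Step 1: the planner emits code creating a primitive.
feedback1 = run_agent_step(
    "state.objects['Box'] = {'Length': 10, 'Width': 5}", state)

# Step 2: having observed the state, the planner emits a corrective edit.
feedback2 = run_agent_step(
    "state.objects['Box']['Length'] = 20", state)
```

In the actual system, `run_agent_step` would instead invoke the FreeCAD Python API inside the interpreter, and the state snapshot would include rendered views and geometric measurements for the VLLM to inspect.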