🤖 AI Summary
Existing CAD modeling relies heavily on manual intervention and domain expertise, and translating natural-language specifications into executable CAD code faces challenges including weak logical reasoning, frequent syntactic errors, and low geometric fidelity. This paper proposes a multimodal chain-of-thought-guided reinforcement learning framework. To address training instability under sparse rewards, it introduces trust-region stretching, precision-aware loss weighting, and excessive-length filtering. The method integrates large language models, multimodal reasoning, and domain-specific optimization. Evaluated on the newly constructed ExeCAD dataset (16,540 instances), the approach achieves significant improvements over state-of-the-art vision-language models in executable-code rate, geometric-error control, and reasoning coherence. Together, these advances move natural-language-driven CAD automation closer to practical use.
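The trust-region stretching mentioned above can be pictured as an asymmetric PPO-style clipped surrogate, where the upper clipping bound is widened so that low-probability tokens with positive advantage can be up-weighted more aggressively. The sketch below is an illustrative assumption about the mechanism; the bound values and function name are not from the paper.

```python
def stretched_surrogate(ratio: float, advantage: float,
                        eps_low: float = 0.2, eps_high: float = 0.3) -> float:
    """PPO-style clipped objective with a stretched (asymmetric) trust region.

    `ratio` is pi_new(a|s) / pi_old(a|s). Setting eps_high > eps_low widens
    only the upper bound, allowing larger updates toward promising tokens
    (exploration) while keeping the usual lower bound for stability.
    All parameter values here are illustrative assumptions.
    """
    clipped = min(max(ratio, 1.0 - eps_low), 1.0 + eps_high)
    # Standard pessimistic min over unclipped and clipped terms
    return min(ratio * advantage, clipped * advantage)
```

With a symmetric bound of 0.2 the update for a positive-advantage token would cap at a ratio of 1.2; stretching the upper bound to 0.3 lets it reach 1.3, which is the intended exploration effect.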
📝 Abstract
Computer-Aided Design (CAD) plays a vital role in engineering and manufacturing, yet current CAD workflows require extensive domain expertise and manual modeling effort. Recent advances in large language models (LLMs) have made it possible to generate code from natural language, opening new opportunities for automating parametric 3D modeling. However, directly translating human design intent into executable CAD code remains highly challenging due to the need for logical reasoning, syntactic correctness, and numerical precision. In this work, we propose CAD-RL, a multimodal Chain-of-Thought (CoT) guided reinforcement learning post-training framework for CAD modeling code generation. Our method combines CoT-based cold start with goal-driven reinforcement learning post-training using three task-specific rewards: an executability reward, a geometric accuracy reward, and an external evaluation reward. To ensure stable policy learning under sparse and high-variance reward conditions, we introduce three targeted optimization strategies: Trust Region Stretch for improved exploration, Precision Token Loss for enhanced accuracy on dimensional parameters, and Overlong Filtering to reduce noisy supervision. To support training and benchmarking, we release ExeCAD, a novel dataset comprising 16,540 real-world CAD examples with paired natural language and structured design language descriptions, executable CADQuery scripts, and rendered 3D models. Experiments demonstrate that CAD-RL achieves significant improvements in reasoning quality, output precision, and code executability over existing vision-language models (VLMs).
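The three task-specific rewards and the Overlong Filtering step described in the abstract can be sketched as a single scoring function. Everything below is a hypothetical illustration: the weights, the length budget, the use of a syntax check as a cheap executability proxy, and the linear decay of the geometry term are assumptions, not the paper's implementation.

```python
def executability_reward(code: str) -> float:
    """1.0 if the generated script at least parses as Python (cheap proxy
    for running the full CADQuery script in a sandbox), else 0.0."""
    try:
        compile(code, "<cad_script>", "exec")
        return 1.0
    except SyntaxError:
        return 0.0

def geometry_reward(geo_error: float, tol: float = 0.05) -> float:
    """Reward decaying linearly from 1.0 at zero geometric error to 0.0 at
    the tolerance threshold (decay shape is an assumption)."""
    return max(0.0, 1.0 - geo_error / tol)

def total_reward(code: str, geo_error: float, eval_score: float,
                 n_tokens: int, max_len: int = 2048,
                 weights: tuple = (0.4, 0.4, 0.2)):
    """Weighted sum of executability, geometric accuracy, and external
    evaluation rewards. Overlong Filtering: generations exceeding the
    length budget are dropped from the policy update (returned as None)
    so truncated outputs do not inject noisy supervision."""
    if n_tokens > max_len:
        return None  # sample filtered out of the update batch
    w_exec, w_geo, w_eval = weights
    return (w_exec * executability_reward(code)
            + w_geo * geometry_reward(geo_error)
            + w_eval * eval_score)
```

In a GRPO/PPO-style loop, filtered samples simply contribute no gradient, while the remaining rewards are normalized into advantages as usual.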