cadrille: Multi-modal CAD Reconstruction with Online Reinforcement Learning

📅 2025-05-28
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing CAD reconstruction methods are largely limited to single-modal inputs (e.g., point clouds, images, or text), which limits their generalization and robustness across modalities. This paper introduces a multi-modal CAD reconstruction framework for engineering design that unifies point-cloud, image, and textual inputs within a single architecture. The approach features two key innovations: (1) the first integration of online reinforcement learning, specifically Group Relative Preference Optimization (GRPO), into CAD reconstruction, establishing a two-stage training paradigm of supervised fine-tuning (SFT) followed by online RL; and (2) the combination of vision-language models (VLMs), supervision from procedurally generated synthetic data, multi-modal encoding, and executable CAD program generation. On the DeepCAD benchmark, the SFT model outperforms single-modal baselines across all three input modalities. After RL fine-tuning, it sets a new state of the art on three challenging datasets, including a real-world one, demonstrating substantially improved cross-modal robustness and generalization.

📝 Abstract
Computer-Aided Design (CAD) plays a central role in engineering and manufacturing, making it possible to create precise and editable 3D models. Using a variety of sensor or user-provided data as inputs for CAD reconstruction can democratize access to design applications. However, existing methods typically focus on a single input modality, such as point clouds, images, or text, which limits their generalizability and robustness. Leveraging recent advances in vision-language models (VLM), we propose a multi-modal CAD reconstruction model that simultaneously processes all three input modalities. Inspired by large language model (LLM) training paradigms, we adopt a two-stage pipeline: supervised fine-tuning (SFT) on large-scale procedurally generated data, followed by reinforcement learning (RL) fine-tuning using online feedback, obtained programmatically. Furthermore, we are the first to explore RL fine-tuning of LLMs for CAD tasks, demonstrating that online RL algorithms such as Group Relative Preference Optimization (GRPO) outperform offline alternatives. On the DeepCAD benchmark, our SFT model outperforms existing single-modal approaches in all three input modalities simultaneously. More importantly, after RL fine-tuning, cadrille sets a new state of the art on three challenging datasets, including a real-world one.
Problem

Research questions and friction points this paper is trying to address.

Multi-modal CAD reconstruction from diverse inputs
Overcoming single-modality limitations in CAD models
Enhancing CAD performance with online RL fine-tuning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-modal CAD model processes point clouds, images, text
Two-stage pipeline: supervised then reinforcement learning fine-tuning
Online RL algorithms outperform offline methods in CAD
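The RL stage described above samples a group of candidate CAD programs per input, scores each by executing it programmatically, and normalizes rewards within the group. A minimal sketch of that GRPO-style advantage computation; the function names and the reward shaping (penalizing non-executable programs, scoring valid ones by geometric agreement) are illustrative assumptions, not the paper's implementation:

```python
import math

def group_relative_advantages(rewards):
    """GRPO-style advantages: normalize each completion's reward against
    its own sampling group, so no learned value baseline is needed."""
    mean = sum(rewards) / len(rewards)
    std = math.sqrt(sum((r - mean) ** 2 for r in rewards) / len(rewards))
    return [(r - mean) / (std + 1e-8) for r in rewards]

def cad_reward(program_executes, geometric_score):
    """Hypothetical reward shaping for generated CAD programs:
    invalid (non-executable) programs get a fixed penalty, valid ones
    are scored by agreement with the target shape (e.g., an IoU in [0, 1])."""
    return geometric_score if program_executes else -1.0
```

In this scheme a group where every sampled program scores equally yields zero advantages, so only within-group reward differences drive the policy update.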