RLCAD: Reinforcement Learning Training Gym for Revolution Involved CAD Command Sequence Generation

📅 2025-03-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing CAD command-sequence generation methods support only 2D sketching, extrusion, and Boolean operations, limiting their ability to model complex geometries involving revolution features. This work introduces the first reinforcement learning (RL) training environment for parametric CAD modeling that natively supports revolution operations, enabling end-to-end, closed-loop generation of fully parameterized modeling commands from B-Rep inputs. We innovatively incorporate revolution as a learnable atomic action within the RL action space and propose a geometric-feedback-driven PPO training paradigm that tightly integrates a CAD kernel, B-Rep difference metrics, state encoding, and action modeling. On the B-Rep-to-command-sequence generation task, our method achieves state-of-the-art performance. Moreover, it improves training efficiency by 39× over prior CAD RL environments, demonstrating significant advances in both expressiveness and scalability for learning-based CAD synthesis.

📝 Abstract
A CAD command sequence is a typical parametric design paradigm in 3D CAD systems, where a model is constructed by combining 2D sketches with operations such as extrusion, revolution, and Boolean operations. Although there is growing academic interest in the automatic generation of command sequences, existing methods and datasets support only 2D sketching, extrusion, and Boolean operations, making it difficult to represent more complex geometries. In this paper, we present a reinforcement learning (RL) training environment (gym) built on a CAD geometric engine. Given an input boundary representation (B-Rep) geometry, the policy network in the RL algorithm generates an action. This action, together with previously generated actions, is processed within the gym to produce the corresponding CAD geometry, which is then fed back into the policy network. The rewards, determined by the difference between the generated and target geometries within the gym, are used to update the RL network. Our method supports operations beyond sketches, Boolean operations, and extrusion, including revolution. With this training gym, we achieve state-of-the-art (SOTA) quality in generating command sequences from B-Rep geometries. In addition, our method improves the efficiency of command sequence generation by a factor of 39× compared with the previous training gym.
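The closed loop described above can be sketched as a minimal Gym-style environment. All names here (`CADGym`, the action set, the scalar "volume" standing in for the B-Rep built by a real CAD kernel) are illustrative assumptions, not the paper's actual interface; the B-Rep difference metric is replaced by a toy absolute difference so the loop is runnable.

```python
# Hypothetical sketch of the paper's closed-loop training gym. The CAD kernel
# and B-Rep difference metric are replaced by a 1-D "volume" proxy; only the
# loop structure (state -> action -> geometry -> reward) mirrors the abstract.

class CADGym:
    """Minimal Gym-style environment: actions append CAD-like commands,
    and the reward is the negative geometric difference to the target."""

    # Toy atomic actions; note that "revolve" is a first-class action,
    # matching the paper's inclusion of revolution in the action space.
    ACTIONS = {"sketch": 0.0, "extrude": 1.0, "revolve": 2.0, "boolean": -0.5}

    def __init__(self, target_volume):
        self.target = target_volume
        self.reset()

    def reset(self):
        self.sequence = []   # command sequence generated so far
        self.volume = 0.0    # stand-in for the B-Rep rebuilt by a CAD kernel
        return self.volume

    def step(self, action):
        # A real gym would invoke the CAD engine here to rebuild the geometry
        # from the full command sequence, then compare B-Reps.
        self.sequence.append(action)
        self.volume += self.ACTIONS[action]
        diff = abs(self.target - self.volume)     # B-Rep difference proxy
        reward = -diff                            # smaller difference, higher reward
        done = diff < 1e-6 or len(self.sequence) >= 10
        return self.volume, reward, done

# Deterministic rollout standing in for the PPO policy network.
env = CADGym(target_volume=3.0)
env.reset()
for action in ["extrude", "revolve"]:   # extrude (+1.0), then revolve (+2.0)
    state, reward, done = env.step(action)
```

In the real system, the policy network would choose each action from the encoded B-Rep state, and PPO would update the policy from the accumulated rewards; the sketch only illustrates the environment-side contract.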
Problem

Research questions and friction points this paper is trying to address.

Extends CAD command sequence generation to include revolution operations
Improves geometric complexity representation in automated CAD design
Enhances efficiency and quality of reinforcement learning-based CAD generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reinforcement learning gym for CAD command generation
Supports revolution operations beyond basic sketching
Achieves 39X efficiency improvement in sequence generation
Xiaolong Yin
Zhejiang University, China
Xingyu Lu
Zhejiang University, China
Jiahang Shen
Zhejiang University, China
Jingzhe Ni
Zhejiang University, China
Hailong Li
Assistant Professor, Department of Radiology, Cincinnati Children's Hospital Medical Center
Machine Learning, Deep Learning, Medical Image Analysis, Data Mining
Ruofeng Tong
Zhejiang University, China
Min Tang
Zhejiang University, China
Peng Du
Zhejiang University, China