🤖 AI Summary
This work addresses three core challenges in natural language (NL)-to-Verilog code generation: the absence of a verifiable training environment, the scarcity of high-quality NL-code parallel data, and the prohibitively high computational cost of reinforcement learning with verifiable reward (RLVR). Methodologically, we introduce (i) a rule-based testbench generator enabling automated equivalence checking; (ii) a round-trip code-NL-code data synthesis framework that filters out inequivalent pairs to alleviate data scarcity; and (iii) adaptive DAPO, an RLVR algorithm that lowers training cost by adaptively adjusting the sampling rate, applied within a two-stage "distill-then-RL" training paradigm. Our CodeV-R1-7B model achieves 68.6% and 72.9% pass@1 on VerilogEval v2 and RTLLM v1.1, respectively, surpassing prior SOTA by 12-20% and matching or even exceeding the 671B-parameter DeepSeek-R1. This significantly advances the reliability and efficiency of NL-to-HDL synthesis for EDA applications.
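For context on the reported metric: benchmarks such as VerilogEval typically compute pass@1 with the unbiased pass@k estimator of Chen et al. (2021). The minimal Python sketch below illustrates that estimator; the paper does not detail its exact evaluation script, so this is a reference implementation of the standard formula, not the authors' code.

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator (Chen et al., 2021): draw n samples per
    problem, count c that pass the testbench, and estimate the probability
    that at least one of k randomly chosen samples is correct."""
    if n - c < k:
        return 1.0  # every size-k subset must contain a passing sample
    return 1.0 - comb(n - c, k) / comb(n, k)

# With k = 1 this reduces to the plain pass rate c / n, e.g.:
assert abs(pass_at_k(n=20, c=14, k=1) - 0.7) < 1e-12
```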
📝 Abstract
Large language models (LLMs) trained via reinforcement learning with verifiable reward (RLVR) have achieved breakthroughs on tasks with explicit, automatable verification, such as software programming and mathematical problems. Extending RLVR to electronic design automation (EDA), especially to automatically generating hardware description languages (HDLs) like Verilog from natural-language (NL) specifications, however, poses three key challenges: the lack of automated and accurate verification environments, the scarcity of high-quality NL-code pairs, and the prohibitive computational cost of RLVR. To this end, we introduce CodeV-R1, an RLVR framework for training Verilog generation LLMs. First, we develop a rule-based testbench generator that performs robust equivalence checking against golden references. Second, we propose a round-trip data synthesis method that pairs open-source Verilog snippets with LLM-generated NL descriptions, verifies code-NL-code consistency via the generated testbench, and filters out inequivalent examples to yield a high-quality dataset. Third, we employ a two-stage "distill-then-RL" training pipeline: distillation for the cold start of reasoning abilities, followed by adaptive DAPO, our novel RLVR algorithm that reduces training cost by adaptively adjusting the sampling rate. The resulting model, CodeV-R1-7B, achieves 68.6% and 72.9% pass@1 on VerilogEval v2 and RTLLM v1.1, respectively, surpassing prior state-of-the-art by 12-20% while matching or even exceeding the performance of the 671B-parameter DeepSeek-R1. We will release our model, training pipeline, and dataset to facilitate research in the EDA and LLM communities.
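The round-trip filter can be sketched roughly as below. The helpers `describe`, `regenerate`, and `make_testbench` are hypothetical stand-ins for the LLM description step, the LLM regeneration step, and the rule-based testbench generator; Icarus Verilog (`iverilog`/`vvp`) is just one open-source simulator that could drive the comparison, as the paper does not name its toolchain, and comparing stimulus-response traces is a simulation-based proxy for equivalence checking rather than a formal proof.

```python
import subprocess
import tempfile
from pathlib import Path

def simulate(design: str, testbench: str) -> str:
    """Compile a Verilog design with its testbench under Icarus Verilog and
    return the simulation's stdout trace. Assumes the testbench instantiates
    the design's top module by a fixed name."""
    with tempfile.TemporaryDirectory() as tmp:
        dut, tb, sim = Path(tmp, "dut.v"), Path(tmp, "tb.v"), Path(tmp, "sim.out")
        dut.write_text(design)
        tb.write_text(testbench)
        subprocess.run(["iverilog", "-o", str(sim), str(tb), str(dut)],
                       check=True, capture_output=True)
        result = subprocess.run(["vvp", str(sim)], check=True,
                                capture_output=True, text=True)
        return result.stdout

def round_trip_filter(snippet, describe, regenerate, make_testbench):
    """Admit an (NL, code) pair only if code regenerated from the LLM-written
    description matches the original snippet's behavior on the generated
    testbench (code -> NL -> code consistency)."""
    nl_spec = describe(snippet)       # LLM: Verilog snippet -> NL description
    candidate = regenerate(nl_spec)   # LLM: NL description -> Verilog
    tb = make_testbench(snippet)      # rule-based stimulus/checker generator
    if simulate(snippet, tb) == simulate(candidate, tb):
        return nl_spec, snippet       # consistent: keep the pair
    return None                       # inequivalent: filter out
```

Under this scheme, a pair survives only when the description carries enough information to reconstruct functionally equivalent hardware, which is exactly the property the RLVR reward later verifies.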