🤖 AI Summary
LLM-generated RTL code still suffers from syntax errors, functional hallucinations, and weak alignment with designer intent. To address these issues, the authors propose EARL, an entropy-aware framework built on Reinforcement Learning with Verifiable Rewards (RLVR). EARL dynamically identifies high-entropy, structurally critical tokens (e.g., always, if, assign, posedge) and applies selective gradient updates exclusively to them, while integrating verification-based reward signals to enable precise policy optimization. This approach strengthens the model's capacity to capture RTL structural semantics and design intent. Evaluated on the VerilogEval and RTLLM benchmarks, EARL achieves up to a 14.7% improvement in functional pass rate over prior LLM baselines, improves training stability, and reduces redundant parameter updates by 32.5%. By unifying syntactic awareness with verifiable rewards, EARL establishes a more reliable and rigorously verifiable paradigm for hardware design automation.
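As a rough illustration of what a verifiable reward can look like in this setting, the sketch below scores a generated design by whether it compiles and passes a reference testbench under Icarus Verilog. The tool choice, the 0 / 0.5 / 1.0 scoring, and the "ALL TESTS PASSED" convention are assumptions made for illustration, not details taken from the paper.

```python
# Hypothetical sketch of a verifiable reward for RTL generation: the reward comes
# from checks a hardware toolchain can certify (clean compile, passing testbench),
# not from a learned critic. Icarus Verilog and the scoring weights are assumed.
import subprocess
import tempfile
from pathlib import Path


def verifiable_reward(design_code: str, testbench_code: str) -> float:
    """Return 0.0 on compile failure, 0.5 if the design compiles,
    and 1.0 if it also passes the provided testbench."""
    with tempfile.TemporaryDirectory() as tmp:
        design = Path(tmp) / "design.v"
        tb = Path(tmp) / "tb.v"
        sim = Path(tmp) / "sim.out"
        design.write_text(design_code)
        tb.write_text(testbench_code)

        # Step 1: compilation check (syntax / elaboration errors).
        compile_cmd = ["iverilog", "-o", str(sim), str(design), str(tb)]
        if subprocess.run(compile_cmd, capture_output=True).returncode != 0:
            return 0.0

        # Step 2: functional check via simulation; the testbench is assumed to
        # print "ALL TESTS PASSED" only when every assertion holds.
        run = subprocess.run(["vvp", str(sim)], capture_output=True, text=True)
        if run.returncode == 0 and "ALL TESTS PASSED" in run.stdout:
            return 1.0
        return 0.5
```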
📝 Abstract
Recent advances in large language models (LLMs) have demonstrated significant potential in hardware design automation, particularly in using natural language to synthesize Register-Transfer Level (RTL) code. Despite this progress, a gap remains between model capability and the demands of real-world RTL design: generated code still exhibits syntax errors, functional hallucinations, and weak alignment with designer intent. Reinforcement Learning with Verifiable Rewards (RLVR) offers a promising approach to bridge this gap, as hardware provides executable and formally checkable signals that can be used to further align model outputs with design intent. However, in long, structured RTL code sequences, not all tokens contribute equally to functional correctness, and naïvely spreading gradients across all tokens dilutes the learning signal. A key insight from our entropy analysis of RTL generation is that only a small fraction of tokens (e.g., always, if, assign, posedge) exhibit high uncertainty, and these tokens largely determine control flow and module structure. To address these challenges, we present EARL, an Entropy-Aware Reinforcement Learning framework for Verilog generation. EARL performs policy optimization using verifiable reward signals and introduces entropy-guided selective updates that gate policy gradients to high-entropy tokens. This approach preserves training stability and concentrates gradient updates on functionally important regions of code. Our experiments on VerilogEval and RTLLM show that EARL improves functional pass rates over prior LLM baselines by up to 14.7%, while reducing unnecessary updates and improving training stability. These results indicate that focusing RL on critical, high-uncertainty tokens enables more reliable and targeted policy improvement for structured RTL code generation.
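To make the entropy-gating idea concrete, the sketch below masks a token-level policy-gradient loss so that only tokens whose predictive entropy exceeds a threshold receive gradient. The REINFORCE-style objective, the threshold hyperparameter `tau`, and the broadcast of a sequence-level reward are illustrative assumptions, not EARL's exact formulation.

```python
# Minimal sketch of entropy-guided selective updates: per-token entropy is
# computed from the policy's logits, and low-entropy tokens are masked out of
# the policy-gradient loss so updates concentrate on uncertain, structure-
# defining tokens. The loss form and `tau` are assumed for illustration.
import torch
import torch.nn.functional as F


def entropy_gated_pg_loss(
    logits: torch.Tensor,      # [batch, seq_len, vocab] policy logits
    actions: torch.Tensor,     # [batch, seq_len] sampled token ids
    rewards: torch.Tensor,     # [batch] verifiable sequence-level rewards
    tau: float = 2.0,          # entropy threshold (assumed hyperparameter)
) -> torch.Tensor:
    log_probs = F.log_softmax(logits, dim=-1)
    probs = log_probs.exp()

    # Per-token entropy H_t = -sum_v p(v) log p(v).
    entropy = -(probs * log_probs).sum(dim=-1)          # [batch, seq_len]

    # Gate: only high-entropy tokens contribute to the policy gradient.
    gate = (entropy > tau).float()                      # [batch, seq_len]

    # Log-probability of the tokens actually sampled.
    taken = log_probs.gather(-1, actions.unsqueeze(-1)).squeeze(-1)

    # REINFORCE-style loss with the sequence reward broadcast to gated tokens.
    per_token = -rewards.unsqueeze(-1) * taken * gate
    return per_token.sum() / gate.sum().clamp_min(1.0)
```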