Synthesis-in-the-Loop Evaluation of LLMs for RTL Generation: Quality, Reliability, and Failure Modes

📅 2026-03-11
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses a critical gap in the evaluation of large language models for RTL generation: existing benchmarks typically assess only functional correctness while neglecting synthesizability and hardware implementation quality. We propose the first comprehensive closed-loop evaluation framework for RTL generation, automatically synthesizing the outputs of 32 models on 202 Verilog tasks using the Nangate45 process technology. To quantify hardware quality, we introduce the Hardware Quality Index (HQI), which integrates metrics such as area, delay, and synthesis warnings. We further establish a failure taxonomy covering 195 distinct real-world synthesis failures, revealing three performance tiers among models (with Gemini-3-Pro leading at 85.1 HQI) and uncovering systematic differences in failure patterns between proprietary and open-source models. The evaluation is highly consistent across three standard cell libraries (Spearman ρ > 0.99).

📝 Abstract
RTL generation demands more than software code synthesis: designs must be syntactically valid, synthesizable, functionally correct, and hardware-efficient. Existing evaluations stop at functional correctness, leaving synthesizability and implementation quality unmeasured. We evaluate 32 language models on 202 Verilog tasks from VerilogEval and RTLLM, with five attempts each, scoring via the Hardware Quality Index (HQI), a 0–100 metric integrating post-synthesis area, delay, and warning count relative to expert references under a Nangate45 45 nm flow. Three performance tiers emerge: 13 frontier models achieve Global HQI above 71, led by Gemini-3-Pro (87.5% coverage, 85.1 HQI); 11 mid-tier models cluster at 53–68; 8 fall below 53. The capability-to-deployment gap (best-of-five vs. single-attempt) spans 3.8–22.1 HQI points, motivating multi-sample strategies. A tool-adjudicated taxonomy of 195 genuine synthesis failures reveals systematic divergence: proprietary models fail late through elaboration errors and synthesis timeouts; open-weight models fail early through missing module wrappers and non-synthesizable constructs, consistent with training on simulation-grade rather than synthesis-grade RTL. Rankings hold across three technology libraries at Spearman ρ > 0.99.
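The abstract describes the HQI as a 0–100 score combining post-synthesis area, delay, and warning count relative to an expert reference design, but does not give its formula. The sketch below is a purely illustrative stand-in, assuming a hypothetical weighted combination of area/delay ratios and a warning penalty; the paper's actual weights and normalization may differ.

```python
def hqi(area, delay, warnings, ref_area, ref_delay,
        weights=(0.4, 0.4, 0.2)):
    """Illustrative Hardware Quality Index sketch (NOT the paper's formula).

    Scores a synthesized design on a 0-100 scale against an expert
    reference: 100 means matching (or beating) the reference on area
    and delay with zero synthesis warnings.
    """
    w_area, w_delay, w_warn = weights
    # Ratio scores: 1.0 when the candidate matches the reference,
    # shrinking as the candidate uses more area or is slower.
    area_score = min(ref_area / area, 1.0) if area > 0 else 0.0
    delay_score = min(ref_delay / delay, 1.0) if delay > 0 else 0.0
    # Hypothetical penalty: each warning halves-then-thirds the term.
    warn_score = 1.0 / (1.0 + warnings)
    return 100.0 * (w_area * area_score +
                    w_delay * delay_score +
                    w_warn * warn_score)

# A design matching the reference with no warnings scores 100;
# doubling the area drops the score by the area weight's share.
print(hqi(100.0, 10.0, 0, 100.0, 10.0))  # → 100.0
print(hqi(200.0, 10.0, 0, 100.0, 10.0))  # → 80.0
```

Any concrete weighting like this is an assumption; the point is only that such a composite metric can fold area, timing, and warning count into a single comparable number.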
Problem

Research questions and friction points this paper is trying to address.

RTL generation
synthesizability
hardware quality
large language models
functional correctness
Innovation

Methods, ideas, or system contributions that make the work stand out.

Synthesis-in-the-Loop
Hardware Quality Index (HQI)
RTL generation
LLM evaluation
synthesizability