VeriCoder: Enhancing LLM-Based RTL Code Generation through Functional Correctness Validation

📅 2025-04-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing LLM-based RTL generation methods prioritize syntactic correctness while neglecting functional verification, yielding training examples that compile but may not implement the intended behavior. To address this, VERICODER fine-tunes an RTL code-generation model on a dataset validated for functional correctness. The approach has three key components: (1) a large-scale, functionally validated RTL dataset of over 125,000 examples, each consisting of a natural language description, an RTL implementation, and passing tests; (2) a dataset-construction methodology combining automated unit test generation with feedback-directed refinement, in which a teacher model (GPT-4o-mini) writes tests, revises the RTL design based on Icarus Verilog simulation results, and updates the tests when they conflict with the natural language specification; and (3) state-of-the-art functional correctness on the VerilogEval and RTLLM benchmarks, with relative gains of up to 71.7% and 27.4% respectively. An ablation study confirms that the functionally validated data is the primary driver of these gains.

📝 Abstract
Recent advances in Large Language Models (LLMs) have sparked growing interest in applying them to Electronic Design Automation (EDA) tasks, particularly Register Transfer Level (RTL) code generation. While several RTL datasets have been introduced, most focus on syntactic validity rather than functional validation with tests, leading to training examples that compile but may not implement the intended behavior. We present VERICODER, a model for RTL code generation fine-tuned on a dataset validated for functional correctness. This fine-tuning dataset is constructed using a novel methodology that combines unit test generation with feedback-directed refinement. Given a natural language specification and an initial RTL design, we prompt a teacher model (GPT-4o-mini) to generate unit tests and iteratively revise the RTL design based on its simulation results using the generated tests. If necessary, the teacher model also updates the tests to ensure they comply with the natural language specification. As a result of this process, every example in our dataset is functionally validated, consisting of a natural language description, an RTL implementation, and passing tests. Fine-tuned on this dataset of over 125,000 examples, VERICODER achieves state-of-the-art metrics in functional correctness on VerilogEval and RTLLM, with relative gains of up to 71.7% and 27.4% respectively. An ablation study further shows that models trained on our functionally validated dataset outperform those trained on functionally non-validated datasets, underscoring the importance of high-quality datasets in RTL code generation.
Problem

Research questions and friction points this paper is trying to address.

Ensuring functional correctness in LLM-generated RTL code
Addressing lack of validated datasets for RTL training
Improving RTL code accuracy via test-driven refinement
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fine-tuned LLM with functional correctness validation
Unit test generation with feedback-directed refinement
Dataset validated via iterative RTL and test updates
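The feedback-directed refinement loop from the abstract can be sketched as a small driver function. This is a minimal illustration, not the authors' implementation: the callables `gen_tests`, `fix_rtl`, and `simulate` are hypothetical stand-ins for the teacher-model prompts (GPT-4o-mini) and an Icarus Verilog simulation harness.

```python
from typing import Callable, Optional, Tuple

def refine(
    spec: str,
    rtl: str,
    gen_tests: Callable[[str], str],             # teacher model writes a testbench from the spec
    fix_rtl: Callable[[str, str, str], str],     # teacher revises RTL given spec + failure log
    simulate: Callable[[str, str], Tuple[bool, str]],  # e.g. iverilog + vvp; returns (passed, log)
    max_iters: int = 5,
) -> Optional[Tuple[str, str, str]]:
    """Iteratively revise `rtl` until the generated tests pass in simulation.

    Returns a functionally validated (spec, rtl, tests) triple on success,
    or None if the design never passes within `max_iters` rounds (such
    examples would be discarded from the dataset).
    """
    tests = gen_tests(spec)
    for _ in range(max_iters):
        passed, log = simulate(rtl, tests)
        if passed:
            return spec, rtl, tests
        # Feed the simulation failure back to the teacher model for revision.
        rtl = fix_rtl(spec, rtl, log)
    return None
```

In the paper's pipeline the teacher may also revise the tests themselves when they violate the natural language specification; that extra step is omitted here for brevity.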