NotSoTiny: A Large, Living Benchmark for RTL Code Generation

📅 2025-12-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing RTL generation benchmarks suffer from limited scale, simplistic designs, inadequate verification, and data contamination. To address these issues, we propose the first large-scale, dynamically updated “living benchmark” for RTL generation, grounded in real hardware designs from Tiny Tapeout and supporting rich structural diversity and context-aware evaluation. Our method introduces automated RTL extraction and cleaning, formal equivalence checking (using Yosys+ABC), coverage-driven testbench generation, and a timing-aware verification framework. The benchmark automatically deduplicates and updates in step with hardware community iterations, effectively mitigating data contamination and evaluation bias. Experimental results show that state-of-the-art LLMs achieve less than 28% functional correctness on complex RTL generation tasks under this benchmark, significantly raising the bar for evaluation rigor. This work establishes the first reliable, reproducible, and hardware-grounded evaluation standard for AI-driven RTL synthesis.
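The formal equivalence check mentioned above can be sketched with Yosys's built-in `equiv_*` flow. This is a minimal illustration, not the paper's actual pipeline: the file names, top-module name, and the exact command sequence are assumptions based on the standard Yosys equivalence-checking recipe.

```python
import subprocess

def yosys_equiv_script(gold: str, gate: str, top: str) -> str:
    """Build a Yosys command script that checks formal equivalence
    between a reference design (gold) and a candidate design (gate).
    File names and top-module name are caller-supplied assumptions."""
    return "; ".join([
        f"read_verilog {gold}",
        f"prep -top {top}",
        "design -stash gold",
        f"read_verilog {gate}",
        f"prep -top {top}",
        "design -stash gate",
        f"design -copy-from gold -as gold {top}",
        f"design -copy-from gate -as gate {top}",
        "equiv_make gold gate equiv",   # build the combined equivalence circuit
        "equiv_simple",                 # try to prove each equivalence point with SAT
        "equiv_status -assert",         # fail (non-zero exit) if any point is unproven
    ])

def check_equivalence(gold: str, gate: str, top: str) -> bool:
    """Run Yosys on the script; True means the designs were proven equivalent.
    Requires a `yosys` binary on PATH."""
    result = subprocess.run(
        ["yosys", "-p", yosys_equiv_script(gold, gate, top)],
        capture_output=True, text=True,
    )
    return result.returncode == 0
```

A benchmark harness along these lines would call `check_equivalence("reference.v", "llm_output.v", "top")` for each generated design; the paper's framework additionally layers coverage-driven testbenches and timing-aware checks on top of this structural proof.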

📝 Abstract
LLMs have shown early promise in generating RTL code, yet evaluating their capabilities in realistic setups remains a challenge. To date, RTL benchmarks have been limited in scale, skewed toward trivial designs, offer minimal verification rigor, and remain vulnerable to data contamination. To overcome these limitations and push the field forward, this paper introduces NotSoTiny, a benchmark that assesses LLMs on the generation of structurally rich, context-aware RTL. Built from hundreds of real hardware designs produced by the Tiny Tapeout community, our automated pipeline removes duplicates, verifies correctness, and periodically incorporates new designs to mitigate contamination, matching the Tiny Tapeout release schedule. Evaluation results show that NotSoTiny tasks are more challenging than those in prior benchmarks, demonstrating its effectiveness in exposing the current limitations of LLMs applied to hardware design and in guiding the improvement of this promising technology.
Problem

Research questions and friction points this paper is trying to address.

Evaluating LLMs' RTL code generation in realistic hardware design scenarios
Addressing limitations of small-scale, trivial benchmarks with minimal verification
Mitigating data contamination issues in RTL generation benchmarks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Automated pipeline for RTL benchmark creation
Incorporates real hardware designs to ensure realism
Periodic updates to mitigate data contamination
Razine Moundir Ghorab
Barcelona Supercomputing Center
Emanuele Parisi
Barcelona Supercomputing Center
Cristian Gutierrez-Gomez
Barcelona Supercomputing Center
Miquel Alberti-Binimelis
Barcelona Supercomputing Center
Miquel Moreto
Associate Professor at UPC and Group Leader at BSC
Computer Architecture, HPC
Dario Garcia-Gasulla
Barcelona Supercomputing Center
Gokcen Kestor
Pacific Northwest National Laboratory; University of California, Merced
Programming models for high performance computing, runtime and compiler