🤖 AI Summary
This work addresses the limitations of existing large language model (LLM) evaluation benchmarks for chip design, which suffer from task simplicity and performance saturation and thus fail to reflect real-world industrial complexity. We propose the first comprehensive benchmark tailored for AI-assisted chip design, encompassing three core tasks: Verilog code generation, debugging, and cross-language reference model generation. The benchmark includes 44 complex hardware modules, 89 debugging scenarios, and 132 cross-language reference samples spanning Python, SystemC, and CXXRTL. To support realistic evaluation, we establish an industry-aligned assessment framework and develop an automated toolchain for high-quality training data generation. Experimental results reveal that even state-of-the-art models such as Claude-4.5-Opus achieve only 30.74% accuracy on Verilog generation and 13.33% on Python reference model generation—far below the >95% pass rates reported on existing benchmarks—highlighting significant capability gaps in applying LLMs to practical chip design workflows.
📝 Abstract
While Large Language Models (LLMs) show significant potential in hardware engineering, current benchmarks suffer from saturation and limited task diversity, failing to reflect LLMs' performance in real industrial workflows. To address this gap, we propose a comprehensive benchmark for AI-aided chip design that rigorously evaluates LLMs across three critical tasks: Verilog generation, debugging, and reference model generation. Our benchmark features 44 realistic modules with complex hierarchical structures, 89 systematic debugging cases, and 132 reference model samples across Python, SystemC, and CXXRTL. Evaluation results reveal substantial performance gaps, with the state-of-the-art Claude-4.5-Opus achieving only 30.74% on Verilog generation and 13.33% on Python reference model generation, demonstrating significant challenges compared to existing saturated benchmarks, where SOTA models achieve over 95% pass rates. Additionally, to help enhance LLM reference model generation, we provide an automated toolbox for high-quality training data generation, facilitating future research in this underexplored domain. Our code is available at https://github.com/zhongkaiyu/ChipBench.git.