ACE-RTL: When Agentic Context Evolution Meets RTL-Specialized LLMs

📅 2026-02-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing hardware design automation approaches for accurate RTL code generation either rely on domain-specific models or employ general-purpose large language model agents, and thus struggle to balance precision and generalization. This work proposes ACE-RTL, built on the Agentic Context Evolution (ACE) framework, which unifies an RTL-specialized large language model (trained on 1.7 million samples) with a frontier reasoning model through a synergistic triad of generator, reflector, and coordinator modules that iteratively refine outputs toward functional correctness. ACE-RTL further incorporates a parallel scaling strategy that substantially reduces the number of iterations required. Evaluated on the Comprehensive Verilog Design Problems (CVDP) benchmark, ACE-RTL achieves up to a 44.87% pass rate improvement over 14 strong baselines while requiring only four iterations on average.

📝 Abstract
Recent advances in large language models (LLMs) have sparked growing interest in applying them to hardware design automation, particularly for accurate RTL code generation. Prior efforts follow two largely independent paths: (i) training domain-adapted RTL models to internalize hardware semantics, and (ii) developing agentic systems that leverage frontier generic LLMs guided by simulation feedback. However, these two paths exhibit complementary strengths and weaknesses. In this work, we present ACE-RTL, which unifies both directions through Agentic Context Evolution (ACE). ACE-RTL integrates an RTL-specialized LLM, trained on a large-scale dataset of 1.7 million RTL samples, with a frontier reasoning LLM through three synergistic components: the generator, reflector, and coordinator. These components iteratively refine RTL code toward functional correctness. We further introduce a parallel scaling strategy that significantly reduces the number of iterations required to reach correct solutions. On the Comprehensive Verilog Design Problems (CVDP) benchmark, ACE-RTL achieves up to a 44.87% pass rate improvement over 14 competitive baselines while requiring only four iterations on average.
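The generator/reflector/coordinator loop with parallel scaling described in the abstract can be sketched as a toy control flow. This is a hypothetical illustration only: the `generate`, `simulate`, and `reflect` functions, the pass condition, and the parallel fan-out of four candidates are placeholder assumptions, not the paper's actual implementation or interfaces.

```python
# Hypothetical sketch of an ACE-style refinement loop (illustrative only).

def generate(spec, feedback, n_parallel=4):
    """Generator: propose n_parallel candidate RTL snippets per iteration
    (the parallel scaling strategy). Placeholder strings stand in for code."""
    return [f"// candidate {i} for {spec} (feedback: {feedback})"
            for i in range(n_parallel)]

def simulate(candidate):
    """Stand-in for RTL simulation feedback; a trivial toy pass condition."""
    return "candidate 0" in candidate

def reflect(candidate):
    """Reflector: turn a failing candidate into textual feedback for the
    next round, evolving the shared context."""
    return f"revise: {candidate[:30]}..."

def ace_loop(spec, max_iters=4):
    """Coordinator: scan parallel candidates, stop on a passing one, or
    feed reflection back into the next generation round."""
    feedback = "none"
    for it in range(1, max_iters + 1):
        candidates = generate(spec, feedback)
        for cand in candidates:
            if simulate(cand):
                return cand, it
        feedback = reflect(candidates[0])
    return None, max_iters

result, iters = ace_loop("4-bit counter")
```

Under this sketch, widening `n_parallel` raises the chance that some candidate in a round passes simulation, which is the intuition behind needing fewer sequential iterations.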
Problem

Research questions and friction points this paper is trying to address.

RTL code generation
hardware design automation
large language models
agentic systems
domain-adapted models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Agentic Context Evolution
RTL-specialized LLM
hardware design automation
iterative refinement
parallel scaling