VeriMoA: A Mixture-of-Agents Framework for Spec-to-HDL Generation

📅 2025-10-31
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the challenges of domain-knowledge scarcity, limited reasoning space, and severe noise propagation in multi-agent collaboration for RTL-level hardware generation with large language models (LLMs), this paper proposes a training-free multi-agent framework, VeriMoA. The framework introduces: (1) a quality-guided caching mechanism that accumulates and reuses cross-stage knowledge; (2) a multi-path generation strategy that enhances diversity in solution-space exploration; and (3) a collaborative translation and quality-ranked selection mechanism leveraging C++/Python intermediate representations. Evaluated on the VerilogEval 2.0 and RTLLM 2.0 benchmarks, the approach achieves 15–30% improvements in Pass@1, substantially narrowing the performance gap between small LLMs and larger or fine-tuned models. This work establishes a paradigm for trustworthy, resource-efficient RTL generation in low-resource settings.
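The quality-guided cache described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the class and function names (`QualityCache`, `run_moa`), the layer/agent structure, and the scalar `score` field (standing in for whatever syntax- or simulation-based quality signal the framework uses) are all assumptions made for clarity. The key idea shown is that candidates are ranked and selected across *all* accumulated layers, not just the most recent one, which is what limits noise propagation.

```python
from dataclasses import dataclass, field


@dataclass
class CandidateHDL:
    """One intermediate HDL output with an estimated quality score."""
    code: str
    score: float  # hypothetical quality estimate (e.g. syntax/simulation checks)


@dataclass
class QualityCache:
    """Accumulates every intermediate candidate across MoA layers."""
    entries: list = field(default_factory=list)

    def add(self, candidates):
        self.entries.extend(candidates)

    def top_k(self, k):
        # Rank over the entire generation history, not only the last layer.
        return sorted(self.entries, key=lambda c: c.score, reverse=True)[:k]


def run_moa(spec, agents, layers=3, k=2):
    """Run a layered mixture-of-agents pass with quality-guided caching.

    Each agent is a callable (spec, context) -> CandidateHDL; the best
    cached candidates are fed forward as shared context for the next layer.
    """
    cache = QualityCache()
    context = []
    for _ in range(layers):
        new = [agent(spec, context) for agent in agents]
        cache.add(new)
        context = cache.top_k(k)  # cross-layer selection, not last-layer-only
    return cache.top_k(1)[0]
```

A low-quality output from one layer can no longer dominate later layers: if an earlier candidate scored higher, the cache keeps serving it as context instead.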

📝 Abstract
Automation of Register Transfer Level (RTL) design can help developers meet increasing computational demands. Large Language Models (LLMs) show promise for Hardware Description Language (HDL) generation, but face challenges due to limited parametric knowledge and domain-specific constraints. While prompt engineering and fine-tuning have limitations in knowledge coverage and training costs, multi-agent architectures offer a training-free paradigm to enhance reasoning through collaborative generation. However, current multi-agent approaches suffer from two critical deficiencies: susceptibility to noise propagation and constrained reasoning-space exploration. We propose VeriMoA, a training-free mixture-of-agents (MoA) framework with two synergistic innovations. First, a quality-guided caching mechanism maintains all intermediate HDL outputs and enables quality-based ranking and selection across the entire generation process, encouraging knowledge accumulation over layers of reasoning. Second, a multi-path generation strategy leverages C++ and Python as intermediate representations, decomposing specification-to-HDL translation into two-stage processes that exploit LLM fluency in high-resource languages while promoting solution diversity. Comprehensive experiments on the VerilogEval 2.0 and RTLLM 2.0 benchmarks demonstrate that VeriMoA achieves 15–30% improvements in Pass@1 across diverse LLM backbones, notably enabling smaller models to match larger and fine-tuned alternatives without requiring costly training.
Problem

Research questions and friction points this paper is trying to address.

Automating Register Transfer Level design using Large Language Models
Overcoming noise propagation in multi-agent hardware generation systems
Enhancing reasoning space exploration for Hardware Description Language generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Quality-guided caching mechanism for HDL ranking
Multi-path generation using C++ and Python intermediates
Training-free mixture-of-agents framework for HDL generation
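The multi-path generation idea listed above can be sketched as a set of translation routes from specification to Verilog. This is a hedged illustration, assuming a generic `llm` callable (prompt in, text out) and prompt wording of my own; the paper's actual prompts, path set, and direct-generation baseline may differ. What it shows is the decomposition: instead of one spec-to-HDL jump, high-resource intermediates (C++ and Python) split translation into two easier stages, and each path contributes a distinct candidate.

```python
def multi_path_generate(spec, llm):
    """Produce HDL candidates via several translation paths (sketch).

    `llm` is a hypothetical callable: prompt string -> generated text.
    Paths: spec -> C++ -> Verilog, spec -> Python -> Verilog, and a
    direct spec -> Verilog path for contrast.
    """
    candidates = []
    for lang in ("C++", "Python", None):  # None marks the direct path
        if lang is None:
            hdl = llm(f"Translate this spec directly into Verilog:\n{spec}")
        else:
            # Stage 1: spec -> reference model in a high-resource language.
            ref = llm(f"Implement this spec as a {lang} reference model:\n{spec}")
            # Stage 2: reference model -> Verilog.
            hdl = llm(f"Translate this {lang} model into Verilog:\n{ref}")
        candidates.append(hdl)
    return candidates  # downstream, these would be quality-ranked and selected
```

In the full framework, the returned candidates would feed the quality-ranked selection step, so the diversity bought by multiple paths directly enlarges the explored solution space.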