Paradigm-Based Automatic HDL Code Generation Using LLMs

📅 2025-01-22
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Large language models (LLMs) frequently generate Verilog code with hallucinations and high functional error rates. Method: This paper proposes a paradigm-block-driven, two-stage, multi-round generation framework. First, circuits are classified by type to retrieve human-expert-designed paradigm blocks—comprising information extraction, human-like design workflows, and EDA tool integration. Then, a closed-loop iterative process (“generate → simulate → feedback → correct”) refines the code within a bounded number of rounds. Contributions/Results: (1) It introduces the first LLM instruction framework guided by hardware design paradigm blocks; (2) it establishes a verifiable two-stage generation-and-verification mechanism. Experiments demonstrate a significant improvement in testbench pass rate, effectively mitigating semantic distortion and structural hallucination in LLM-generated HDL code.
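The two-stage flow described above can be sketched as a small driver loop. This is a toy illustration only, not the authors' code: every function name, the circuit-type table, and the stub "LLM" and "simulator" below are hypothetical stand-ins for the paper's paradigm-block retrieval, LLM generation, and EDA-tool simulation.

```python
# Sketch of the paradigm-block-driven, bounded-round closed loop:
# generate -> simulate -> feedback -> correct. All names are hypothetical.

def classify_circuit(spec: str) -> str:
    """Stage 1a: classify the circuit type from the spec (toy heuristic)."""
    return "sequential" if "clock" in spec.lower() else "combinational"

def retrieve_paradigm_block(circuit_type: str) -> list[str]:
    """Stage 1b: fetch the expert-designed paradigm block for that type."""
    blocks = {
        "combinational": ["extract_io", "write_assign_logic"],
        "sequential": ["extract_io", "define_clock_reset", "write_always_block"],
    }
    return blocks[circuit_type]

def generate_hdl(spec: str, steps: list[str], feedback: str) -> str:
    """Stand-in for the LLM call that executes the paradigm-block steps."""
    code = f"// steps: {','.join(steps)}\nmodule top; endmodule"
    return code + (f"\n// corrected per feedback: {feedback}" if feedback else "")

def simulate(code: str) -> tuple[bool, str]:
    """Stand-in for EDA-tool simulation against a testbench."""
    passed = "corrected" in code  # toy: first round fails, a corrected round passes
    return passed, "" if passed else "assertion failed at t=10"

def closed_loop_generate(spec: str, max_rounds: int = 3) -> tuple[str, bool]:
    """Stage 2: iterate generation and verification within a round budget."""
    steps = retrieve_paradigm_block(classify_circuit(spec))
    feedback, code = "", ""
    for _ in range(max_rounds):
        code = generate_hdl(spec, steps, feedback)
        passed, feedback = simulate(code)
        if passed:
            return code, True
    return code, False
```

In this toy run, `closed_loop_generate("a counter with clock and reset")` fails its first simulated round, feeds the error message back, and passes on the second round; with `max_rounds=1` it returns with a failure flag, mirroring the paper's bounded-round design.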

📝 Abstract
While large language models (LLMs) have demonstrated the ability to generate hardware description language (HDL) code for digital circuits, they still face the hallucination problem, which can result in the generation of incorrect HDL code or misinterpretation of specifications. In this work, we introduce a human-expert-inspired method to mitigate the hallucination of LLMs and enhance their performance in HDL code generation. We begin by constructing specialized paradigm blocks that consist of several steps designed to divide and conquer generation tasks, mirroring the design methodology of human experts. These steps include information extraction, human-like design flows, and the integration of external tools. LLMs are then instructed to classify the type of circuit in order to match it with the appropriate paradigm block, and to execute the block to generate the HDL code. Additionally, we propose a two-phase workflow for multi-round generation, aimed at effectively improving the testbench pass rate of the generated HDL code within a limited number of generation and verification rounds. Experimental results demonstrate that our method significantly enhances the functional correctness of the generated Verilog code.

Problem

Research questions and friction points this paper is trying to address.

Large Language Models
Hardware Description Language
Accuracy Improvement
Innovation

Methods, ideas, or system contributions that make the work stand out.

Expert-guided Approach
Modular Methodology
Two-stage Process Optimization