Faver: Boosting LLM-based RTL Generation with Function Abstracted Verifiable Middleware

📅 2025-10-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
LLMs face three core challenges in generating RTL code: (1) a substantial semantic gap between high-level specifications and RTL, (2) scarcity of high-quality training data, and (3) difficulty modeling fundamental hardware-software discrepancies in temporal/spatial granularity and low-level implementation details. To address these, we propose Faver, a function-abstracted verifiable middleware that integrates LLM-friendly structured code, rule-based templates, and function-level abstraction. Faver decouples functional implementation from circuit-verification logic, letting LLMs focus on mapping high-level semantics to circuit behavior. By bridging high-level semantics and low-level hardware structure, Faver significantly reduces the complexity of RTL generation and verification. Experiments with a supervised fine-tuned (SFT) model and multiple open-source LLMs demonstrate up to a 14% absolute improvement in RTL generation accuracy.

📝 Abstract
LLM-based RTL generation is a promising research direction, as it holds the potential to automate the least-automated stage of current chip design. However, due to the substantial semantic gap between high-level specifications and RTL, coupled with limited training data, existing models struggle with generation accuracy. Drawing on human experience, designing with verification in the loop helps improve accuracy. However, because RTL testbench data are even scarcer than RTL code, this approach is not LLM-friendly. Although LLMs excel at higher-level languages like Python and C, a large semantic gap separates those languages from RTL: when implementing the same functionality, Python/C code and hardware code differ significantly in spatiotemporal granularity, so the LLM must consider not only high-level functional semantics but also whether the low-level details align with the circuit code, which is not an easy task. In this paper, we propose a function-abstracted verifiable middleware (Faver) that streamlines RTL verification in LLM-based workflows. By mixing LLM-friendly code structures with a rule-based template, Faver decouples the details of circuit verification, allowing the LLM to focus on the functionality itself. In our experiments on an SFT model and open-source models, Faver improved generation accuracy by up to 14%.
Problem

Research questions and friction points this paper is trying to address.

Bridging semantic gap between high-level specifications and RTL generation
Addressing limited training data for RTL testbench verification
Improving LLM-based hardware generation accuracy through abstracted middleware
Innovation

Methods, ideas, or system contributions that make the work stand out.

Function abstracted verifiable middleware streamlines RTL verification
Mixes LLM-friendly code with rule-based template
Decouples circuit verification details from functionality generation
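The decoupling idea above can be illustrated with a rough sketch. The paper's actual middleware is not shown on this page, so all names below are hypothetical: a pure functional model (the only part the LLM would need to write) is checked against a clocked design-under-test by a rule-based harness template that absorbs cycle-level details such as register latency.

```python
# Hedged sketch of function/verification decoupling (all names hypothetical;
# this is an illustration of the idea, not Faver's actual implementation).

def func_model(a: int, b: int) -> int:
    """Pure functional spec: 8-bit saturating add. The LLM's only job."""
    return min(a + b, 255)

class RegisteredAdderDUT:
    """Stand-in for generated RTL: output is registered (1-cycle latency)."""
    def __init__(self):
        self._next = 0
        self.out = 0

    def clock(self, a: int, b: int) -> None:
        self.out = self._next          # registered output updates first
        self._next = min(a + b, 255)   # combinational logic feeds the register

def run_template(model, dut, stimuli):
    """Rule-based harness: drives clock cycles and absorbs the latency gap,
    so the functional model never deals with clocks or registers."""
    pending = None  # expected value computed in the previous cycle
    for a, b in stimuli:
        dut.clock(a, b)
        if pending is not None and dut.out != pending:
            return False
        pending = model(a, b)
    return True

ok = run_template(func_model, RegisteredAdderDUT(), [(1, 2), (200, 100), (5, 5)])
print(ok)  # → True
```

The design point this sketch mirrors is that the spatiotemporal gap (here, the one-cycle output delay) lives entirely in the reusable template, so the model-facing code stays at the level of high-level functional semantics.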
Jianan Mu
Institute of Computing Technology, State Key Laboratory of Processors (SKLP), CAS
Design Automation · Accelerator · Privacy-Preserving Computing
Mingyu Shi
Nanjing University, School of Integrated Circuits
Yining Wang
University of Toronto
Tianmeng Yang
Baidu ERNIE, Peking University
LLM · RL · Machine Learning · Data Mining
Bin Sun
State Key Laboratory of Processors, Institute of Computing Technology, Chinese Academy of Sciences
Xing Hu
State Key Laboratory of Processors, Institute of Computing Technology, Chinese Academy of Sciences
Jing Ye
State Key Laboratory of Processors, Institute of Computing Technology, Chinese Academy of Sciences
Huawei Li
Institute of Computing Technology, Chinese Academy of Sciences
Computer Engineering