The Graph's Apprentice: Teaching an LLM Low Level Knowledge for Circuit Quality Estimation

📅 2024-10-30
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the high computational cost and limited support for early-stage design iteration in HDL-level circuit quality assessment during logic synthesis, this paper proposes a multimodal quality estimation algorithm integrating large language models (LLMs) and graph neural networks (GNNs). The method incorporates GNN embeddings derived from LUT-level circuit graphs as a structural regularization term, jointly guiding the LLM to learn both high-level semantic features and low-level topological characteristics. It combines program-aware LLM fine-tuning, a lightweight regression head, and end-to-end multimodal joint training. Evaluated on the OpenABCD benchmark, the approach achieves millisecond-scale HDL quality prediction with a 22.6% reduction in average error compared to state-of-the-art RTL-level graph-based methods. To the authors' knowledge, this is the first technique enabling efficient and accurate quality feedback at the earliest stages of hardware design.

📝 Abstract
Logic synthesis is a crucial phase in the circuit design process, responsible for transforming hardware description language (HDL) designs into optimized netlists. However, traditional logic synthesis methods are computationally intensive, restricting their iterative use in refining chip designs. Recent advancements in large language models (LLMs), particularly those fine-tuned on programming languages, present a promising alternative. This work proposes augmenting LLMs with predictor networks trained to estimate circuit quality directly from HDL code. To enhance performance, the model is regularized using embeddings from graph neural networks (GNNs) trained on Look-Up Table (LUT) graphs, thereby incorporating lower-level circuit insights. The proposed method demonstrates superior performance compared to existing graph-based RTL-level estimation techniques on the established benchmark OpenABCD, while providing instant feedback on HDL code quality.
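The training objective described in the abstract can be illustrated with a minimal sketch: a lightweight regression head predicts a quality metric from the LLM's embedding of the HDL code, while a regularizer pulls a projection of that embedding toward the GNN embedding of the corresponding LUT graph. Everything below is an illustrative assumption (a linear head, a linear projection, toy dimensions, the name `joint_loss`), not the paper's actual architecture.

```python
import random

# Toy embeddings standing in for the LLM's encoding of the HDL source
# and the GNN's encoding of the LUT-level circuit graph.
random.seed(0)
D_LLM, D_GNN = 16, 4  # assumed embedding sizes (real models are far larger)

h_llm = [random.gauss(0, 1) for _ in range(D_LLM)]  # HDL-code embedding
h_gnn = [random.gauss(0, 1) for _ in range(D_GNN)]  # LUT-graph embedding

w_head = [random.gauss(0, 0.1) for _ in range(D_LLM)]   # linear regression head
w_proj = [[random.gauss(0, 0.1) for _ in range(D_LLM)]  # linear projection into
          for _ in range(D_GNN)]                        # the GNN embedding space

def joint_loss(h_llm, h_gnn, y_true, lam=0.1):
    """Squared regression error plus GNN-embedding regularization term."""
    y_pred = sum(w * h for w, h in zip(w_head, h_llm))          # quality estimate
    proj = [sum(w * h for w, h in zip(row, h_llm)) for row in w_proj]
    reg = sum((p - g) ** 2 for p, g in zip(proj, h_gnn))        # structural term
    return (y_pred - y_true) ** 2 + lam * reg

loss = joint_loss(h_llm, h_gnn, y_true=1.0)
```

In joint training, gradients from both terms would flow back into the LLM, so the code embedding is shaped by low-level circuit structure as well as the regression target.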
Problem

Research questions and friction points this paper is trying to address.

Traditional logic synthesis is computationally intensive, restricting its iterative use in refining chip designs.
Designers lack fast quality feedback on HDL code at the earliest stages of hardware design.
Can lower-level circuit structure inform quality estimates made directly from HDL?
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLMs augmented with predictor networks
GNN embeddings for regularization
Direct HDL code quality estimation