LIFT: LLM-Based Pragma Insertion for HLS via GNN Supervised Fine-Tuning

📅 2025-04-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
Manual insertion of performance-oriented pragmas in FPGA high-level synthesis (HLS) relies heavily on expert knowledge, hindering automation and scalability. Method: This paper proposes an LLM-GNN collaborative supervised fine-tuning framework that enables end-to-end automatic generation of optimized pragmas from C/C++ code. It jointly models control- and data-dependence graph structures with semantic textual features, and employs instruction-level fine-grained supervision to precisely localize optimization opportunities and generate hardware-constrained pragmas. Contribution/Results: Evaluated on multiple HLS benchmarks, the method achieves average speedups of 3.52×, 2.16×, and 66× over AutoDSE, HARP, and GPT-4o, respectively, significantly outperforming state-of-the-art automated design space exploration (DSE) approaches. To the best of the authors' knowledge, this is the first pragma generation paradigm that deeply integrates structural awareness with linguistic understanding for HLS optimization.

📝 Abstract
FPGAs are increasingly adopted in datacenter environments for their reconfigurability and energy efficiency. High-Level Synthesis (HLS) tools have eased FPGA programming by raising the abstraction level from RTL to untimed C/C++, yet attaining high performance still demands expert knowledge and iterative manual insertion of optimization pragmas to modify the microarchitecture. To address this challenge, we propose LIFT, a large language model (LLM)-based coding assistant for HLS that automatically generates performance-critical pragmas given a C/C++ design. We fine-tune the LLM by tightly integrating and supervising the training process with a graph neural network (GNN), combining the sequential modeling capabilities of LLMs with the structural and semantic understanding of GNNs necessary for reasoning over code and its control/data dependencies. On average, LIFT produces designs that improve performance by 3.52x and 2.16x over the prior state-of-the-art AutoDSE and HARP respectively, and by 66x over GPT-4o.
Problem

Research questions and friction points this paper is trying to address.

Automating pragma insertion for HLS performance optimization
Combining LLMs and GNNs for code structure understanding
Improving FPGA design efficiency over manual and existing methods
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-based automatic pragma insertion for HLS
GNN-supervised fine-tuning for code understanding
Combines LLM sequential modeling with GNN structural analysis