LLM-DSE: Searching Accelerator Parameters with LLM Agents

📅 2025-05-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address poor adaptability and low sample efficiency in hardware directive parameter optimization for high-level synthesis (HLS), this paper proposes the first large language model (LLM)-driven multi-agent collaborative framework. The framework comprises four specialized agent roles—Router, Specialist, Arbitrator, and Critic—that jointly enable online, interactive knowledge evolution via a "verbal learning" mechanism. It integrates tool-augmented LLMs with design space exploration (DSE) to balance generalization capability and sample efficiency. Evaluated on the HLSyn dataset, the method achieves a 2.55× performance improvement over baselines, discovers novel, highly efficient domain-specific accelerator (DSA) architectures, and significantly reduces optimization latency. Ablation studies confirm the critical contributions of role specialization and dynamic inter-agent interaction to overall efficacy.

📝 Abstract
Even though high-level synthesis (HLS) tools mitigate the challenges of programming domain-specific accelerators (DSAs) by raising the abstraction level, optimizing hardware directive parameters remains a significant hurdle. Existing heuristic and learning-based methods struggle with adaptability and sample efficiency. We present LLM-DSE, a multi-agent framework designed specifically for optimizing HLS directives. Combining LLMs with design space exploration (DSE), our explorer coordinates four agents: Router, Specialists, Arbitrator, and Critic. These multi-agent components interact with various tools to accelerate the optimization process. LLM-DSE leverages essential domain knowledge to identify efficient parameter combinations while maintaining adaptability through verbal learning from online interactions. Evaluations on the HLSyn dataset demonstrate that LLM-DSE achieves substantial 2.55× performance gains over state-of-the-art methods, uncovering novel designs while reducing runtime. Ablation studies validate the effectiveness and necessity of the proposed agent interactions. Our code is open-sourced here: https://github.com/Nozidoali/LLM-DSE.
Problem

Research questions and friction points this paper is trying to address.

Optimizing hardware directive parameters for domain-specific accelerators
Improving adaptability and sample efficiency in design space exploration
Achieving performance gains over state-of-the-art methods
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-agent framework optimizes HLS directives
LLM combines with design space exploration
Agents interact to accelerate optimization process
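The four-role loop described above can be sketched in a few dozen lines. This is a hypothetical illustration, not the paper's implementation: the agent "LLM calls" are stubbed with simple heuristics, and the directive space, the toy latency model, and all function names are assumptions made for the sake of a runnable example.

```python
# Hypothetical sketch of a Router/Specialist/Arbitrator/Critic DSE loop.
# All names and the toy latency model are illustrative assumptions,
# NOT the paper's actual code (see https://github.com/Nozidoali/LLM-DSE).

# Illustrative HLS directive space: each directive family and its legal values.
DIRECTIVES = {"unroll": [1, 2, 4, 8], "pipeline": [0, 1], "tile": [1, 4, 16]}

def evaluate(design):
    # Stand-in for an HLS tool run: a toy latency model (lower is better).
    return 1000 // (design["unroll"] * design["tile"]) - 50 * design["pipeline"]

def router(history):
    # Router: pick the directive family that has been explored the least,
    # based on the Critic's accumulated verbal feedback.
    return min(DIRECTIVES, key=lambda d: sum(n.startswith(f"tuned {d}") for n in history))

def specialists(directive, design):
    # Specialists: propose one candidate design per legal value of the directive.
    return [dict(design, **{directive: v}) for v in DIRECTIVES[directive]]

def arbitrator(candidates):
    # Arbitrator: commit to the single most promising candidate.
    return min(candidates, key=evaluate)

def critic(directive, latency, history):
    # Critic: record verbal feedback that steers the Router in later iterations.
    history.append(f"tuned {directive}: latency {latency}")

def llm_dse(iterations=5):
    design = {"unroll": 1, "pipeline": 0, "tile": 1}
    history = []
    for _ in range(iterations):
        directive = router(history)
        design = arbitrator(specialists(directive, design))
        critic(directive, evaluate(design), history)
    return design, evaluate(design)

best, latency = llm_dse()
print(best, latency)
```

In this sketch the "verbal learning" is reduced to a list of feedback strings that the Router consults; in the actual framework each role is a tool-augmented LLM agent and the feedback is natural-language reflection, which is what gives the method its adaptability.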