AI Summary
This work addresses the low efficiency of manual pragma insertion and the difficulty of design-space exploration in high-level synthesis (HLS). We propose the first closed-loop intelligent agent framework for HLS powered by reasoning-oriented large language models (LLMs). Methodologically, it integrates Chain-of-Thought (CoT) prompting, feedback from the HLS toolchain (Vivado HLS), and an integer linear programming (ILP) solver to enable automated code refactoring, pragma generation, and co-optimization. Key contributions include: (i) the first application of CoT reasoning to HLS optimization; (ii) a feedback-driven, agent-based multi-step decision architecture; and (iii) empirical evidence of interpretable, stepwise reasoning traces in open-source reasoning LLMs (e.g., DeepSeek-R1) on hardware design tasks. Experiments across multiple benchmarks demonstrate significant improvements in optimization success rate and efficiency, achieving superior area-latency trade-offs. This work provides the first empirically validated AI-for-EDA framework grounded in LLM-based reasoning.
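The pragma-generation step described above can be illustrated as a source-to-source transformation. The sketch below is a toy stand-in: the paper's agent uses an LLM rather than string matching, and the `insert_unroll_pragma` helper and its regex are hypothetical, though the `#pragma HLS UNROLL` directive itself follows Vivado HLS conventions.

```python
import re

def insert_unroll_pragma(hls_source: str, loop_var: str, factor: int) -> str:
    """Insert a Vivado HLS-style UNROLL pragma inside the `for` loop that
    iterates over `loop_var`. A minimal sketch of the pragma-insertion step;
    the real agent also restructures code and checks the result against
    feedback from the HLS toolchain."""
    # Match the loop-header line, e.g. "for (int i = 0; i < 128; i++) {"
    pattern = re.compile(
        r"(^\s*for\s*\(\s*int\s+" + re.escape(loop_var) + r"\b.*$)",
        re.MULTILINE,
    )
    pragma = f"#pragma HLS UNROLL factor={factor}"
    # Place the pragma on the line after the loop header (i.e., as the first
    # line of the loop body), per Vivado HLS convention.
    return pattern.sub(lambda m: m.group(1) + "\n" + pragma, hls_source, count=1)

kernel = """void vec_add(int a[128], int b[128], int out[128]) {
    for (int i = 0; i < 128; i++) {
        out[i] = a[i] + b[i];
    }
}"""

annotated = insert_unroll_pragma(kernel, "i", 4)
```

In the real framework this transformation is proposed by the LLM and validated by synthesizing the annotated kernel, closing the loop via tool feedback.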
Abstract
Recent Large Language Models (LLMs) such as OpenAI o3-mini and DeepSeek-R1 employ enhanced reasoning through Chain-of-Thought (CoT). Their potential in hardware design, which relies on expert-driven iterative optimization, remains unexplored. This paper investigates whether reasoning LLMs can address the challenges of design space exploration and optimization in High-Level Synthesis (HLS). During HLS, engineers manually define pragmas/directives to balance performance and resource constraints. We propose an LLM-based agentic optimization framework that automatically restructures code, inserts pragmas, and identifies optimal design points via feedback from HLS tools and access to integer linear programming (ILP) solvers. Experiments compare reasoning models against conventional LLMs on benchmarks using success rate, efficiency, and design quality (area/latency) metrics, and provide the first-ever glimpse into the CoTs produced by a powerful open-source reasoning model, DeepSeek-R1.
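The design-point identification step can be sketched as choosing, among candidate pragma configurations, the lowest-latency one that fits a resource budget. The brute-force enumeration and the linear cost model below are invented for illustration (real area and latency figures come from HLS synthesis reports); a production flow would hand the same objective and constraint to an ILP solver instead.

```python
def select_design_point(trip_count: int, area_budget: int):
    """Pick the unroll factor minimizing latency subject to an area budget.
    The cost model is an assumed stand-in for HLS report data: area grows
    linearly with parallelism, latency shrinks with it plus a fixed overhead."""
    best = None
    for factor in (1, 2, 4, 8, 16):
        area = 10 * factor                   # assumed resource cost per parallel lane
        latency = trip_count // factor + 5   # assumed cycle count with fixed overhead
        if area <= area_budget and (best is None or latency < best[2]):
            best = (factor, area, latency)
    return best  # (unroll_factor, area, latency), or None if nothing fits

# Example: a 128-iteration loop under an area budget of 60 units.
factor, area, latency = select_design_point(trip_count=128, area_budget=60)
```

With an ILP solver, the enumeration is replaced by integer decision variables over the unroll factors, a linear latency objective, and a linear area constraint, which is what allows the framework to scale past toy design spaces.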