🤖 AI Summary
This work addresses the challenge of multi-objective optimization—balancing timing, power, and area—in modern chip design, which traditionally relies heavily on expert intuition and struggles to achieve high quality of results (QoR) within tight design cycles. We propose the first LLM-based agent framework deeply integrated with EDA tools, leveraging retrieval-augmented generation to construct a natural language–driven optimization search tree. A novel language-based reflection mechanism, guided by Pareto-optimal QoR feedback, enables iterative refinement. By uniquely combining retrieval-augmented generation, linguistic reflection, and multi-objective feedback, our approach supports customizable, natural language–specified optimization schedules while drastically reducing manual intervention. Experimental results demonstrate that, compared to black-box methods such as reinforcement learning, our method achieves 10% better timing, lower power and area, and over 4× faster convergence, with QoR comparable to that of human experts.
📝 Abstract
Modern chip design requires multi-objective optimization of timing, power, and area under stringent time-to-market constraints. Although powerful optimization algorithms are integrated into EDA tools, achieving high QoR hinges on effective long-horizon scheduling, which relies heavily on manual expert intervention. To address this issue and automate chip design, we propose an agentic LLM framework that schedules chip optimizations through direct interaction with EDA tools. The agent is grounded in natural language expertise expressed as a search tree through retrieval-augmented generation (RAG). We further improve scheduling quality with Pareto-driven QoR feedback through language reflection. Experimental results show that, compared with black-box search methods such as reinforcement learning, our framework achieves 10% greater timing improvement while consuming less power and area, with more than 4× speedup. The post-optimization QoR is also comparable to that achieved by human experts. Finally, the agent supports customized tasks expressed in natural language, enabling preferential QoR trade-offs. The code and chip design data will be publicly available at https://github.com/YiKangOY/Open-LLM-ECO.