🤖 AI Summary
To address the challenges of high-dimensional process parameter tuning and complex multi-objective trade-offs in integrated circuit design, this paper proposes an iterative optimization agent powered by large language models (LLMs). The method introduces a novel, modular, model-agnostic LLM tool-calling architecture that enables natural-language-driven, concurrent PPA (power, performance, area) optimization without LLM fine-tuning. By integrating tool invocation, incremental parameter exploration, and multi-objective optimization strategies, the approach achieves over 13% improvement in both routed wirelength and effective clock period across two technology nodes and multiple circuit benchmarks, while reducing iteration count by 40%. This significantly enhances automation and generalization capabilities within open-source chip design flows.
📝 Abstract
Machine learning has been widely used to optimize complex engineering workflows across numerous domains. In the context of integrated circuit design, modern flows (e.g., going from a register-transfer-level netlist to physical layouts) involve extensive configuration via thousands of parameters, and small changes to these parameters can have large downstream impacts on desired outcomes, namely design performance, power, and area. Recent advances in Large Language Models (LLMs) offer new opportunities for learning and reasoning within such high-dimensional optimization tasks. In this work, we introduce ORFS-agent, an LLM-based iterative optimization agent that automates parameter tuning in an open-source hardware design flow. ORFS-agent adaptively explores parameter configurations, demonstrating clear improvements over standard Bayesian optimization approaches in terms of resource efficiency and final design metrics. Our empirical evaluations on two different technology nodes and a range of circuit benchmarks indicate that ORFS-agent can improve both routed wirelength and effective clock period by over 13%, all while using 40% fewer optimization iterations. Moreover, by following natural-language objectives to trade off certain metrics for others, ORFS-agent provides a flexible and interpretable framework for multi-objective optimization. Crucially, ORFS-agent is modular and model-agnostic, and can be plugged into any frontier LLM without any further fine-tuning.
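The iterative loop the abstract describes (an agent proposes a parameter configuration, the flow is run to obtain metrics, and the best configuration seen so far is retained) can be sketched in miniature. This is an illustrative sketch only, not the authors' implementation: the LLM proposer and the design flow are both stubbed with toy functions, and the parameter names, parameter space, and metric model are all hypothetical.

```python
import random

# Hypothetical parameter space for a physical design flow; names are
# illustrative, not taken from any real tool's configuration.
PARAM_SPACE = {
    "CORE_UTILIZATION": [40, 50, 60, 70],  # target placement utilization, %
    "PLACE_DENSITY": [0.5, 0.6, 0.7],      # global placement density target
    "FLATTEN": [0, 1],                     # synthesis flattening on/off
}

def run_flow(config):
    """Stub for a full flow run: returns mock (wirelength, clock period)."""
    wl = 2_000_000 * config["PLACE_DENSITY"] / (config["CORE_UTILIZATION"] / 50)
    clk = 1.5 - 0.2 * config["FLATTEN"] + 0.004 * config["CORE_UTILIZATION"]
    return wl, clk

def propose(history, rng):
    """Stub for the LLM proposer. Here it samples randomly; a real agent
    would condition on `history` (prior configs and their metrics) via a
    natural-language prompt and tool calls."""
    return {k: rng.choice(v) for k, v in PARAM_SPACE.items()}

def optimize(iterations=20, clk_weight=1e6, seed=0):
    """Iterate propose -> evaluate -> keep-best on a scalarized objective."""
    rng = random.Random(seed)
    history, best = [], None
    for _ in range(iterations):
        cfg = propose(history, rng)
        wl, clk = run_flow(cfg)
        score = wl + clk_weight * clk  # simple multi-objective scalarization
        history.append((cfg, wl, clk, score))
        if best is None or score < best[3]:
            best = (cfg, wl, clk, score)
    return best

best_cfg, best_wl, best_clk, best_score = optimize()
```

The scalarized score stands in for the natural-language trade-off the abstract mentions: changing `clk_weight` (or, in the real system, the stated objective) shifts which configurations the loop favors.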