Automated Design Optimization via Strategic Search with Large Language Models

📅 2025-11-27
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
For optimization problems with design spaces that resist formal parameterization, this paper proposes the AUTO framework. It models design optimization as a strategy-guided, gradient-free search process and introduces a Strategist-Implementor dual-agent architecture that coordinates exploration and exploitation with adaptive search control. The framework combines large language models' semantic understanding and code-generation capabilities with a reinforcement learning-inspired policy control mechanism, supporting fine-grained, code-level design optimization. Evaluated on chemical kinetics integration and dense matrix multiplication, AUTO achieves expert-level performance. Its search efficiency reaches 50-70% of Bayesian optimization's, at an estimated cost of up to $159 per run and a runtime of approximately eight hours. Crucially, AUTO eliminates reliance on explicit parametric definitions, overcoming a fundamental limitation of conventional optimization methods.

📝 Abstract
Traditional optimization methods excel in well-defined search spaces but struggle with design problems where transformations and design parameters are difficult to define. Large language models (LLMs) offer a promising alternative by dynamically interpreting design spaces and leveraging encoded domain knowledge. To this end, we introduce AUTO, an LLM agent framework that treats design optimization as a gradient-free search problem guided by strategic LLM reasoning. The framework employs two collaborative agents: a Strategist that selects between exploration and exploitation strategies, and an Implementor that executes detailed designs. Applied to GPU code optimization -- a domain critical to fields from machine learning to scientific computing -- AUTO generates solutions competitive with expert implementations for chemical kinetics integration and dense matrix multiplication. The framework achieves 50-70% search efficiency relative to Bayesian optimization methodologies. It completes optimizations in approximately 8 hours at an estimated cost of up to $159 per run, compared to an estimated cost of up to $480 with median-wage software developers. These findings open the door to automating design optimization in ill-defined search spaces with limited prior information.
Problem

Research questions and friction points this paper is trying to address.

How to automate design optimization in search spaces that resist formal parameterization
Traditional optimization methods require well-defined transformations and design parameters
Whether LLM-guided strategic search can produce GPU code competitive with expert implementations
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM agent framework treating design optimization as gradient-free search
Dual-agent architecture: a Strategist chooses between exploration and exploitation, an Implementor executes detailed designs
Reaches 50-70% of Bayesian optimization's search efficiency on GPU code optimization
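The Strategist-Implementor split above can be pictured as a simple search loop. A minimal sketch, with stub functions standing in for the paper's LLM calls: `strategist`, `implementor`, `evaluate`, and `auto_search` are illustrative names, not the paper's API, and a single number stands in for a real GPU-code design.

```python
import random

random.seed(42)  # reproducible illustration

def strategist(history):
    """Pick 'explore' or 'exploit' from search history.
    In AUTO this decision is made by an LLM; here, a stub:
    exploit once a reasonable candidate exists, else explore."""
    if history and min(score for _, score in history) < 1.0:
        return "exploit"
    return "explore"

def implementor(strategy, best):
    """Produce a candidate design. In AUTO an LLM edits GPU code;
    here we perturb a numeric stand-in for a design."""
    if strategy == "exploit" and best is not None:
        return best + random.uniform(-0.1, 0.1)  # local refinement
    return random.uniform(-10, 10)               # broad exploration

def evaluate(candidate):
    """Stand-in objective to minimize (e.g. kernel runtime);
    here simply distance from an unknown optimum at 3.0."""
    return abs(candidate - 3.0)

def auto_search(budget=200):
    """Gradient-free search: the Strategist sets the mode each
    iteration, the Implementor proposes, the best design is kept."""
    history, best, best_score = [], None, float("inf")
    for _ in range(budget):
        strategy = strategist(history)
        candidate = implementor(strategy, best)
        score = evaluate(candidate)
        history.append((candidate, score))
        if score < best_score:
            best, best_score = candidate, score
    return best, best_score

best, score = auto_search()
```

The key design point this illustrates is the separation of concerns: the Strategist only decides *how* to search (explore vs. exploit), while the Implementor decides *what* to try, which is what lets the framework operate without an explicit parametric definition of the design space.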