🤖 AI Summary
This work addresses the inefficiency and poor robustness of large language models (LLMs) in automatically generating algorithms for black-box optimization. To overcome these limitations, the authors propose integrating well-established benchmark algorithms as strong prior knowledge into prompt design. They further employ token-level attribution analysis to reveal the critical role that high-quality code examples play in generation performance. The proposed method achieves significant improvements on two major black-box optimization benchmarks, PBO and BBOB, outperforming existing automated algorithm design techniques. These results demonstrate that combining prior-guided prompting with careful prompt engineering can substantially boost the algorithmic reasoning capabilities of LLMs in optimization tasks.
📝 Abstract
Large Language Models (LLMs) have been widely adopted for automated algorithm design, demonstrating strong abilities in generating and evolving algorithms across various fields. Existing work has largely focused on examining their effectiveness in solving specific problems, with search strategies primarily guided by adaptive prompt designs. In this paper, by investigating the token-wise attribution of prompts to LLM-generated algorithmic code, we show that providing high-quality algorithmic code examples can substantially improve the performance of LLM-driven optimization. Building on this insight, we propose leveraging prior benchmark algorithms to guide LLM-driven optimization and demonstrate superior performance on two black-box optimization benchmarks: the pseudo-Boolean optimization suite (PBO) and the black-box optimization suite (BBOB). Our findings highlight the value of integrating benchmarking studies to enhance both the efficiency and the robustness of LLM-driven black-box optimization methods.