🤖 AI Summary
This paper addresses the limited robustness of conventional heuristic algorithms by proposing a novel design paradigm leveraging large language models (LLMs). Methodologically, it integrates failure-case root-cause analysis, input-space regional specialization modeling, and iterative feedback-driven optimization—enabling the LLM not only to diagnose performance bottlenecks but also to synthesize domain-specific heuristics tailored to distinct input regions. The key contribution lies in elevating the LLM from a prompt-engineering tool to an interpretable, domain-aware algorithmic co-designer. Experimental results demonstrate that the LLM-generated heuristics achieve approximately 28× better worst-case performance than FunSearch, while also improving average-case performance and preserving computational efficiency. Consequently, the approach significantly enhances both robustness and generalization across diverse input distributions.
📝 Abstract
We posit that we can generate more robust and performant heuristics if we augment LLM-based heuristic design with tools that explain why heuristics underperform and suggest how to fix them. We find that even simple ideas that (1) expose the LLM to instances where the heuristic underperforms; (2) explain why these failures occur; and (3) specialize the design to regions of the input space can produce more robust algorithms than existing techniques -- the heuristics we produce have a $\sim 28\times$ better worst-case performance compared to FunSearch, improve average performance, and maintain the runtime.
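The three-step loop sketched in the abstract (expose failures, explain them, specialize by input region) can be illustrated in miniature. The sketch below is not the paper's method: the candidate pool stands in for LLM-proposed heuristics, the target function and the 1-D region split are toy assumptions chosen only to make the selection logic concrete.

```python
# Toy sketch of failure-driven, region-specialized heuristic selection.
# "candidates" stands in for heuristics an LLM might propose; all names
# and the sqrt(x) target objective are illustrative assumptions.

def evaluate(h, x):
    # Lower is better: absolute error against a toy target, sqrt(x).
    return abs(h(x) - x ** 0.5)

def worst_case(h, instances):
    # Robustness metric: the worst error over a set of instances.
    return max(evaluate(h, x) for x in instances)

# Hypothetical LLM-proposed candidate heuristics.
candidates = [
    lambda x: x / 2,        # accurate near x = 4, diverges for large x
    lambda x: x ** 0.45,    # close to sqrt(x) across a wide range
    lambda x: 1 + x / 10,   # rough linear guess
]

instances = [0.5, 1, 2, 4, 16, 64, 256]

# Steps (1)-(2): pick the best single global heuristic by worst case,
# then surface the instances where it fails hardest.
global_h = min(candidates, key=lambda h: worst_case(h, instances))
failures = sorted(instances, key=lambda x: evaluate(global_h, x))[-3:]

# Step (3): specialize by input region -- here a simple 1-D split at a
# failure point, with the best candidate re-selected per region.
boundary = failures[len(failures) // 2]
low = [x for x in instances if x < boundary]
high = [x for x in instances if x >= boundary]
h_low = min(candidates, key=lambda h: worst_case(h, low))
h_high = min(candidates, key=lambda h: worst_case(h, high))

def specialized(x):
    # Dispatch to the region-specific heuristic.
    return h_low(x) if x < boundary else h_high(x)
```

Because each region's heuristic is chosen to minimize the worst case within that region, the specialized dispatcher can never have a worse overall worst case than the single global pick; in the paper this per-region synthesis is driven by the LLM's root-cause explanations rather than a fixed candidate pool.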