🤖 AI Summary
Large language models (LLMs) face two key bottlenecks in automated heuristic design (AHD) for evolutionary computation: over-reliance on static operators and inability to accumulate domain-specific knowledge across optimization runs.
Method: We propose HiFo-Prompt, a dual-phase collaborative prompting framework that pairs foresight with hindsight. It leverages population-dynamics analysis to drive adaptive prompt engineering, and it integrates an experience-distillation mechanism that condenses historically successful strategies into transferable, general-purpose heuristic design principles, enabling the LLM to continuously improve itself during the search.
Contribution/Results: The resulting knowledge-accumulation system effectively balances exploration and exploitation, overcoming the limitations of fixed operators and mitigating catastrophic forgetting. Empirical evaluation demonstrates that the approach generates superior heuristics with significantly fewer LLM queries, achieving faster convergence and up to several-fold improvement in query efficiency over baseline methods.
📝 Abstract
LLM-based Automatic Heuristic Design (AHD) within Evolutionary Computation (EC) frameworks has shown promising results. However, its effectiveness is hindered by the use of static operators and the lack of knowledge accumulation mechanisms. We introduce HiFo-Prompt, a framework that guides LLMs with two synergistic prompting strategies: Foresight and Hindsight. Foresight-based prompts adaptively steer the search based on population dynamics, managing the exploration-exploitation trade-off. In addition, hindsight-based prompts mimic human expertise by distilling successful heuristics from past generations into fundamental, reusable design principles. This dual mechanism transforms transient discoveries into a persistent knowledge base, enabling the LLM to learn from its own experience. Empirical results demonstrate that HiFo-Prompt significantly outperforms state-of-the-art LLM-based AHD methods, generating higher-quality heuristics while achieving substantially faster convergence and superior query efficiency.
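The loop the abstract describes can be sketched in miniature. The code below is a toy illustration, not the paper's implementation: a "heuristic" is just a number whose quality is its value, `llm_generate` is a stub standing in for a real LLM query, and the function names (`foresight_mode`, `distill_insight`) are hypothetical. It only shows the two prompting phases interacting: a foresight step that picks an explore/exploit prompt regime from population statistics, and a hindsight step that distills the best result of each generation into a capped pool of reusable "principles" fed back into later prompts.

```python
import random
import statistics

random.seed(0)

def llm_generate(prompt: str, parent: float) -> float:
    """Stub for an LLM call that proposes a new heuristic from a parent.

    In this toy, an 'explore' prompt yields larger random jumps than
    an 'exploit' prompt; a real system would query an LLM API here.
    """
    step = 1.0 if "explore" in prompt else 0.1
    return parent + random.uniform(-step, 2 * step)

def foresight_mode(population: list[float]) -> str:
    """Foresight: choose the prompt regime from population dynamics.

    Low diversity (small spread of scores) suggests the search has
    converged, so push exploration; otherwise keep refining.
    """
    if len(population) > 1 and statistics.pstdev(population) < 0.5:
        return "explore"
    return "exploit"

def distill_insight(best: float, pool: list[str], cap: int = 5) -> None:
    """Hindsight: record the current best as a reusable 'principle'.

    Here a principle is just a string; the pool is capped so only the
    most recent distilled knowledge is carried forward.
    """
    pool.append(f"best-so-far scored {best:.2f}")
    del pool[:-cap]

def evolve(generations: int = 20, pop_size: int = 6):
    """Run the dual-phase loop and return the best heuristic + insights."""
    population = [random.random() for _ in range(pop_size)]
    insights: list[str] = []
    for _ in range(generations):
        mode = foresight_mode(population)
        # Both phases shape the prompt: regime (foresight) + principles (hindsight).
        prompt = f"[{mode}] known principles: {insights}"
        child = llm_generate(prompt, max(population))
        population.sort()
        population[0] = child  # replace the worst heuristic
        distill_insight(max(population), insights)
    return max(population), insights
```

The key design point mirrored here is that the insight pool persists across generations, so a discovery made early in the run still shapes prompts long after the individual that produced it has been replaced.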