LLaMEA-SAGE: Guiding Automated Algorithm Design with Structural Feedback from Explainable AI

📅 2026-01-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work proposes a novel approach to automated algorithm design with large language models (LLMs) that overcomes the limitations of existing methods relying solely on performance feedback. By analyzing the abstract syntax trees of generated algorithms, the method extracts graph-theoretic and complexity-related structural features. These features are integrated with explainable AI techniques to identify key factors influencing algorithmic performance, which are then translated into natural-language mutation instructions to guide the LLM in efficiently evolving high-performing algorithms. This is the first framework to combine code structural characteristics with explainable AI to produce human-interpretable guidance signals, significantly enhancing evolutionary efficiency without sacrificing expressive power. Experimental results demonstrate that the approach achieves target performance faster on small-scale problems and outperforms state-of-the-art methods on the large-scale GECCO-MA-BBOB benchmark suite.
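The structural feature extraction described above can be sketched with Python's standard `ast` module. This is an illustrative proxy only: the function name and the particular graph-theoretic features (tree size, depth, branch count) are assumptions, not the paper's actual feature set.

```python
import ast

def ast_structural_features(source: str) -> dict:
    """Extract simple graph-theoretic / complexity features from a
    program's abstract syntax tree (illustrative proxies only)."""
    tree = ast.parse(source)
    nodes = list(ast.walk(tree))

    def depth(node: ast.AST) -> int:
        # Longest root-to-leaf path in the AST.
        children = list(ast.iter_child_nodes(node))
        return 1 + max((depth(c) for c in children), default=0)

    # Branching constructs serve as a rough cyclomatic-complexity proxy.
    branch_types = (ast.If, ast.For, ast.While, ast.Try, ast.BoolOp)
    return {
        "num_nodes": len(nodes),                                      # tree size
        "max_depth": depth(tree),                                     # nesting depth
        "num_branches": sum(isinstance(n, branch_types) for n in nodes),
        "num_functions": sum(isinstance(n, ast.FunctionDef) for n in nodes),
    }
```

Feature vectors of this kind, computed for every generated algorithm in the archive, are what a surrogate model can then relate to observed performance.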

📝 Abstract
Large language models have enabled automated algorithm design (AAD) by generating optimization algorithms directly from natural-language prompts. While evolutionary frameworks such as LLaMEA demonstrate strong exploratory capabilities across the algorithm design space, their search dynamics are entirely driven by fitness feedback, leaving substantial information about the generated code unused. We propose a mechanism for guiding AAD using feedback constructed from graph-theoretic and complexity features extracted from the abstract syntax trees of the generated algorithms, based on a surrogate model learned over an archive of evaluated solutions. Using explainable AI techniques, we identify features that substantially affect performance and translate them into natural-language mutation instructions that steer subsequent LLM-based code generation without restricting expressivity. We propose LLaMEA-SAGE, which integrates this feature-driven guidance into LLaMEA, and evaluate it across several benchmarks. We show that the proposed structured guidance achieves the same performance faster than vanilla LLaMEA in a small controlled experiment. In a larger-scale experiment using the MA-BBOB suite from the GECCO-MA-BBOB competition, our guided approach achieves superior performance compared to state-of-the-art AAD methods. These results demonstrate that signals derived from code can effectively bias LLM-driven algorithm evolution, bridging the gap between code structure and human-understandable performance feedback in automated algorithm design.
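The last step of the pipeline, translating surrogate-model feature attributions into natural-language mutation instructions for the LLM, could look roughly like the following. All names and the instruction phrasing are hypothetical; the paper's actual prompt templates are not reproduced here.

```python
def guidance_from_attributions(attributions: dict[str, float],
                               current: dict[str, float],
                               top_k: int = 2) -> str:
    """Turn per-feature attribution scores (e.g. from an XAI method over
    the surrogate model) into a natural-language hint for the LLM."""
    # Rank features by the magnitude of their estimated effect on fitness.
    ranked = sorted(attributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    hints = []
    for feature, effect in ranked[:top_k]:
        direction = "increase" if effect > 0 else "decrease"
        hints.append(f"try to {direction} the code's {feature} "
                     f"(currently {current[feature]:.0f})")
    return "When mutating the algorithm, " + " and ".join(hints) + "."
```

Because the guidance is expressed as free-form text appended to the mutation prompt rather than as a hard constraint, it biases generation without restricting the LLM's expressivity, which is the property the abstract emphasizes.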
Problem

Research questions and friction points this paper is trying to address.

automated algorithm design
large language models
code structure
explainable AI
algorithm evolution
Innovation

Methods, ideas, or system contributions that make the work stand out.

Explainable AI
Automated Algorithm Design
Abstract Syntax Tree
Large Language Models
Structural Feedback
N. V. Stein
LIACS, Leiden University, Netherlands

Anna V. Kononova
LIACS, Leiden University, Netherlands

Lars Kotthoff
University of St Andrews

Thomas H. W. Bäck
LIACS, Leiden University, Netherlands