Behaviour Space Analysis of LLM-driven Meta-heuristic Discovery

📅 2025-07-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates the behaviour-space evolution mechanisms underlying the autonomous generation of meta-heuristic algorithms by large language models (LLMs), aiming to explain the intrinsic causes of performance differences across prompting strategies. The authors propose a multidimensional behaviour-space analysis framework integrating code evolution graphs, search trajectory networks, static code features, and convergence-dynamics modelling. Leveraging the LLaMEA framework with a GPT o4-mini LLM, they implement black-box algorithm generation coupled with 1+1 elitist evolutionary optimisation. Innovatively, behavioural projection and structure-dynamics coupling analysis are introduced into LLM-driven algorithm discovery. Results show that a mutation prompting strategy combining code simplification and random perturbation significantly enhances exploitation capability and convergence speed, achieving the best Area Over the Convergence Curve (AOC) and the lowest stagnation rate. This work is the first to systematically characterise LLMs' exploration patterns in open-ended algorithm spaces, and it empirically validates the critical role of behaviour-oriented analysis in interpretable algorithm design.
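The 1+1 elitist loop described above can be sketched as follows. This is a minimal illustration in the spirit of LLaMEA, not the framework's actual API: `query_llm` and `evaluate` are hypothetical placeholders for the LLM call and the BBOB benchmark evaluation, and the prompt texts are illustrative stand-ins for the paper's mutation prompts.

```python
import random

# Hypothetical sketch of a 1+1 elitist evolution loop: an LLM mutates the
# current best algorithm's code via one of several mutation prompts, and the
# offspring replaces the parent only if it scores at least as well.
MUTATION_PROMPTS = [
    "Refine and improve the algorithm below.",
    "Simplify the code of the algorithm below.",                   # code-simplification prompt
    "Apply a small random perturbation to the algorithm below.",   # random-perturbation prompt
]

def evolve(initial_code, budget, query_llm, evaluate):
    parent, parent_score = initial_code, evaluate(initial_code)
    for _ in range(budget):
        prompt = random.choice(MUTATION_PROMPTS)
        child = query_llm(f"{prompt}\n\n{parent}")       # LLM produces mutated code
        child_score = evaluate(child)
        if child_score >= parent_score:                  # 1+1 elitism: keep the better of the two
            parent, parent_score = child, child_score
    return parent, parent_score
```

The elitist acceptance rule is what makes the search a hill-climber over code: every accepted mutation is a monotone improvement under the chosen benchmark score.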

📝 Abstract
We investigate the behaviour space of meta-heuristic optimisation algorithms automatically generated by Large Language Model driven algorithm discovery methods. Using the Large Language Evolutionary Algorithm (LLaMEA) framework with a GPT o4-mini LLM, we iteratively evolve black-box optimisation heuristics, evaluated on 10 functions from the BBOB benchmark suite. Six LLaMEA variants, featuring different mutation prompt strategies, are compared and analysed. We log dynamic behavioural metrics, including exploration, exploitation, convergence, and stagnation measures, for each run, and analyse these via visual projections and network-based representations. Our analysis combines behaviour-based projections, Code Evolution Graphs built from static code features, performance convergence curves, and behaviour-based Search Trajectory Networks. The results reveal clear differences in search dynamics and algorithm structures across LLaMEA configurations. Notably, the variant that employs both a code simplification prompt and a random perturbation prompt in a 1+1 elitist evolution strategy achieved the best performance, with the highest Area Over the Convergence Curve. Behaviour-space visualisations show that higher-performing algorithms exhibit more intensive exploitation behaviour and faster convergence with less stagnation. Our findings demonstrate how behaviour-space analysis can explain why certain LLM-designed heuristics outperform others and how LLM-driven algorithm discovery navigates the open-ended and complex search space of algorithms. These findings provide insights to guide the future design of adaptive LLM-driven algorithm generators.
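The Area Over the Convergence Curve used as the performance measure can be sketched with one common formulation: best-so-far precision values are log-clipped to a target range, normalised to [0, 1], and averaged, so that faster convergence to small precision yields a larger area. The bounds below are illustrative defaults, not the paper's settings.

```python
import math

def aocc(best_so_far, lb=1e-8, ub=1e2):
    """Illustrative Area Over the Convergence Curve: each best-so-far
    precision value is clipped to [lb, ub], normalised on a log10 scale,
    and the complements are averaged. Higher is better (1.0 means the
    target precision lb was hit from the first evaluation)."""
    vals = []
    for f in best_so_far:
        f = min(max(f, lb), ub)  # clip precision into the scored range
        norm = (math.log10(f) - math.log10(lb)) / (math.log10(ub) - math.log10(lb))
        vals.append(1.0 - norm)  # smaller precision -> larger area contribution
    return sum(vals) / len(vals)
```

Because the measure averages over the whole run, it rewards both the final precision reached and how early it was reached, which is why the text reports it alongside stagnation rates.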
Problem

Research questions and friction points this paper is trying to address.

Analyze behavior space of LLM-generated meta-heuristic algorithms
Compare performance of six LLaMEA variants on BBOB functions
Explain why certain LLM-designed heuristics outperform others
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-driven meta-heuristic algorithm discovery
Behavior-space analysis with dynamic metrics
Code simplification and perturbation prompts
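The dynamic behaviour metrics listed above (exploitation, convergence, stagnation) can be illustrated on a single run's best-so-far fitness trajectory. The definitions below are a plausible sketch, not the paper's exact instrumentation: stagnation is taken as the fraction of steps with no improvement, assuming maximisation.

```python
def behaviour_metrics(fitness, eps=1e-12):
    """Hypothetical dynamic behaviour metrics for one run, computed from a
    best-so-far fitness trajectory (maximisation assumed)."""
    steps = len(fitness) - 1
    # Count steps that produced a strict improvement beyond tolerance eps.
    improvements = sum(1 for a, b in zip(fitness, fitness[1:]) if b > a + eps)
    return {
        "improvement_rate": improvements / steps,   # proxy for exploitation intensity
        "stagnation_rate": 1 - improvements / steps,
        "final_fitness": fitness[-1],
    }
```

Logged per run and per variant, such scalar metrics are what behaviour-space projections and trajectory networks are built from.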