🤖 AI Summary
Problem: Current automated algorithm design relies heavily on LLMs for performance optimization while offering little interpretable analysis of algorithmic effectiveness, particularly the mechanistic principles at work, the components that matter, and how algorithm structure aligns with problem characteristics.
Method: We propose “Explainable Automated Algorithm Design,” a novel paradigm built upon three pillars: (1) LLM-driven algorithm discovery, (2) problem-class–aware explainable benchmarking, and (3) problem-class descriptors grounded in landscape-structural analysis. A closed-loop iterative workflow explicitly establishes semantic mappings between algorithmic components and problem structures, shifting from black-box search to structure-aware generation.
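The closed-loop workflow can be illustrated with a toy sketch. Everything here is a hypothetical stand-in, not the paper's implementation: `discover` mocks LLM-driven discovery as choosing a step-size component from the current descriptor, `benchmark` plays the role of explainable benchmarking on a 1-D quadratic and attributes the outcome to that component, and `update_descriptor` refines a landscape label from the explanation.

```python
# Illustrative sketch of the closed knowledge loop (discovery ->
# explainable benchmarking -> problem-class descriptors).
# All function names and the toy problem are hypothetical placeholders.

def discover(descriptor: dict) -> dict:
    """Pillar 1 (mocked LLM-driven discovery): propose an algorithm
    variant, here just a step size, informed by the descriptor."""
    step = 0.9 if descriptor.get("landscape") == "smooth" else 0.3
    return {"step": step}

def benchmark(algorithm: dict) -> dict:
    """Pillar 2 (explainable benchmarking): run the variant on a toy
    1-D quadratic f(x) = x^2 and attribute performance to 'step'."""
    x, step = 5.0, algorithm["step"]
    for _ in range(20):           # plain gradient descent: x -= step * f'(x)
        x -= step * 2 * x
    return {"final_value": x * x, "attribution": {"step": step}}

def update_descriptor(descriptor: dict, report: dict) -> dict:
    """Pillar 3 (problem-class descriptors): refine the landscape
    label from the benchmarking explanation."""
    converged = report["final_value"] < 1e-6
    return {**descriptor, "landscape": "smooth" if converged else "rugged"}

# Closed loop: each pass feeds explanations back into the next discovery.
descriptor = {"landscape": "unknown"}
for _ in range(3):
    algo = discover(descriptor)
    report = benchmark(algo)
    descriptor = update_descriptor(descriptor, report)

print(descriptor["landscape"], report["final_value"])
```

The point of the sketch is the data flow, not the optimiser: each iteration makes the mapping between an algorithmic component (`step`) and a problem-structure label (`landscape`) explicit, which is what distinguishes structure-aware generation from black-box search.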
Contribution/Results: This work is the first to embed interpretability throughout the entire algorithm design pipeline. It yields reusable scientific insights, uncovering the structural conditions and causal mechanisms that underlie heuristic efficacy, and advances algorithm design from a performance-centric paradigm toward a unified knowledge loop integrating understanding, discovery, and generalization.
📝 Abstract
Automated algorithm design is entering a new phase: Large Language Models can now generate full optimisation (meta)heuristics, explore vast design spaces and adapt through iterative feedback. Yet this rapid progress is largely performance-driven and opaque. Current LLM-based approaches rarely reveal why a generated algorithm works, which components matter, or how design choices relate to underlying problem structures. This paper argues that the next breakthrough will come not from more automation, but from coupling automation with understanding gained through systematic benchmarking. We outline a vision for explainable automated algorithm design, built on three pillars: (i) LLM-driven discovery of algorithmic variants, (ii) explainable benchmarking that attributes performance to components and hyperparameters, and (iii) problem-class descriptors that connect algorithm behaviour to landscape structure. Together, these elements form a closed knowledge loop in which discovery, explanation and generalisation reinforce each other. We argue that this integration will shift the field from blind search to interpretable, class-specific algorithm design, accelerating progress while producing reusable scientific insight into when and why optimisation strategies succeed.