AutoEP: LLMs-Driven Automation of Hyperparameter Evolution for Metaheuristic Algorithms

📅 2025-09-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
Dynamic configuration of metaheuristic hyperparameters suffers from high sample complexity and poor generalization. This paper proposes AutoEP, the first framework to leverage large language models (LLMs) in a zero-shot manner for hyperparameter evolution control. AutoEP integrates online exploratory landscape analysis (ELA) with a multi-LLM collaborative reasoning chain to perform real-time causal inference and adaptive strategy generation, eliminating reliance on offline training or costly hyperparameter sampling. Without fine-tuning, it enables open-source LLMs (e.g., Qwen3-30B) to achieve control performance on par with GPT-4. Evaluated across diverse combinatorial optimization benchmarks, AutoEP significantly outperforms state-of-the-art approaches, including neural evolution, in both convergence speed and solution quality. The implementation is publicly available.


📝 Abstract
Dynamically configuring algorithm hyperparameters is a fundamental challenge in computational intelligence. While learning-based methods offer automation, they suffer from prohibitive sample complexity and poor generalization. We introduce AutoEP, a novel framework that bypasses training entirely by leveraging Large Language Models (LLMs) as zero-shot reasoning engines for algorithm control. AutoEP's core innovation lies in a tight synergy between two components: (1) an online Exploratory Landscape Analysis (ELA) module that provides real-time, quantitative feedback on the search dynamics, and (2) a multi-LLM reasoning chain that interprets this feedback to generate adaptive hyperparameter strategies. This approach grounds high-level reasoning in empirical data, mitigating hallucination. Evaluated on three distinct metaheuristics across diverse combinatorial optimization benchmarks, AutoEP consistently outperforms state-of-the-art tuners, including neural evolution and other LLM-based methods. Notably, our framework enables open-source models like Qwen3-30B to match the performance of GPT-4, demonstrating a powerful and accessible new paradigm for automated hyperparameter design. Our code is available at https://anonymous.4open.science/r/AutoEP-3E11
Problem

Research questions and friction points this paper is trying to address.

Automating hyperparameter configuration for metaheuristic algorithms
Overcoming sample complexity limitations in learning-based tuning methods
Leveraging LLMs for zero-shot reasoning in algorithm control
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLMs as zero-shot reasoning engines for algorithm control
Online Exploratory Landscape Analysis for real-time feedback
Multi-LLM reasoning chain interpreting feedback for adaptive strategies
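The control loop implied by these contributions can be sketched in miniature: an online ELA module summarizes the current search dynamics, and a reasoning component maps that feedback to updated hyperparameters. The sketch below is an illustrative assumption, not the paper's implementation; in particular, the rule-based `mock_llm_reasoner` stands in for the actual multi-LLM reasoning chain, and all function and parameter names are hypothetical.

```python
import random
import statistics

def ela_features(population_fitness):
    """Cheap online landscape descriptors from the current population."""
    return {
        "best": min(population_fitness),
        "mean": statistics.mean(population_fitness),
        "spread": statistics.pstdev(population_fitness),
    }

def mock_llm_reasoner(features, params):
    """Stand-in for the multi-LLM chain: widen the search when the
    population has converged (low spread), intensify it otherwise."""
    new_params = dict(params)
    if features["spread"] < 1e-3:
        new_params["mutation_rate"] = min(0.9, params["mutation_rate"] * 2)
    else:
        new_params["mutation_rate"] = max(0.01, params["mutation_rate"] * 0.8)
    return new_params

def run_controlled_search(steps=20, seed=0):
    rng = random.Random(seed)
    params = {"mutation_rate": 0.1}
    # Toy minimization problem: each entry is an individual's fitness.
    population = [rng.uniform(0, 10) for _ in range(16)]
    for _ in range(steps):
        feats = ela_features(population)
        params = mock_llm_reasoner(feats, params)
        # Toy "evolution" step: perturb each fitness with a downward-biased
        # Gaussian move and keep only improvements (lower is better).
        population = [
            min(f, f + rng.gauss(-params["mutation_rate"], 1.0))
            for f in population
        ]
    return min(population), params

best, final_params = run_controlled_search()
```

The point of the sketch is the feedback coupling: hyperparameters are not fixed or pre-trained but re-derived each iteration from quantitative landscape features, which is the "grounding in empirical data" the abstract credits with mitigating hallucination.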
Zhenxing Xu
College of Systems Engineering, National University of Defense Technology
Yizhe Zhang
College of Systems Engineering, National University of Defense Technology
Weidong Bao
College of Systems Engineering, National University of Defense Technology
Hao Wang
College of Systems Engineering, National University of Defense Technology
Ming Chen
College of Systems Engineering, National University of Defense Technology
Haoran Ye
AI PhD @ Peking University
Wenzheng Jiang
College of Systems Engineering, National University of Defense Technology
Hui Yan
College of Systems Engineering, National University of Defense Technology
Ji Wang
College of Systems Engineering, National University of Defense Technology