🤖 AI Summary
Existing evolutionary algorithms for multi-objective combinatorial optimization problems (MOCOPs) rely heavily on manual parameter tuning and generalize poorly, while current LLM-based approaches neglect Pareto balance and computational efficiency. To address these limitations, we propose MPaGE: a large language model-guided heuristic auto-design method leveraging Pareto grid structuring. Our approach discretizes the Pareto front into a structured grid to guide the LLM in generating high-quality, semantically diverse heuristic rules. These rules are integrated into a lightweight multi-objective evolutionary framework (SEMO) to enable efficient mutation and selection. Experiments across multiple standard MOCOP benchmarks demonstrate that MPaGE achieves solution-set quality (measured by IGD and HV) comparable to classical evolutionary algorithms while reducing average runtime by 57%. It significantly outperforms existing LLM-driven methods and represents the first approach to achieve Pareto-directed, low-overhead, and highly generalizable automatic generation of multi-objective heuristics.
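The two quality indicators named above can be made concrete. As a hedged illustration (the function names and toy data below are our own, not from the paper): IGD averages the distance from each reference Pareto point to its nearest obtained solution, and hypervolume measures the objective-space region dominated by the solution set relative to a reference point, shown here for the two-objective minimization case.

```python
import math

def igd(reference_front, solutions):
    """Inverted Generational Distance: mean distance from each reference
    Pareto point to its nearest obtained solution (lower is better)."""
    total = sum(min(math.dist(r, s) for s in solutions)
                for r in reference_front)
    return total / len(reference_front)

def hv_2d(solutions, ref_point):
    """Hypervolume for 2-objective minimization: area dominated by the
    solution set and bounded by ref_point (higher is better)."""
    area, prev_f2 = 0.0, ref_point[1]
    for f1, f2 in sorted(solutions):      # sweep by the first objective
        if f2 < prev_f2:                  # point extends the front
            area += (ref_point[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return area
```

For example, the front `[(0.2, 0.8), (0.5, 0.5), (0.8, 0.2)]` with reference point `(1.0, 1.0)` yields a hypervolume of 0.37.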
📄 Abstract
Multi-objective combinatorial optimization problems (MOCOP) frequently arise in practical applications that require the simultaneous optimization of conflicting objectives. Although traditional evolutionary algorithms can be effective, they typically depend on domain knowledge and repeated parameter tuning, limiting flexibility when applied to unseen MOCOP instances. Recently, the integration of Large Language Models (LLMs) into evolutionary computation has opened new avenues for automatic heuristic generation, exploiting their advanced language understanding and code synthesis capabilities. Nevertheless, most existing approaches focus predominantly on single-objective tasks, often neglecting key considerations such as runtime efficiency and heuristic diversity in multi-objective settings. To bridge this gap, we introduce Multi-heuristics for MOCOP via Pareto-Grid-guided Evolution of LLMs (MPaGE), a novel enhancement of the Simple Evolutionary Multiobjective Optimization (SEMO) framework that leverages LLMs and the Pareto Front Grid (PFG) technique. By partitioning the objective space into grids and retaining top-performing candidates to guide heuristic generation, MPaGE uses LLMs to prioritize heuristics with semantically distinct logical structures during variation, thus promoting diversity and mitigating redundancy within the population. Through extensive evaluations, MPaGE demonstrates superior performance over existing LLM-based frameworks and achieves results competitive with traditional multi-objective evolutionary algorithms (MOEAs), with significantly faster runtime. Our code is available at: https://github.com/langkhachhoha/MPaGE.
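The grid-partitioning step described above can be sketched as follows. This is a minimal illustration of the general idea, not the authors' PFG implementation: the current population's objective space is split into equal-width cells, and only the best-ranked candidate per occupied cell is retained, which preserves spread across the front while pruning redundant candidates. The `fitness_rank` field and `n_cells` parameter are our own assumptions.

```python
def pfg_select(population, n_cells=8):
    """Keep one top-performing candidate per occupied grid cell.

    population: list of (objectives, fitness_rank) pairs, where
    objectives is a tuple of objective values (minimization) and a
    smaller fitness_rank is better.
    """
    dims = len(population[0][0])
    # Per-objective bounds over the current population.
    lo = [min(p[0][d] for p in population) for d in range(dims)]
    hi = [max(p[0][d] for p in population) for d in range(dims)]

    def cell(obj):
        # Map an objective vector to integer grid coordinates.
        idx = []
        for d in range(dims):
            span = (hi[d] - lo[d]) or 1.0
            idx.append(min(int((obj[d] - lo[d]) / span * n_cells),
                           n_cells - 1))
        return tuple(idx)

    best = {}
    for obj, rank in population:
        c = cell(obj)
        if c not in best or rank < best[c][1]:
            best[c] = (obj, rank)   # keep the best-ranked occupant
    return list(best.values())
```

In a full loop, the survivors of this selection would seed the next round of LLM-driven variation, with the LLM prompted to produce heuristics whose logic differs semantically from those already retained.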