🤖 AI Summary
This work proposes a two-level deep reinforcement learning framework for large-scale Traveling Salesman Problems (TSP), wherein a recurrent Proximal Policy Optimization (PPO) agent dynamically controls both numerical and structural parameters of a genetic algorithm, enabling their decoupled analysis. The study provides the first empirical evidence that dynamic adjustment of structural parameters is crucial for avoiding premature convergence and escaping local optima, whereas numerical parameters serve only a fine-tuning role. Evaluated on large-scale TSP instances such as rl5915, the proposed method significantly outperforms static baselines, reducing the optimality gap by approximately 45%. These results offer a novel direction for automated algorithm design through adaptive parameter control in evolutionary computation.
📝 Abstract
Proper parameter configuration is a prerequisite for the success of Evolutionary Algorithms (EAs). While various adaptive strategies have been proposed, it remains an open question whether all control dimensions contribute equally to algorithmic scalability. To investigate this, we categorize control variables into numerical parameters (e.g., crossover and mutation rates) and structural parameters (e.g., population size and operator switching), hypothesizing that they play distinct roles. This paper presents an empirical study utilizing a dual-level Deep Reinforcement Learning (DRL) framework to decouple and analyze the impact of these two dimensions on the Traveling Salesman Problem (TSP). We employ a Recurrent PPO agent to dynamically regulate these parameters, treating the DRL model as a probe to reveal evolutionary dynamics. Experimental results confirm the effectiveness of this approach: the learned policies outperform static baselines, reducing the optimality gap by approximately 45% on the largest tested instance (rl5915). Building on this validated framework, our ablation analysis reveals a fundamental insight: while numerical tuning offers local refinement, structural plasticity is the decisive factor in preventing stagnation and facilitating escape from local optima. These findings suggest that future automated algorithm design should prioritize dynamic structural reconfiguration over fine-grained probability adjustment. To facilitate reproducibility, the source code is available at https://github.com/StarDream1314/DRLGA-TSP.
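To make the numerical/structural distinction concrete, the sketch below shows a minimal GA loop for TSP in which a controller adjusts both a numerical parameter (mutation rate) and a structural one (injecting fresh individuals to enlarge the population) each generation. This is an illustrative toy, not the authors' implementation: the `HeuristicController` (a simple stagnation-triggered rule) stands in for the paper's Recurrent PPO agent, and all function names and hyperparameters here are assumptions chosen for clarity.

```python
import random

def tour_length(tour, dist):
    """Total length of a closed tour under a distance matrix."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def mutate(tour, rate):
    """Swap mutation: each position swaps with a random one with prob `rate`."""
    tour = tour[:]
    for i in range(len(tour)):
        if random.random() < rate:
            j = random.randrange(len(tour))
            tour[i], tour[j] = tour[j], tour[i]
    return tour

def order_crossover(p1, p2):
    """Order crossover (OX): copy a slice of p1, fill the rest in p2's order."""
    n = len(p1)
    a, b = sorted(random.sample(range(n), 2))
    child = [None] * n
    child[a:b] = p1[a:b]
    fill = [c for c in p2 if c not in child[a:b]]
    for i in range(n):
        if child[i] is None:
            child[i] = fill.pop(0)
    return child

class HeuristicController:
    """Stand-in for the paper's Recurrent PPO agent (hypothetical policy):
    on stagnation, raise the mutation rate (numerical) and request a
    population injection (structural)."""
    def act(self, stagnation):
        mut_rate = 0.02 if stagnation < 5 else 0.10  # numerical knob
        inject = stagnation >= 5                      # structural knob
        return mut_rate, inject

def run_ga(dist, generations=100, pop_size=30, seed=0):
    random.seed(seed)
    n = len(dist)
    pop = [random.sample(range(n), n) for _ in range(pop_size)]
    best = min(pop, key=lambda t: tour_length(t, dist))
    best_len = tour_length(best, dist)
    ctrl, stagnation = HeuristicController(), 0
    for _ in range(generations):
        mut_rate, inject = ctrl.act(stagnation)
        if inject:  # structural action: add fresh random tours
            pop += [random.sample(range(n), n) for _ in range(5)]
        pop.sort(key=lambda t: tour_length(t, dist))
        parents = pop[:max(2, len(pop) // 2)]
        children = []
        while len(children) < pop_size:
            p1, p2 = random.sample(parents, 2)
            children.append(mutate(order_crossover(p1, p2), mut_rate))
        pop = parents[:5] + children  # elitism + offspring
        cur = min(pop, key=lambda t: tour_length(t, dist))
        cur_len = tour_length(cur, dist)
        if cur_len < best_len:
            best, best_len, stagnation = cur, cur_len, 0
        else:
            stagnation += 1
    return best, best_len
```

In the actual framework, the rule-based `act` would be replaced by a learned recurrent policy whose observation summarizes population statistics and whose reward tracks improvement in tour length; the point of the sketch is only that the two action types (rate adjustment vs. population restructuring) enter the loop at different places.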