Deep Reinforcement Learning for Dynamic Algorithm Configuration: A Case Study on Optimizing OneMax with the (1+(λ,λ))-GA

📅 2025-12-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses population-size adaptation for the (1+(λ,λ))-GA in Dynamic Algorithm Configuration (DAC) on the OneMax problem. Method: a deep reinforcement learning study that diagnoses and overcomes the scalability and training-stability limitations of DDQN and PPO. Specifically, an adaptive reward shifting mechanism, driven by statistical estimates of the reward distribution, strengthens exploration and generalizes across problem scales without instance-specific hyperparameter tuning; the work further demonstrates, theoretically and empirically, that undiscounted learning better aligns with the planning structure of this task. Contribution/Results: the refined DDQN policy performs on par with theoretically derived optimal policies, improves sample efficiency over existing DAC methods by several orders of magnitude, and scales effectively to large OneMax instances.

📝 Abstract
Dynamic Algorithm Configuration (DAC) studies the efficient identification of control policies for parameterized optimization algorithms. Numerous studies have leveraged the robustness of decision-making in Reinforcement Learning (RL) to address the optimization challenges in algorithm configuration. However, applying RL to DAC is challenging and often requires extensive domain expertise. We conduct a comprehensive study of deep-RL algorithms in DAC through a systematic analysis of controlling the population size parameter of the (1+(λ,λ))-GA on OneMax instances. Our investigation of DDQN and PPO reveals two fundamental challenges that limit their effectiveness in DAC: scalability degradation and learning instability. We trace these issues to two primary causes: under-exploration and planning horizon coverage, each of which can be effectively addressed through targeted solutions. To address under-exploration, we introduce an adaptive reward shifting mechanism that leverages reward distribution statistics to enhance DDQN agent exploration, eliminating the need for instance-specific hyperparameter tuning and ensuring consistent effectiveness across different problem scales. In dealing with the planning horizon coverage problem, we demonstrate that undiscounted learning effectively resolves it in DDQN, while PPO faces fundamental variance issues that necessitate alternative algorithmic designs. We further analyze the hyperparameter dependencies of PPO, showing that while hyperparameter optimization enhances learning stability, it consistently falls short in identifying effective policies across various configurations. Finally, we demonstrate that DDQN equipped with our adaptive reward shifting strategy achieves performance comparable to theoretically derived policies with vastly improved sample efficiency, outperforming prior DAC approaches by several orders of magnitude.
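The abstract says the reward shifting mechanism "leverages reward distribution statistics" but does not spell out the formula. A common way such shifting aids exploration in value-based RL is to subtract a positive, adaptively estimated baseline from each reward, so that zero-initialized Q-values look optimistic for actions not yet tried. The sketch below illustrates that general idea under stated assumptions; the class name, window size, and the `mean + k·std` baseline are illustrative choices, not the paper's exact mechanism.

```python
from collections import deque


class AdaptiveRewardShifter:
    """Shift rewards by a running estimate of the reward distribution.

    Subtracting a positive baseline from rewards makes zero-initialized
    Q-values optimistic for unvisited actions, nudging a DDQN agent to
    explore. Estimating the baseline from observed rewards (rather than
    fixing it per instance) is what removes the need for instance-specific
    hyperparameter tuning. This is a hypothetical sketch, not the paper's
    exact formula.
    """

    def __init__(self, window=1000, k=1.0):
        self.buffer = deque(maxlen=window)  # recent raw rewards
        self.k = k                          # shift aggressiveness

    def shift(self, reward):
        """Return the shifted reward r' = r - (mean + k * std)."""
        self.buffer.append(reward)
        n = len(self.buffer)
        mean = sum(self.buffer) / n
        var = sum((r - mean) ** 2 for r in self.buffer) / n
        return reward - (mean + self.k * var ** 0.5)
```

Because the baseline tracks the observed reward scale, the same shifter can be applied unchanged across OneMax instances of different sizes.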
Problem

Research questions and friction points this paper is trying to address.

Optimizing algorithm parameters dynamically using reinforcement learning.
Addressing scalability and instability in deep-RL for algorithm configuration.
Improving exploration and planning in DAC for evolutionary algorithms.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adaptive reward shifting mechanism enhances exploration
Undiscounted learning resolves planning horizon coverage
DDQN with adaptive strategy achieves high sample efficiency
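The second innovation, undiscounted learning in DDQN, amounts to computing the Double-DQN bootstrap target with γ = 1, so that rewards late in a run carry full weight and the target covers the whole planning horizon of an episodic task like OneMax. A minimal sketch of that target computation, with illustrative function and argument names (the paper does not publish this exact helper):

```python
def ddqn_target(reward, next_q_online, next_q_target, done, gamma=1.0):
    """Double-DQN bootstrap target with an undiscounted horizon.

    Per Double DQN, the online network selects the next action and the
    target network evaluates it. Setting gamma = 1.0 (undiscounted)
    keeps late rewards fully weighted, which suits episodic tasks whose
    planning horizon spans the entire run.
    """
    if done:
        return reward  # terminal transition: no bootstrap term
    # argmax over the online network's Q-values for the next state
    best = max(range(len(next_q_online)), key=lambda a: next_q_online[a])
    # evaluate the selected action with the target network
    return reward + gamma * next_q_target[best]
```

With `gamma < 1.0` the same helper recovers the standard discounted target, which is what the paper argues truncates the effective planning horizon on long OneMax runs.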