Multiple-Frequencies Population-Based Training

📅 2025-06-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
Reinforcement learning (RL) is highly sensitive to hyperparameters, leading to unstable and inefficient training. While population-based training (PBT) enables dynamic hyperparameter scheduling, its high-frequency greedy selection often traps optimization in local optima, resulting in inferior long-term performance compared to random search. To address this, the authors propose Multiple-Frequencies Population-Based Training (MF-PBT), a novel evolutionary HPO framework in which decoupled sub-populations evolve at distinct frequencies and exchange information through an asymmetric cross-frequency migration mechanism, explicitly separating short-term exploration from long-term convergence and mitigating PBT's inherent evolutionary greediness. Evaluated on the Brax benchmark suite, MF-PBT achieves improved sample efficiency and superior asymptotic policy performance, consistently outperforming both random search and standard PBT, even without actually tuning hyperparameters.

📝 Abstract
Reinforcement Learning's high sensitivity to hyperparameters is a source of instability and inefficiency, creating significant challenges for practitioners. Hyperparameter Optimization (HPO) algorithms have been developed to address this issue; among them, Population-Based Training (PBT) stands out for its ability to generate hyperparameter schedules instead of fixed configurations. PBT trains a population of agents, each with its own hyperparameters, frequently ranking them and replacing the worst performers with mutations of the best agents. These intermediate selection steps can cause PBT to focus on short-term improvements, leading it to get stuck in local optima and eventually fall behind vanilla Random Search over longer timescales. This paper studies how this greediness issue is connected to the choice of evolution frequency, the rate at which the selection is done. We propose Multiple-Frequencies Population-Based Training (MF-PBT), a novel HPO algorithm that addresses greediness by employing sub-populations, each evolving at distinct frequencies. MF-PBT introduces a migration process to transfer information between sub-populations, with an asymmetric design to balance short and long-term optimization. Extensive experiments on the Brax suite demonstrate that MF-PBT improves sample efficiency and long-term performance, even without actually tuning hyperparameters.
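The PBT exploit/explore loop described in the abstract (rank the population, replace the worst performers with mutated copies of the best) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation; the 20% truncation fraction and the ×0.8/×1.2 perturbation factors are common PBT conventions assumed here, not taken from the paper.

```python
import random

def pbt_step(population, truncation=0.2, perturb=(0.8, 1.2), rng=random):
    """One PBT selection step.

    population: list of dicts, each with 'hparams' (dict of floats)
    and 'fitness' (float). Modified in place and returned.
    """
    ranked = sorted(population, key=lambda a: a["fitness"], reverse=True)
    n_trunc = max(1, int(len(ranked) * truncation))
    best, worst = ranked[:n_trunc], ranked[-n_trunc:]
    for loser in worst:
        winner = rng.choice(best)
        # Exploit: copy a top agent's hyperparameters (and, in real PBT,
        # its network weights). Explore: perturb each hyperparameter.
        loser["hparams"] = {
            k: v * rng.choice(perturb) for k, v in winner["hparams"].items()
        }
        loser["fitness"] = winner["fitness"]
    return population
```

Because this greedy replacement happens at every evolution step, short-term winners can crowd out configurations that would have paid off later, which is exactly the greediness the paper targets.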
Problem

Research questions and friction points this paper is trying to address.

Reinforcement Learning's hyperparameter sensitivity causes instability and inefficiency
Population-Based Training (PBT) suffers from short-term greediness and local optima
MF-PBT addresses greediness via multi-frequency sub-populations and migration
Innovation

Methods, ideas, or system contributions that make the work stand out.

Sub-populations evolve at distinct frequencies
Migration process transfers information between sub-populations
Asymmetric design balances short and long-term optimization
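The multi-frequency structure above can be sketched as a training loop in which each sub-population runs its own selection at its own interval, with a periodic migration hook between them. This is a structural sketch only: the function names, intervals, and the callback-based design are illustrative assumptions, and the paper's actual asymmetric migration rules are not reproduced here.

```python
def mf_pbt(subpops, frequencies, total_steps, evolve, migrate, migrate_every):
    """Multi-frequency evolution skeleton.

    subpops: list of sub-populations (one per evolution frequency).
    frequencies[i]: number of steps between selection events in subpop i,
    so low-index/high-frequency subpops explore greedily while
    high-interval subpops protect long-term progress.
    evolve(pop): intra-subpopulation selection/mutation (e.g. a PBT step).
    migrate(subpops): cross-frequency information transfer.
    """
    for step in range(1, total_steps + 1):
        for i, pop in enumerate(subpops):
            if step % frequencies[i] == 0:
                evolve(pop)  # this subpop's own greedy selection
        if step % migrate_every == 0:
            migrate(subpops)  # asymmetric exchange between frequencies
    return subpops
```

The design choice the paper highlights is the asymmetry of `migrate`: the exchange is not a symmetric merge, so fast-evolving sub-populations can supply short-term exploration signal without letting their greediness overwrite the slow sub-populations' long-term trajectories.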