🤖 AI Summary
This study investigates whether hyperparameter sensitivity in deep reinforcement learning is intrinsic to the problem or induced by the training mechanism itself. Within an offline goal-conditioned reinforcement learning framework, the authors systematically evaluate the robustness of HIQL (bootstrapped TD learning) and QRL (quasimetric representation learning) under varying hyperparameters while controlling data distribution and quality. They introduce an inter-goal gradient alignment diagnostic, which reveals that hyperparameter sensitivity arises primarily from gradient interference amplified by bootstrapped targets, rather than being an intrinsic property of reinforcement learning. Experiments show that QRL maintains a broad, stable near-optimal region even with limited expert data, whereas HIQL yields sharp optima that drift across training phases, demonstrating that redesigning the objective function can effectively mitigate hyperparameter sensitivity.
📝 Abstract
Hyperparameter sensitivity in Deep Reinforcement Learning (RL) is often accepted as unavoidable. However, it remains unclear whether it is intrinsic to the RL problem or exacerbated by specific training mechanisms. We investigate this question in offline goal-conditioned RL, where data distributions are fixed and non-stationarity can be explicitly controlled via scheduled shifts in data quality. Additionally, we study varying data qualities under both stationary and non-stationary regimes, and cover two representative algorithms: HIQL (bootstrapped TD-learning) and QRL (quasimetric representation learning). Overall, we observe substantially greater robustness to changes in hyperparameter configurations than commonly reported for online RL, even under controlled non-stationarity. Once a modest fraction of expert data is present ($\approx$ 20\%), QRL maintains broad, stable near-optimal regions, while HIQL exhibits sharp optima that drift significantly across training phases. To explain this divergence, we introduce an inter-goal gradient alignment diagnostic. We find that bootstrapped objectives exhibit stronger destructive gradient interference, which coincides directly with hyperparameter sensitivity. These results suggest that high sensitivity to hyperparameter configurations during training is not inevitable in RL, but is amplified by the dynamics of bootstrapping, offering a pathway toward more robust algorithmic objective design.
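The inter-goal gradient alignment diagnostic mentioned above can be illustrated as the mean pairwise cosine similarity between per-goal gradient vectors: values near $-1$ indicate destructive interference, values near $+1$ indicate alignment. Below is a minimal NumPy sketch under that interpretation; the function name and the exact aggregation (mean over off-diagonal pairs) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def inter_goal_alignment(grads):
    """Mean pairwise cosine similarity between per-goal gradients.

    grads: (num_goals, num_params) array, one flattened gradient per goal.
    Returns a scalar in [-1, 1]; negative values suggest destructive
    interference between goals, positive values suggest alignment.
    NOTE: illustrative sketch, not the paper's exact diagnostic.
    """
    # Normalize each goal's gradient to unit length
    g = grads / np.linalg.norm(grads, axis=1, keepdims=True)
    sim = g @ g.T  # pairwise cosine similarities
    n = len(g)
    off_diag = sim[~np.eye(n, dtype=bool)]  # drop self-similarities (all 1)
    return off_diag.mean()

# Toy example: two roughly aligned goals vs. two conflicting goals
aligned = np.array([[1.0, 0.0], [0.9, 0.1]])
conflicting = np.array([[1.0, 0.0], [-1.0, 0.1]])
print(inter_goal_alignment(aligned))      # positive: gradients cooperate
print(inter_goal_alignment(conflicting))  # negative: gradients interfere
```

In practice one would collect per-goal gradients of the training objective (e.g. the TD loss for HIQL or the quasimetric loss for QRL) at a checkpoint, flatten them, and track this score over training; the paper's finding is that bootstrapped objectives show systematically lower alignment.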