AI Summary
Existing policy-gradient methods (e.g., PPO, TRPO) perform parameter updates solely along a single stochastic gradient direction, neglecting local geometric structure in parameter space and thus often converging to suboptimal policies. To address this, we propose ExploRLer, a plug-and-play local exploration enhancement framework that, without increasing the number of gradient updates, models the local geometry around policy checkpoints and systematically explores high-return regions within the current update neighborhood. ExploRLer is fully compatible with mainstream on-policy algorithms and requires no modification to existing training pipelines. Empirically, it significantly improves both convergence speed and final performance across multiple challenging continuous-control benchmarks. These results demonstrate that explicitly modeling and leveraging local parameter-space geometry is both effective and essential for optimizing reinforcement learning policies.
Abstract
Policy-gradient methods such as Proximal Policy Optimization (PPO) are typically updated along a single stochastic gradient direction, leaving the rich local structure of the parameter space unexplored. Previous work has shown that the surrogate gradient is often poorly correlated with the true reward landscape. Building on this insight, we visualize the parameter space spanned by policy checkpoints within an iteration and reveal that higher-performing solutions often lie in nearby unexplored regions. To exploit this opportunity, we introduce ExploRLer, a pluggable pipeline that seamlessly integrates with on-policy algorithms such as PPO and TRPO, systematically probing the unexplored neighborhoods of surrogate on-policy gradient updates. Without increasing the number of gradient updates, ExploRLer achieves significant improvements over baselines in complex continuous-control environments. Our results demonstrate that iteration-level exploration provides a practical and effective way to strengthen on-policy reinforcement learning, and they offer a fresh perspective on the limitations of the surrogate objective.
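To make the core idea concrete, the following is a minimal sketch (not the authors' actual method) of iteration-level local exploration: after a gradient update lands at a checkpoint, random directions within a small radius are probed and the best-returning parameter vector is kept. The function name `local_exploration_sketch` and the `evaluate_return` callback are hypothetical illustrations, assuming a flat NumPy parameter vector and a scalar return estimate.

```python
import numpy as np

def local_exploration_sketch(theta, evaluate_return, radius=0.05, n_probes=8, rng=None):
    """Probe random unit directions around checkpoint `theta` and keep the
    best-performing candidate within `radius`.

    `evaluate_return` is a hypothetical callback scoring a parameter vector,
    e.g., by averaging episodic returns of the policy it induces.
    No extra gradient computations are performed.
    """
    rng = np.random.default_rng() if rng is None else rng
    best_theta, best_ret = theta, evaluate_return(theta)
    for _ in range(n_probes):
        direction = rng.standard_normal(theta.shape)
        direction /= np.linalg.norm(direction)      # unit-norm search direction
        candidate = theta + radius * direction      # stay in the local neighborhood
        ret = evaluate_return(candidate)
        if ret > best_ret:                          # accept only improvements
            best_theta, best_ret = candidate, ret
    return best_theta, best_ret
```

In an actual pipeline the probing step would slot in between PPO/TRPO update iterations, using the same rollout machinery to score candidates, which is what makes the approach pluggable.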