🤖 AI Summary
This paper examines why potential-based reward shaping (PBRS) often fails to help in sparse-reward tasks, showing that its effectiveness depends on the agent's initial Q-values and the external rewards. The authors propose a linear translation of the potential function that improves the effectiveness of shaping without modifying the initial Q-values or the preferences encoded in the potential function, and they formally show the limitations of continuous potential functions for correctly assigning positive and negative shaping values. The translation provably preserves policy invariance, improves exploration efficiency, and is compatible with standard deep RL algorithms such as DQN. Experiments on sparse-reward Gridworld domains and on the Cart Pole and Mountain Car environments demonstrate gains in sample efficiency and validate the theoretical findings in deep reinforcement learning settings.
📝 Abstract
Potential-based reward shaping is commonly used to incorporate prior knowledge about how to solve a task into reinforcement learning because it formally guarantees policy invariance: the optimal policy and the ordering of policies by their returns are not altered by the shaping rewards. In this work, we highlight the dependence of effective potential-based reward shaping on the initial Q-values and the external rewards, which together determine the agent's ability to exploit the shaping rewards to guide its exploration and achieve increased sample efficiency. We formally derive how a simple linear shift of the potential function can improve the effectiveness of reward shaping without changing the preferences encoded in the potential function, and without having to adjust the initial Q-values, which can be challenging and undesirable in deep reinforcement learning. We also show the theoretical limitations of continuous potential functions for correctly assigning positive and negative reward shaping values. We verify our theoretical findings empirically on Gridworld domains with sparse and uninformative reward functions, as well as on the Cart Pole and Mountain Car environments, where we demonstrate the application of our results in deep reinforcement learning.
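The linear shift described above can be sketched concretely. In PBRS the shaping term is F(s, s') = γΦ(s') − Φ(s), and translating the potential by a constant, Φ(s) + c, changes every per-step shaping reward by the same constant (γ − 1)c, so the ordering of policies is unchanged while the sign and magnitude of individual shaping rewards shift. A minimal sketch, where the goal-distance potential and the constant `C` are illustrative assumptions rather than the paper's actual choices:

```python
GAMMA = 0.99

def shaping_reward(phi, s, s_next, gamma=GAMMA):
    """Potential-based shaping term F(s, s') = gamma * phi(s') - phi(s)."""
    return gamma * phi(s_next) - phi(s)

# Hypothetical potential on a 1-D chain: negative distance to an assumed goal.
GOAL = 10
def phi(s):
    return -abs(GOAL - s)

# Linearly translated potential phi_c(s) = phi(s) + C (C chosen for illustration).
C = 20.0
def phi_shifted(s):
    return phi(s) + C

# Moving from state 3 to state 4 (toward the goal):
f = shaping_reward(phi, 3, 4)                  # positive: rewards progress
f_shifted = shaping_reward(phi_shifted, 3, 4)  # differs by (GAMMA - 1) * C

# The translation changes every per-step shaping reward by the same constant,
# so policy invariance is preserved, but the sign of individual shaping
# rewards can change without touching the initial Q-values.
assert abs((f_shifted - f) - (GAMMA - 1) * C) < 1e-9
```

Because the shift enters only as a per-step constant, it can be applied on top of any existing potential function in a deep RL pipeline (e.g. DQN) without retuning the network's initialization.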