Action-Dependent Optimality-Preserving Reward Shaping

📅 2025-05-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
In sparse-reward environments, intrinsic motivation (IM) can drive exploration, but naively adding shaping rewards risks reward hacking, where the agent optimizes the bonus at the expense of the extrinsic reward. Optimality-preserving approaches built on potential-based reward shaping (PBRS), such as Generalized Reward Matching (GRM) and Policy-Invariant Explicit Shaping (PIES), avoid this failure mode but prove ineffective in complex, long-horizon, exploration-heavy tasks. This paper proposes Action-Dependent Optimality-Preserving Shaping (ADOPS), which converts intrinsic rewards into an optimality-preserving form without requiring them to be expressible as a potential difference. In particular, ADOPS allows the cumulative discounted intrinsic return to depend on the agent's actions, a case PBRS-based methods rule out, and the authors prove that the original optimal policy set is nonetheless preserved. Experiments on the extremely sparse Montezuma's Revenge demonstrate that ADOPS improves both exploration efficiency and final performance in settings where GRM and PIES struggle.
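
For context, classical PBRS restricts the shaping term to a discounted potential difference; the sketch below restates that standard form and its policy-invariance guarantee (the textbook result of Ng, Harada, and Russell, 1999), which is the assumption ADOPS relaxes. The notation here is standard, not taken from the paper.

```latex
% Classical potential-based reward shaping (Ng et al., 1999):
% the shaping reward is a discounted potential difference, and
% optimizing R + F yields the same optimal policies as R alone,
% for any bounded potential function \Phi.
\[
  F(s, a, s') = \gamma\,\Phi(s') - \Phi(s),
  \qquad
  \pi^* \text{ optimal for } R + F
  \iff
  \pi^* \text{ optimal for } R .
\]
```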

📝 Abstract
Recent RL research has utilized reward shaping, particularly complex shaping rewards such as intrinsic motivation (IM), to encourage agent exploration in sparse-reward environments. While often effective, these rewards are vulnerable to "reward hacking", where the shaping reward is optimized at the expense of the extrinsic reward, resulting in a suboptimal policy. Potential-Based Reward Shaping (PBRS) techniques such as Generalized Reward Matching (GRM) and Policy-Invariant Explicit Shaping (PIES) have mitigated this. These methods allow for implementing IM without altering optimal policies. In this work we show that they are effectively unsuitable for complex, exploration-heavy environments with long-duration episodes. To remedy this, we introduce Action-Dependent Optimality Preserving Shaping (ADOPS), a method of converting intrinsic rewards to an optimality-preserving form that allows agents to utilize IM more effectively in the extremely sparse environment of Montezuma's Revenge. We also prove ADOPS accommodates reward shaping functions that cannot be written in a potential-based form: while PBRS-based methods require that the cumulative discounted intrinsic return be independent of actions, ADOPS allows intrinsic cumulative returns to depend on agents' actions while still preserving the optimal policy set. We show how action-dependence enables ADOPS to preserve optimality while learning in complex, sparse-reward environments where other methods struggle.
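
The action-independence the abstract refers to follows from a standard telescoping argument, sketched below; this derivation is textbook PBRS material, not reproduced from the paper.

```latex
% Along any trajectory s_0, a_0, s_1, a_1, ..., the discounted
% PBRS shaping return telescopes:
\[
  \sum_{t=0}^{\infty} \gamma^{t} F(s_t, a_t, s_{t+1})
  = \sum_{t=0}^{\infty} \gamma^{t}
      \bigl( \gamma\,\Phi(s_{t+1}) - \Phi(s_t) \bigr)
  = -\,\Phi(s_0),
\]
% assuming \gamma < 1 and bounded \Phi, so that
% \gamma^{T}\Phi(s_T) \to 0. The cumulative intrinsic return thus
% depends only on the start state, never on the actions taken:
% this is precisely the restriction ADOPS lifts.
```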
Problem

Research questions and friction points this paper is trying to address.

Addresses reward hacking in RL with intrinsic motivation
Improves exploration in sparse-reward, long-duration environments
Enables action-dependent shaping without policy distortion
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces Action-Dependent Optimality-Preserving Shaping (ADOPS)
Allows cumulative intrinsic returns to depend on the agent's actions
Preserves the original optimal policy set in sparse-reward environments (see the condition sketched below)
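
As a minimal formal reading of "preserves the optimal policy set", consistent with the abstract but using our own notation (F_ADOPS is illustrative, not the paper's symbol): the shaped and original optimal action sets must coincide in every state, even though the intrinsic return may be action-dependent.

```latex
% Optimality preservation: with extrinsic reward R and shaped
% reward R + F_ADOPS, the greedy action sets under the optimal
% Q-functions must agree in every state s:
\[
  \arg\max_{a}\, Q^{*}_{R + F_{\mathrm{ADOPS}}}(s, a)
  \;=\;
  \arg\max_{a}\, Q^{*}_{R}(s, a)
  \qquad \forall\, s \in \mathcal{S}.
\]
% Unlike PBRS, F_ADOPS may make cumulative intrinsic returns
% action-dependent, so long as this argmax equality holds.
```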