🤖 AI Summary
Potential-based reward shaping (PBRS) introduces bias in finite-horizon settings, degrading sample efficiency. Method: The paper presents a systematic analysis of PBRS bias under finite horizons and proposes a method that automatically constructs potential functions from state abstractions: interpretable abstractions yield theoretically grounded potential functions that suppress the bias at its source. The approach also replaces CNNs with a lightweight fully connected architecture. Results: Evaluated on a goal-oriented navigation task and three ALE games, it matches the performance of CNN-based baselines while reducing parameter count by ~60% and improving sample efficiency by 2.1–3.4×, combining theoretical interpretability with empirical sample efficiency.
📝 Abstract
The use of Potential-Based Reward Shaping (PBRS) has shown great promise in the ongoing research effort to tackle sample inefficiency in Reinforcement Learning (RL). However, the choice of the potential function is critical for this technique to be effective. Additionally, RL techniques are usually constrained to a finite horizon due to computational limitations, which introduces a bias when using PBRS and thus adds a further layer of complexity. In this paper, we leverage abstractions to automatically produce a "good" potential function. We analyse the bias induced by finite horizons in the context of PBRS, yielding novel insights. Finally, to assess sample efficiency and performance impact, we evaluate our approach on four environments, including a goal-oriented navigation task and three Arcade Learning Environment (ALE) games, demonstrating that we can reach the same level of performance as CNN-based solutions with a simple fully connected network.
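For intuition, PBRS augments the environment reward with a shaping term F(s, s') = γΦ(s') − Φ(s) derived from a potential function Φ. The sketch below illustrates this mechanism on a hypothetical 1-D navigation task where Φ is the negative distance to the goal; it is a generic illustration of PBRS, not the paper's abstraction-derived potential.

```python
def shaped_reward(r, s, s_next, phi, gamma=0.99):
    """Return the PBRS-augmented reward: r + gamma * phi(s') - phi(s)."""
    return r + gamma * phi(s_next) - phi(s)

# Hypothetical potential for a 1-D corridor with the goal at position 10:
# the closer to the goal, the higher the potential.
phi = lambda s: -abs(10 - s)

# A step toward the goal (4 -> 5) earns a positive shaping bonus,
# while a step away from it (5 -> 4) is penalised.
print(shaped_reward(0.0, 4, 5, phi, gamma=1.0))  # 1.0
print(shaped_reward(0.0, 5, 4, phi, gamma=1.0))  # -1.0
```

Because F is a difference of potentials, shaping rewards along any cycle telescope to zero in the infinite-horizon case, which is why PBRS preserves optimal policies there; the finite-horizon bias analysed in the paper arises precisely when this telescoping is cut off at the horizon.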