Analyzing and Bridging the Gap between Maximizing Total Reward and Discounted Reward in Deep Reinforcement Learning

📅 2024-07-18
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper identifies a fundamental inconsistency between discounted and total return optimization in deep reinforcement learning: increasing the discount factor does not necessarily eliminate policy bias in environments with cyclic state transitions. Method: Using Markov decision process modeling and rigorous theoretical analysis, the authors quantitatively characterize the sources of performance deviation when optimizing discounted returns as a proxy for total returns. They derive two verifiable sufficient conditions under which optimal policies for both objectives coincide, and establish quantitative relationships linking the discount factor to structural properties of the environment—specifically cycle length and reward distribution. Results: Empirical validation on classic control benchmarks and Atari games confirms the effectiveness of the proposed conditions, yielding substantial improvements in total-return performance. The work provides both theoretical foundations and practical guidance for discount factor selection and objective alignment in RL.
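To make the inconsistency concrete, here is a minimal toy illustration (my own construction, not an example from the paper): an episodic task of horizon 50 where one policy stays in a reward cycle earning +1 per step, while another takes a one-off reward of 8 and terminates. The cyclic policy is optimal for total reward, but for a moderate discount factor the discounted objective prefers the terminating policy.

```python
def discounted_return(rewards, gamma):
    # G = sum_t gamma^t * r_t
    return sum(r * gamma**t for t, r in enumerate(rewards))

T = 50
rewards_A = [1.0] * T  # policy A: stays in the cycle, +1 per step
rewards_B = [8.0]      # policy B: one-off payoff, then terminates

for gamma in (0.8, 0.99):
    dA = discounted_return(rewards_A, gamma)
    dB = discounted_return(rewards_B, gamma)
    print(f"gamma={gamma}: A={dA:.2f}, B={dB:.2f}")

# Total returns: A = 50 > B = 8, so A is optimal under the evaluation metric.
# Under gamma = 0.8, A's discounted return is about 1/(1-0.8) = 5 < 8, so
# discounted optimization picks B; raising gamma toward 1 restores A here,
# though the paper shows this fix is not guaranteed in general cyclic MDPs.
```

This matches the paper's quantitative message that the discount factor must be set relative to structural properties such as cycle length and reward magnitudes, not just pushed close to 1.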

📝 Abstract
In deep reinforcement learning applications, maximizing discounted reward is often employed instead of maximizing total reward to ensure the convergence and stability of algorithms, even though the performance metric for evaluating the policy remains the total reward. However, the optimal policies corresponding to these two objectives may not always be consistent. To address this issue, we analyzed the suboptimality of the policy obtained through maximizing discounted reward in relation to the policy that maximizes total reward and identified the influence of hyperparameters. Additionally, we proposed sufficient conditions for aligning the optimal policies of these two objectives under various settings. The primary contributions are as follows: We theoretically analyzed the factors influencing performance when using discounted reward as a proxy for total reward, thereby enhancing the theoretical understanding of this scenario. Furthermore, we developed methods to align the optimal policies of the two objectives in certain situations, which can improve the performance of reinforcement learning algorithms.
Problem

Research questions and friction points this paper is trying to address.

Analyzes gap between total and discounted reward objectives in RL.
Proposes methods to align objectives in cyclic state environments.
Enhances RL performance by adjusting reward data and terminal state values.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Treats the terminal state value as a tunable hyperparameter rather than fixing it to zero.
Calibrates reward data along trajectories to align the two objectives.
Improves robustness to the choice of discount factor.
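The terminal-value idea above can be sketched with a standard backward return recursion in which the bootstrap value at termination, `v_term`, is exposed as a hyperparameter instead of being hard-coded to zero. This is a minimal illustration of the mechanism; the paper's actual calibration rule for choosing `v_term` and adjusting rewards is not reproduced here.

```python
def discounted_targets(rewards, gamma, v_term=0.0):
    """Backward recursion G_t = r_t + gamma * G_{t+1}, with G_T = v_term.

    Setting v_term > 0 raises the value of trajectories that terminate,
    shifting how the discounted objective ranks policies.
    """
    G = v_term
    targets = []
    for r in reversed(rewards):
        G = r + gamma * G
        targets.append(G)
    return targets[::-1]

# Example: a 2-step episode with rewards [1, 1], gamma = 0.5, v_term = 4
print(discounted_targets([1.0, 1.0], 0.5, v_term=4.0))  # -> [2.5, 3.0]
```

With `v_term = 0.0` this reduces to the ordinary discounted return target, so the modification is backward-compatible with standard TD-style training.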
Authors: Shuyu Yin, Fei Wen, Peilin Liu, Tao Luo