🤖 AI Summary
This paper identifies a fundamental inconsistency between discounted and total return optimization in deep reinforcement learning: increasing the discount factor does not necessarily eliminate policy bias in environments with cyclic state transitions. Method: Using Markov decision process modeling and rigorous theoretical analysis, the authors quantitatively characterize the sources of performance deviation when optimizing discounted returns as a proxy for total returns. They derive two verifiable sufficient conditions under which optimal policies for both objectives coincide, and establish quantitative relationships linking the discount factor to structural properties of the environment—specifically cycle length and reward distribution. Results: Empirical validation on classic control benchmarks and Atari games confirms the effectiveness of the proposed conditions, yielding substantial improvements in total-return performance. The work provides both theoretical foundations and practical guidance for discount factor selection and objective alignment in RL.
📝 Abstract
In deep reinforcement learning applications, maximizing discounted reward is often used in place of maximizing total reward to ensure algorithmic convergence and stability, even though the performance metric used to evaluate the policy remains the total reward. However, the optimal policies for these two objectives are not always consistent. To address this issue, we analyzed the suboptimality of the policy obtained by maximizing discounted reward relative to the policy that maximizes total reward, and identified how hyperparameters influence this gap. We further proposed sufficient conditions under which the optimal policies of the two objectives coincide in various settings. The primary contributions are as follows: we theoretically analyzed the factors that affect performance when discounted reward is used as a proxy for total reward, deepening the theoretical understanding of this setting; and we developed methods to align the optimal policies of the two objectives in certain situations, which can improve the performance of reinforcement learning algorithms.
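The misalignment the abstract describes can be seen in a tiny episodic MDP. The sketch below is purely illustrative and not a construction from the paper: the two reward sequences and the policy names are invented. A policy collecting a delayed reward of 2 beats a policy collecting an immediate reward of 1 under total return, yet loses under discounted return whenever the discount factor gamma is small enough.

```python
# Hypothetical two-policy episodic MDP (illustrative values, not from the paper).
# Shows that the discounted-return ranking of policies depends on gamma, while
# the total-return ranking is fixed.

POLICIES = {
    "immediate": [1.0],            # reward 1 now, then the episode terminates
    "delayed":   [0.0, 0.0, 2.0],  # reward 2 after two zero-reward steps
}

def total_return(rewards):
    """Undiscounted (total) return of a reward sequence."""
    return sum(rewards)

def discounted_return(rewards, gamma):
    """Discounted return: sum over t of gamma^t * r_t."""
    return sum((gamma ** t) * r for t, r in enumerate(rewards))

def best_policy_under(gamma):
    """Name of the policy that maximizes discounted return for this gamma."""
    return max(POLICIES, key=lambda name: discounted_return(POLICIES[name], gamma))

if __name__ == "__main__":
    # Under total return, "delayed" is optimal (2 > 1). Under discounted return,
    # "immediate" wins whenever 1 > 2 * gamma^2, i.e. gamma < 1/sqrt(2) ~ 0.707.
    for gamma in (0.5, 0.7, 0.71, 0.99):
        print(gamma, best_policy_under(gamma))
```

The crossover at gamma = 1/sqrt(2) in this toy case mirrors the paper's quantitative link between the discount factor and structural properties of the environment: the longer the delay before the larger reward (here, two steps), the larger the discount factor must be before the two objectives agree.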