Deep Reinforcement Learning with Gradient Eligibility Traces

📅 2025-07-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the divergence of off-policy semi-gradient TD methods under nonlinear function approximation, and the slow credit assignment of one-step methods, this paper extends the Gradient TD (GTD) framework to the multi-step λ-return setting. It formulates a multi-step variant of the Generalized Projected Bellman Error (GPBE), derives three gradient-based methods that optimize this objective, and provides both a forward-view formulation compatible with experience replay and a backward-view formulation with gradient eligibility traces suited to streaming learning with deep neural networks, while retaining the principled objective that gives GTD methods their convergence guarantees. Empirical evaluation shows the proposed algorithms outperform PPO on MuJoCo and StreamQ on MinAtar in both stability and sample efficiency. The implementation is publicly available.
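The backward-view mechanism the paper builds on can be illustrated with the classical TD(λ) update under linear function approximation. This is a minimal sketch of accumulating eligibility traces only; the paper's gradient-TD variant additionally maintains a second weight vector and a gradient-correction term, which are not shown here, and all names and constants below are illustrative.

```python
# Minimal sketch: backward-view TD(lambda) with accumulating eligibility
# traces and linear function approximation. The paper's method extends
# this mechanism to a gradient-TD objective (GPBE); this sketch shows
# only the classical trace update it generalizes.
import numpy as np

def td_lambda_step(w, z, phi, phi_next, reward,
                   alpha=0.1, gamma=0.99, lam=0.9):
    """One backward-view update; returns new weights and trace."""
    delta = reward + gamma * (phi_next @ w) - phi @ w  # TD error
    z = gamma * lam * z + phi                          # accumulate trace
    w = w + alpha * delta * z                          # trace-weighted step
    return w, z

# Toy one-hot features for a single transition with reward 1.
w = np.zeros(3)
z = np.zeros(3)
phi = np.array([1.0, 0.0, 0.0])
phi_next = np.array([0.0, 1.0, 0.0])
w, z = td_lambda_step(w, z, phi, phi_next, reward=1.0)
print(w)  # weights move toward the observed reward along the trace
```

The trace `z` spreads each TD error backward over recently visited features, which is what makes multi-step credit assignment cheap in the streaming setting.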

📝 Abstract
Achieving fast and stable off-policy learning in deep reinforcement learning (RL) is challenging. Most existing methods rely on semi-gradient temporal-difference (TD) methods for their simplicity and efficiency, but are consequently susceptible to divergence. While more principled approaches like Gradient TD (GTD) methods have strong convergence guarantees, they have rarely been used in deep RL. Recent work introduced the Generalized Projected Bellman Error (GPBE), enabling GTD methods to work efficiently with nonlinear function approximation. However, this work is limited to one-step methods, which are slow at credit assignment and require a large number of samples. In this paper, we extend the GPBE objective to support multistep credit assignment based on the λ-return and derive three gradient-based methods that optimize this new objective. We provide both a forward-view formulation compatible with experience replay and a backward-view formulation compatible with streaming algorithms. Finally, we evaluate the proposed algorithms and show that they outperform both PPO and StreamQ in MuJoCo and MinAtar environments, respectively. Code available at https://github.com/esraaelelimy/gtd_algos
Problem

Research questions and friction points this paper is trying to address.

Achieving fast, stable off-policy learning in deep RL
Extending the GPBE objective to multistep credit assignment
Outperforming PPO and StreamQ in stability and sample efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Extends GPBE for multistep credit assignment
Derives three gradient-based optimization methods
Compatible with experience replay and streaming
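The forward-view, replay-compatible side relies on the λ-return, a geometric mixture of n-step returns computed over a stored trajectory. The sketch below shows the standard backward recursion for λ-returns over recorded rewards and value estimates; the rewards, values, and constants are illustrative, and the paper's actual learning targets come from the multi-step GPBE objective rather than plain value regression.

```python
# Hedged sketch: computing lambda-returns over a stored trajectory via
# the backward recursion G_t = r_t + gamma * ((1-lam)*V(s_{t+1}) + lam*G_{t+1}).
# This is the standard forward-view target; the paper optimizes a
# multi-step GPBE objective built on this quantity.
def lambda_returns(rewards, values, gamma=0.99, lam=0.95):
    """values[t] estimates V(s_{t+1}); values[-1] is the terminal bootstrap."""
    G = values[-1]
    out = [0.0] * len(rewards)
    for t in reversed(range(len(rewards))):
        G = rewards[t] + gamma * ((1 - lam) * values[t] + lam * G)
        out[t] = G
    return out

# Toy 3-step trajectory with illustrative rewards and value estimates.
g = lambda_returns([1.0, 0.0, 1.0], [0.5, 0.5, 0.0])
print(g)
```

Computing the whole batch of targets this way is what makes the forward view a natural fit for experience replay, while the backward view with traces serves the streaming setting.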