🤖 AI Summary
This work proposes a novel iterative temporal-difference (TD) algorithm that addresses the instability of conventional semi-gradient TD methods and the inefficiency of existing gradient-based TD approaches. By introducing full-gradient computation with respect to a moving target within an iterative framework—combined with a Bellman-operator-driven sequence of action-value functions learned in parallel—the method achieves both theoretical convergence guarantees and substantially improved learning speed. Empirical evaluations demonstrate that the algorithm matches the sample efficiency of leading semi-gradient methods on complex benchmarks such as Atari while maintaining stable training dynamics. According to the authors, this is the first gradient TD method to simultaneously achieve high performance and stability in high-dimensional reinforcement learning tasks.
📝 Abstract
Temporal-difference (TD) learning is highly effective at evaluating and controlling an agent's long-term outcomes. Most approaches in this paradigm implement a semi-gradient update to boost learning speed, ignoring the gradient of the bootstrapped estimate. While popular, this type of update is prone to divergence, as Baird's counterexample illustrates. Gradient TD methods were introduced to overcome this issue, but have not been widely adopted, potentially because they learn more slowly than semi-gradient methods. Recently, iterated TD learning was developed to increase the learning speed of TD methods. It learns a sequence of action-value functions in parallel, where each function is optimized to represent the application of the Bellman operator to the previous function in the sequence. While promising, this algorithm can be unstable due to its semi-gradient nature, as each function tracks a moving target. In this work, we modify iterated TD learning by computing gradients through those moving targets, aiming to build a powerful gradient TD method that competes with semi-gradient methods. Our evaluation reveals that this algorithm, called Gradient Iterated Temporal-Difference learning, achieves learning speed competitive with semi-gradient methods across various benchmarks, including Atari games, a result that no prior work on gradient TD methods has demonstrated.
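To make the "gradient through a moving target" idea concrete, here is a minimal, purely illustrative sketch with linear function approximation. It is not the paper's actual algorithm: the sequence length, feature setup, and the decision to train the first function with an ordinary TD target are assumptions. Each function `k` regresses toward a Bellman target built from function `k-1`, and, unlike a semi-gradient update, the squared-error gradient also flows into the target's parameters `W[k-1]`.

```python
import numpy as np

rng = np.random.default_rng(0)
n_features, n_funcs = 4, 3   # illustrative: 3 value functions in the iterated sequence
gamma, lr = 0.9, 0.1

# One weight vector per function in the sequence (linear value estimates)
W = rng.normal(size=(n_funcs, n_features))

def q(w, phi):
    """Linear action-value estimate for a state-action feature vector phi."""
    return w @ phi

def update(W, phi, phi_next, reward):
    """One full-gradient step per function in the iterated sequence.

    Function k fits the Bellman target r + gamma * q_{k-1}(s'), and the
    gradient of the squared error is taken with respect to BOTH W[k]
    (the predictor) and W[k-1] (the moving target), unlike semi-gradient TD.
    """
    W = W.copy()
    for k in range(1, n_funcs):
        target = reward + gamma * q(W[k - 1], phi_next)
        delta = target - q(W[k], phi)
        # d/dW[k]   of 0.5*delta^2 is -delta*phi        -> gradient-descent step:
        W[k] += lr * delta * phi
        # d/dW[k-1] of 0.5*delta^2 is gamma*delta*phi'  -> target-side step:
        W[k - 1] -= lr * gamma * delta * phi_next
    # Assumption: the first function uses an ordinary TD-style target on itself.
    delta0 = reward + gamma * q(W[0], phi_next) - q(W[0], phi)
    W[0] += lr * delta0 * phi
    return W
```

The target-side step is what distinguishes this from the semi-gradient update, where `W[k-1]` would be treated as a constant inside the target; the actual method and its convergence analysis are in the paper.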