🤖 AI Summary
This paper investigates the theoretical soundness and empirical efficacy of jointly learning asymmetric state-value (V) and action-value (Q, or advantage A) functions in temporal-difference control. It analyzes the two existing families of such dual-bootstrapping methods, QV-learning and AV-learning, in terms of convergence and sample efficiency, and introduces Regularized Dueling Q-learning (RDQ), a new AV-learning algorithm. The analysis reveals that using the state-value function as an intermediate representation improves sample efficiency in action-value estimation: both QV- and AV-based methods are more efficient than Expected Sarsa in the prediction setting, while only AV-learning offers a major benefit over Q-learning in control. Experiments on the MinAtar benchmark demonstrate that RDQ significantly outperforms Dueling DQN. The core contribution is the formal justification of asymmetric dual-value-function co-learning, unifying theoretical soundness with empirical gains in deep reinforcement learning.
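AV-learning methods represent action values through a state value plus per-action advantages. As a point of reference, the standard Dueling DQN aggregator combines the two streams by subtracting the mean advantage for identifiability; the sketch below illustrates that generic decomposition (it is not the paper's RDQ aggregator, and the function name is illustrative):

```python
import numpy as np

def dueling_q_values(v, a):
    """Combine a scalar state value v and a vector of per-action advantages a
    into action values. Subtracting the mean advantage pins down the otherwise
    unidentifiable split between V and A (the Dueling DQN convention)."""
    a = np.asarray(a, dtype=float)
    return v + a - a.mean()

# With v = 2.0 and advantages [1.0, -1.0, 0.0] (mean 0), the action values
# are simply v + a:
q = dueling_q_values(2.0, [1.0, -1.0, 0.0])
```

Note that adding a constant to `v` and subtracting it from every entry of `a` leaves `q` unchanged, which is exactly the degeneracy the mean-subtraction removes.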
📄 Abstract
The hallmark feature of temporal-difference (TD) learning is bootstrapping: using value predictions to generate new value predictions. The vast majority of TD methods for control learn a policy by bootstrapping from a single action-value function (e.g., Q-learning and Sarsa). Significantly less attention has been given to methods that bootstrap from two asymmetric value functions: i.e., methods that learn state values as an intermediate step in learning action values. Existing algorithms in this vein can be categorized as either QV-learning or AV-learning. Though these algorithms have been investigated to some degree in prior work, it remains unclear if and when it is advantageous to learn two value functions instead of just one -- and whether such approaches are theoretically sound in general. In this paper, we analyze these algorithmic families in terms of convergence and sample efficiency. We find that while both families are more efficient than Expected Sarsa in the prediction setting, only AV-learning methods offer any major benefit over Q-learning in the control setting. Finally, we introduce a new AV-learning algorithm called Regularized Dueling Q-learning (RDQ), which significantly outperforms Dueling DQN in the MinAtar benchmark.
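To make the "two asymmetric value functions" idea concrete, a minimal tabular sketch of a QV-style update is shown below: both tables bootstrap from the same state-value target, so V serves as the intermediate step in learning Q. This is a generic illustration under assumed hyperparameters, not the paper's exact algorithm:

```python
import numpy as np

def qv_learning_step(Q, V, s, a, r, s_next, alpha=0.1, gamma=0.99):
    # Both updates bootstrap from the state-value target r + gamma * V(s'),
    # rather than from max_a Q(s', a) as in Q-learning.
    target = r + gamma * V[s_next]
    V[s] += alpha * (target - V[s])        # prediction: V regresses on the target
    Q[s, a] += alpha * (target - Q[s, a])  # control: Q regresses on the same target

# Tiny example: 2 states, 2 actions, one transition (s=0, a=1, r=1.0, s'=1).
Q = np.zeros((2, 2))
V = np.zeros(2)
qv_learning_step(Q, V, s=0, a=1, r=1.0, s_next=1)
```

After one step both V[0] and Q[0, 1] move a fraction alpha toward the target of 1.0, while the untaken action's value Q[0, 0] is untouched.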