Double Q-learning for Value-based Deep Reinforcement Learning, Revisited

📅 2025-06-30
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Deep Q-learning suffers from systematic overestimation bias in its value estimates. Double DQN was inspired by Double Q-learning but only loosely adapts it: it reuses a lagged target network for evaluation rather than training two different Q-functions that bootstrap off one another. Method: The paper studies Deep Double Q-learning (DDQL), which carries this core idea over to deep value-based RL by maintaining two symmetric, mutually bootstrapping Q-networks, integrated with target networks, experience replay, and a minibatch sampling strategy, without introducing any additional hyperparameters. Contribution/Results: Evaluated across 57 Atari 2600 games, DDQL exhibits less overestimation bias than Double DQN and outperforms it in aggregate. The paper also examines DDQL's design space, including network architecture, replay ratio, and minibatch sampling strategy, showing that a faithful deep adaptation of Double Q-learning is both theoretically consistent and practically effective.
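
To make the distinction concrete, the bootstrap targets can be compared directly. These are the textbook formulations of each algorithm, not equations quoted from the paper:

```latex
\begin{align*}
\text{Q-learning:}\quad
  & y = r + \gamma \max_{a'} Q(s', a')\\[2pt]
\text{Double Q-learning (updating } Q^{A}\text{):}\quad
  & y = r + \gamma\, Q^{B}\!\bigl(s',\, \arg\max_{a'} Q^{A}(s', a')\bigr)\\[2pt]
\text{Double DQN:}\quad
  & y = r + \gamma\, Q_{\theta^{-}}\!\bigl(s',\, \arg\max_{a'} Q_{\theta}(s', a')\bigr)
\end{align*}
```

In Double Q-learning, Q^B is a second, independently trained Q-function, so the errors used for evaluation are decorrelated from those used for selection. In Double DQN, theta^- is only a periodically copied snapshot of theta, which is why the paper characterizes it as a loose adaptation.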

📝 Abstract
Overestimation is pervasive in reinforcement learning (RL), including in Q-learning, which forms the algorithmic basis for many value-based deep RL algorithms. Double Q-learning is an algorithm introduced to address Q-learning's overestimation by training two Q-functions and using both to de-correlate action-selection and action-evaluation in bootstrap targets. Shortly after Q-learning was adapted to deep RL in the form of deep Q-networks (DQN), Double Q-learning was adapted to deep RL in the form of Double DQN. However, Double DQN only loosely adapts Double Q-learning, forgoing the training of two different Q-functions that bootstrap off one another. In this paper, we study algorithms that adapt this core idea of Double Q-learning for value-based deep RL. We term such algorithms Deep Double Q-learning (DDQL). Our aim is to understand whether DDQL exhibits less overestimation than Double DQN and whether performant instantiations of DDQL exist. We answer both questions affirmatively, demonstrating that DDQL reduces overestimation and outperforms Double DQN in aggregate across 57 Atari 2600 games, without requiring additional hyperparameters. We also study several aspects of DDQL, including its network architecture, replay ratio, and minibatch sampling strategy.
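
A minimal sketch of what a DDQL-style update could look like, assuming PyTorch, a per-network target copy, and a uniformly random choice of which network each minibatch trains. The names (ddql_update, q_a, q_b, etc.) are illustrative, not the paper's code, and the paper's actual minibatch sampling strategy may differ:

```python
# Hypothetical DDQL-style update: two Q-functions bootstrap off one
# another, as in tabular Double Q-learning, with DQN-style target
# networks and experience replay layered on top.
import random
import torch
import torch.nn.functional as F

def ddql_update(q_a, q_b, target_a, target_b, optim_a, optim_b,
                batch, gamma=0.99):
    """One gradient step on a randomly chosen Q-network.

    batch: (obs, actions, rewards, next_obs, dones) tensors.
    Which network each minibatch trains is one of the sampling
    strategies the paper studies; a uniform coin flip is assumed here.
    """
    obs, act, rew, next_obs, done = batch
    if random.random() < 0.5:
        online, other_target, opt = q_a, target_b, optim_a
    else:
        online, other_target, opt = q_b, target_a, optim_b
    with torch.no_grad():
        # Decoupling: the network being updated selects the next action...
        next_act = online(next_obs).argmax(dim=1, keepdim=True)
        # ...while the *other* Q-function (via its target copy) evaluates it.
        next_q = other_target(next_obs).gather(1, next_act).squeeze(1)
        target = rew + gamma * (1.0 - done) * next_q
    q = online(obs).gather(1, act.unsqueeze(1)).squeeze(1)
    loss = F.smooth_l1_loss(q, target)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

Because the evaluating network is trained on different minibatches than the selecting network, its errors are less correlated with the argmax, which is what dampens the overestimation the max operator otherwise induces.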
Problem

Research questions and friction points this paper is trying to address.

Address overestimation in deep reinforcement learning
Compare Double DQN and Deep Double Q-learning performance
Evaluate DDQL's effectiveness across Atari 2600 games
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses two Q-functions to reduce overestimation
Adapts Double Q-learning for deep RL
Improves performance without extra hyperparameters