β-DQN: Improving Deep Q-Learning By Evolving the Behavior

📅 2025-01-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address low exploration efficiency, poor policy interpretability, and high computational overhead in reinforcement learning, this paper proposes β-DQN. It models state-dependent action generation probabilities via a behavior function β to construct a diverse population of policies and introduces an adaptive meta-controller for online, policy-level selection. Methodologically, it innovatively integrates a lightweight, behavior-statistics-based exploration mechanism, state-action coverage guidance, and Q-value overestimation correction—jointly enhancing exploration quality and interpretability. Empirically, β-DQN significantly outperforms baselines—including ε-greedy, NoisyNet, and Bootstrapped DQN—across diverse exploration-intensive tasks. It achieves improved sample efficiency and final performance with minimal additional computational cost, demonstrating both the effectiveness and practicality of policy-level exploration modeling.

📝 Abstract
While many sophisticated exploration methods have been proposed, their lack of generality and high computational cost often lead researchers to favor simpler methods like ε-greedy. Motivated by this, we introduce β-DQN, a simple and efficient exploration method that augments the standard DQN with a behavior function β. This function estimates the probability that each action has been taken at each state. By leveraging β, we generate a population of diverse policies that balance exploration between state-action coverage and overestimation bias correction. An adaptive meta-controller is designed to select an effective policy for each episode, enabling flexible and explainable exploration. β-DQN is straightforward to implement and adds minimal computational overhead to the standard DQN. Experiments on both simple and challenging exploration domains show that β-DQN outperforms existing baseline methods across a wide range of tasks, providing an effective solution for improving exploration in deep reinforcement learning.
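The pipeline the abstract describes — estimate a behavior function β from past action statistics, derive a population of policies that trade off greedy Q-maximization against state-action coverage, and let a meta-controller pick one policy per episode — can be sketched in tabular form. Everything below is an illustrative assumption, not the paper's exact formulation: the bonus form `Q + λ(1 − β)`, the ε-greedy bandit meta-controller, and all names and shapes are hypothetical stand-ins for the neural-network components used in β-DQN.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 5, 3

# Hypothetical tabular stand-ins: in the paper, Q and beta are learned
# networks; here Q is random and beta comes from simple visit counts.
Q = rng.normal(size=(n_states, n_actions))
action_counts = np.ones((n_states, n_actions))

def beta(s):
    """Estimated probability that each action has been taken in state s."""
    return action_counts[s] / action_counts[s].sum()

def make_policy(lam):
    """A policy mixing exploitation with coverage: actions rarely taken
    (low beta) receive an exploration bonus scaled by lam (assumed form)."""
    def policy(s):
        return int(np.argmax(Q[s] + lam * (1.0 - beta(s))))
    return policy

# A small population of policies, from purely greedy (lam = 0)
# to strongly coverage-seeking (large lam).
population = [make_policy(lam) for lam in np.linspace(0.0, 2.0, 5)]

# Meta-controller sketch: an epsilon-greedy bandit over policies,
# choosing one policy per episode from average past episode returns.
returns = np.zeros(len(population))
counts = np.zeros(len(population))

def select_policy(eps=0.1):
    if rng.random() < eps or counts.min() == 0:
        return int(rng.integers(len(population)))
    return int(np.argmax(returns / np.maximum(counts, 1)))

idx = select_policy()          # pick a policy for this episode
s = 0
a = population[idx](s)         # act with the selected policy
action_counts[s, a] += 1       # update behavior statistics after acting
```

In a full agent, all transitions gathered by whichever policy ran would feed one shared replay buffer for standard DQN updates, which is what keeps the added overhead minimal.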
Problem

Research questions and friction points this paper is trying to address.

Deep Learning Efficiency
Reinforcement Learning
Exploration-Exploitation Balance
Innovation

Methods, ideas, or system contributions that make the work stand out.

β-DQN
Policy Diversification
Automatic Strategy Selection
Hongming Zhang
University of Alberta and Amii, Edmonton, Canada
Fengshuo Bai
Shanghai Jiao Tong University
Embodied AI · AI Alignment · Reinforcement Learning · Preference-based Learning
Chenjun Xiao
CUHK-Shenzhen, Shenzhen, China
Chao Gao
Edmonton Research Center, Huawei, Edmonton, Canada
Bo Xu
CASIA, Beijing, China
Martin Müller
University of Alberta and Amii, Edmonton, Canada