🤖 AI Summary
DQN’s target Q-value updates often bootstrap from next states generated under a policy-inconsistent behavior policy, leading to high variance and distorted learning signals. To address this, we propose the Successor-state Aggregation Deep Q-Network (SADQ), which introduces three key innovations: (i) explicit modeling of stochastic environment dynamics; (ii) expectation-based target Q-value estimation over the successor-state distribution, yielding low-variance, unbiased value updates; and (iii) improved policy consistency via successor-state prediction. SADQ integrates deep Q-networks with stochastic transition modeling and prioritized experience replay. Evaluated on standard RL benchmarks and real-world vector-based control tasks, SADQ consistently outperforms DQN and its major variants, achieving faster convergence, greater training stability, and stronger final performance.
📝 Abstract
Deep Q-Networks (DQNs) estimate future returns by learning from transitions sampled from a replay buffer. However, the target updates in DQN often rely on next states generated by actions from past, potentially suboptimal, policies. As a result, these states may not provide informative learning signals, introducing high variance into the update process. This issue is exacerbated when the sampled transitions are poorly aligned with the agent's current policy. To address this limitation, we propose the Successor-state Aggregation Deep Q-Network (SADQ), which explicitly models environment dynamics using a stochastic transition model. SADQ integrates successor-state distributions into the Q-value estimation process, enabling more stable and policy-aligned value updates. In addition, it explores a more efficient action-selection strategy based on the modeled transition structure. We provide theoretical guarantees that SADQ maintains unbiased value estimates while reducing training variance. Extensive empirical results across standard RL benchmarks and real-world vector-based control tasks demonstrate that SADQ consistently outperforms DQN variants in both stability and learning efficiency.
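The core idea of integrating successor-state distributions into the target can be illustrated in a toy tabular setting. The sketch below is not the paper's implementation: the transition model `p`, the state/action counts, and the target table `q_target` are all hypothetical placeholders. It contrasts the standard DQN target, which bootstraps from the single sampled next state, with an expectation-based target that averages the bootstrap over a modeled distribution p(s' | s, a).

```python
import numpy as np

# Toy setup (illustrative only): 3 states, 2 actions.
n_states, n_actions, gamma = 3, 2, 0.99
rng = np.random.default_rng(0)

# Stand-in for the target network's Q-value outputs.
q_target = rng.standard_normal((n_states, n_actions))

# Learned stochastic transition model: p[s, a] is a distribution over s'.
p = rng.random((n_states, n_actions, n_states))
p /= p.sum(axis=-1, keepdims=True)

def dqn_target(r, s_next):
    """Standard DQN target: bootstrap from the one sampled next state."""
    return r + gamma * q_target[s_next].max()

def expectation_target(r, s, a):
    """Expectation-based target: average max_a' Q(s', a') over the
    modeled successor-state distribution p(s' | s, a)."""
    return r + gamma * p[s, a] @ q_target.max(axis=1)
```

Because the expectation marginalizes out the sampled next state, the target no longer fluctuates with which successor happened to be drawn, which is the variance-reduction effect the abstract describes.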