🤖 AI Summary
Deep reinforcement learning with high update-to-data (UTD) ratios is sample-efficient but notoriously unstable, largely because learned value functions generalize poorly to unobserved on-policy actions and tend to overestimate values. We propose a lightweight framework that mixes a small amount of synthetic data generated by a learned world model directly into the off-policy TD update, removing the need for periodic network parameter resets and mitigating value overestimation, with negligible computational overhead. The approach improves value function generalization and training stability, achieves competitive performance on the most challenging tasks in the DeepMind Control Suite, and remains robust in low-sample and continual learning settings.
📝 Abstract
Building deep reinforcement learning (RL) agents that find a good policy with few samples has proven notoriously challenging. To achieve sample efficiency, recent work has explored updating neural networks with large numbers of gradient steps for every new sample. While such high update-to-data (UTD) ratios have shown strong empirical performance, they also introduce instability to the training process. Previous approaches rely on periodic neural network parameter resets to address this instability, but restarting the training process is infeasible in many real-world applications and requires tuning the resetting interval. In this paper, we focus on one of the core difficulties of stable training with limited samples: the inability of learned value functions to generalize to unobserved on-policy actions. We mitigate this issue directly by augmenting the off-policy RL training process with a small amount of data generated from a learned world model. Our method, Model-Augmented Data for TD Learning (MAD-TD), uses small amounts of generated data to stabilize high UTD training and achieve competitive performance on the most challenging tasks in the DeepMind Control Suite. Our experiments further highlight the importance of employing a good model to generate data, MAD-TD's ability to combat value overestimation, and its practical stability gains for continued learning.
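The core idea described above — performing many TD updates per real transition while mixing in a small fraction of transitions generated by a learned world model — can be illustrated with a minimal sketch. Everything here is hypothetical scaffolding, not the paper's implementation: a toy 1-D environment, a fixed policy, a hand-perturbed "world model" standing in for a learned dynamics model, a linear value function, and illustrative choices of the UTD ratio and synthetic-data fraction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D environment (illustrative, not from the paper):
# next state is a damped transition plus the action; reward penalizes |state|.
def env_step(s, a):
    return 0.9 * s + a, -s**2

# Stand-in for a learned world model: the true dynamics plus small noise.
# In MAD-TD the model is learned from data; this is only a placeholder.
def model_step(s, a):
    return 0.9 * s + a + 0.01 * rng.standard_normal(), -s**2

# Linear value function V(s) = w * s^2, trained by TD(0).
w = 0.0
gamma, lr = 0.99, 0.1

def td_update(w, s, r, s_next):
    # Standard TD(0) update on the squared-state feature.
    td_error = r + gamma * w * s_next**2 - w * s**2
    return w + lr * td_error * s**2

# High-UTD loop: several gradient updates per environment step, with a
# small fraction of updates drawn from model-generated transitions
# (the mixing ratio and UTD value are illustrative).
real_buffer = []
s = 1.0
for step in range(200):
    a = -0.5 * s  # fixed policy for illustration
    s_next, r = env_step(s, a)
    real_buffer.append((s, a, r, s_next))
    for _ in range(8):  # UTD ratio of 8
        sb, ab, rb, sn = real_buffer[rng.integers(len(real_buffer))]
        if rng.random() < 0.05:  # ~5% synthetic data
            sn, rb = model_step(sb, ab)  # re-generate outcome from the model
        w = td_update(w, sb, rb, sn)
    s = s_next
```

Under this policy the discounted return from state `s` is roughly `-1.19 * s**2`, so `w` should settle near that value; the point of the sketch is only that the TD update consumes real and model-generated transitions identically, with the model contributing a small, controlled share of the updates.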