MAD-TD: Model-Augmented Data stabilizes High Update Ratio RL

📅 2024-10-11
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
To address training instability in deep reinforcement learning at high update-to-data (UTD) ratios, which stems largely from poor value function generalization and value overestimation, this paper proposes a lightweight TD learning framework augmented with a learned world model. The method mixes a small amount of synthetic data from a learnable dynamics model directly into the off-policy TD update, removing the need for periodic network parameter resets while mitigating value overestimation, all at negligible computational overhead. The result is improved value function generalization and training stability: on the most challenging tasks in the DeepMind Control Suite the framework achieves competitive, state-of-the-art performance and remains robust in low-sample and continual learning settings.

📝 Abstract
Building deep reinforcement learning (RL) agents that find a good policy with few samples has proven notoriously challenging. To achieve sample efficiency, recent work has explored updating neural networks with large numbers of gradient steps for every new sample. While such high update-to-data (UTD) ratios have shown strong empirical performance, they also introduce instability to the training process. Previous approaches need to rely on periodic neural network parameter resets to address this instability, but restarting the training process is infeasible in many real-world applications and requires tuning the resetting interval. In this paper, we focus on one of the core difficulties of stable training with limited samples: the inability of learned value functions to generalize to unobserved on-policy actions. We mitigate this issue directly by augmenting the off-policy RL training process with a small amount of data generated from a learned world model. Our method, Model-Augmented Data for TD Learning (MAD-TD), uses small amounts of generated data to stabilize high UTD training and achieve competitive performance on the most challenging tasks in the DeepMind control suite. Our experiments further highlight the importance of employing a good model to generate data, MAD-TD's ability to combat value overestimation, and its practical stability gains for continued learning.
Problem

Research questions and friction points this paper is trying to address.

Stabilizes high update-to-data ratio reinforcement learning training
Addresses value function generalization to unobserved actions
Reduces instability without periodic neural network resets
Innovation

Methods, ideas, or system contributions that make the work stand out.

Model-augmented data stabilizes high update-ratio training
Generated data from learned world model mitigates instability
Small generated data combats value overestimation in training
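The core mechanism described above, mixing a small fraction of model-generated transitions into each off-policy TD batch, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the transition format, and the default synthetic fraction are assumptions for the sketch.

```python
import numpy as np


def mix_batch(real_batch, model_batch, synthetic_fraction=0.05, rng=None):
    """Replace a small fraction of a real replay batch with model-generated
    transitions (illustrative sketch of MAD-TD's data-mixing step).

    The TD update then trains on mostly real data, plus a few synthetic
    transitions whose actions come from the current policy, which is how
    the paper improves value generalization to unobserved on-policy actions.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = len(real_batch)
    # At least one synthetic transition per batch in this sketch.
    k = max(1, int(round(synthetic_fraction * n)))
    swap_idx = rng.choice(n, size=k, replace=False)
    mixed = list(real_batch)
    for i, j in enumerate(swap_idx):
        mixed[j] = model_batch[i % len(model_batch)]
    return mixed


def td_target(reward, next_value, done, gamma=0.99):
    """Standard one-step TD target: r + gamma * V(s') for non-terminal steps."""
    return reward + gamma * (1.0 - done) * next_value
```

With a batch of 20 real transitions and the 5% default, exactly one transition is replaced by a synthetic one; the TD targets for both real and synthetic transitions are then computed identically, which is what keeps the augmentation lightweight.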