Prioritized Generative Replay

📅 2024-10-23
🏛️ arXiv.org
📈 Citations: 2
Influential: 0
🤖 AI Summary
In online reinforcement learning, uniform experience replay is inefficient, while conventional prioritized replay risks overfitting due to excessive sampling of sparse high-value transitions. To address this, we propose a generative experience replay buffer grounded in conditional diffusion models, guided by a differentiable relevance function jointly driven by curiosity and value estimation—replacing explicit priority-based sampling with controllable synthesis of high-value, diverse experiences. This work represents the first deep integration of prioritized replay principles with parametric generative modeling, enabling explicit diversity control while preserving experience density. Experiments demonstrate that the approach significantly improves sample efficiency and policy performance across both state- and pixel-based environments, supports higher update-to-data ratios during training, and effectively mitigates overfitting.
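The guided synthesis described above can be illustrated with a minimal sketch of relevance-guided reverse diffusion: after each denoising step, the sample is nudged along the gradient of a differentiable relevance function. This is a toy stand-in, not the paper's implementation; the quadratic relevance, the `denoise_step` callable, and all names here are illustrative assumptions.

```python
import numpy as np

def relevance_grad(x, goal):
    # Gradient of a toy quadratic relevance r(x) = -||x - goal||^2
    # (a stand-in for a learned, differentiable curiosity/value score).
    return -2.0 * (x - goal)

def guided_denoise(x_T, denoise_step, goal, steps=50, guidance_scale=0.05):
    """Sketch of guidance in the reverse diffusion loop: each iteration applies
    a denoising step, then steers the sample toward high-relevance regions."""
    x = np.asarray(x_T, dtype=float)
    for t in reversed(range(steps)):
        x = denoise_step(x, t)                            # learned reverse step (stand-in here)
        x = x + guidance_scale * relevance_grad(x, goal)  # relevance guidance
    return x
```

With a trivial shrinkage denoiser such as `lambda x, t: 0.9 * x`, the guided sample drifts toward the goal region rather than simply toward the data mean, which is the mechanism the paper uses to bias generations toward useful experience.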

📝 Abstract
Sample-efficient online reinforcement learning often uses replay buffers to store experience for reuse when updating the value function. However, uniform replay is inefficient, since certain classes of transitions can be more relevant to learning. While prioritization of more useful samples is helpful, this strategy can also lead to overfitting, as useful samples are likely to be more rare. In this work, we instead propose a prioritized, parametric version of an agent's memory, using generative models to capture online experience. This paradigm enables (1) densification of past experience, with new generations that benefit from the generative model's generalization capacity and (2) guidance via a family of "relevance functions" that push these generations towards more useful parts of an agent's acquired history. We show this recipe can be instantiated using conditional diffusion models and simple relevance functions such as curiosity- or value-based metrics. Our approach consistently improves performance and sample efficiency in both state- and pixel-based domains. We expose the mechanisms underlying these gains, showing how guidance promotes diversity in our generated transitions and reduces overfitting. We also showcase how our approach can train policies with even higher update-to-data ratios than before, opening up avenues to better scale online RL agents.
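The "parametric memory" idea in the abstract can be sketched as a replay buffer whose `sample()` returns synthetic transitions drawn from a generator fit to the agent's experience, rather than stored transitions verbatim. This is a minimal sketch under stated assumptions: the Gaussian-jitter "generator" below is a stand-in for the paper's conditional diffusion model, and the class and parameter names are illustrative.

```python
import numpy as np

class GenerativeReplayBuffer:
    """Toy parametric memory: add() collects experience (in the paper, this
    would train a conditional diffusion model), and sample() emits synthetic
    transitions near the collected data instead of replaying it verbatim."""

    def __init__(self, noise=0.05, seed=0):
        self.data = []
        self.noise = noise
        self.rng = np.random.default_rng(seed)

    def add(self, transition):
        # Store a flat transition vector, e.g. (state, action, reward, next_state).
        self.data.append(np.asarray(transition, dtype=float))

    def sample(self, batch_size):
        # Stand-in generator: resample stored transitions with Gaussian jitter,
        # densifying experience around what the agent has actually seen.
        idx = self.rng.integers(0, len(self.data), size=batch_size)
        base = np.stack([self.data[i] for i in idx])
        return base + self.noise * self.rng.standard_normal(base.shape)
```

Because generations need not coincide with stored samples, the buffer can emit more (and more varied) high-value transitions than were ever observed, which is what the abstract calls densification.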
Problem

Research questions and friction points this paper is trying to address.

Improving sample efficiency in online reinforcement learning
Reducing overfitting in prioritized experience replay
Enhancing diversity in generated transitions using guidance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Prioritized generative replay for efficient learning
Conditional diffusion models for experience generation
Relevance functions guide diverse transition generation
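The curiosity- and value-based relevance functions named above can be sketched as a simple weighted blend: a value estimate for the state plus a curiosity bonus measured as the prediction error of a dynamics model. All function names, the blend weight `alpha`, and the transition layout are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def relevance(transition, value_fn, pred_model, alpha=0.5):
    """Toy relevance score for a (state, action, next_state) transition:
    blends a value estimate with a curiosity bonus (dynamics prediction
    error), so guidance can favor both useful and novel experience."""
    s, a, s_next = transition
    value = value_fn(s)                                    # value-based term
    curiosity = np.linalg.norm(s_next - pred_model(s, a))  # curiosity term
    return alpha * value + (1.0 - alpha) * curiosity
```

Sweeping `alpha` trades off exploiting known high-value regions against generating novel transitions, which is one way the diversity-control knob described in the summary could be exposed.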