Prioritized Trajectory Replay: A Replay Memory for Data-driven Reinforcement Learning

📅 2023-06-27
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
📄 PDF
🤖 AI Summary
In offline reinforcement learning, conventional single-step transition sampling fails to improve policy performance and often introduces out-of-distribution actions, causing training instability. To address this, we propose Trajectory-level Replay (TR), the first framework to extend prioritized sampling to complete trajectories. TR introduces a reverse-trajectory sampling strategy and a trajectory-level priority metric grounded in both TD error and cumulative return, effectively avoiding out-of-distribution action selection. Furthermore, we incorporate a weighted critic objective to mitigate distributional shift inherent in trajectory-level sampling. Evaluated on the D4RL benchmark, TR consistently enhances state-of-the-art algorithms—including BCQ and CQL—achieving average normalized score improvements of 12%–28%. These results empirically validate the effectiveness and generalizability of trajectory-level data utilization as a novel paradigm for offline RL.
📝 Abstract
In recent years, data-driven reinforcement learning (RL), also known as offline RL, have gained significant attention. However, the role of data sampling techniques in offline RL has been overlooked despite its potential to enhance online RL performance. Recent research suggests applying sampling techniques directly to state-transitions does not consistently improve performance in offline RL. Therefore, in this study, we propose a memory technique, (Prioritized) Trajectory Replay (TR/PTR), which extends the sampling perspective to trajectories for more comprehensive information extraction from limited data. TR enhances learning efficiency by backward sampling of trajectories that optimizes the use of subsequent state information. Building on TR, we build the weighted critic target to avoid sampling unseen actions in offline training, and Prioritized Trajectory Replay (PTR) that enables more efficient trajectory sampling, prioritized by various trajectory priority metrics. We demonstrate the benefits of integrating TR and PTR with existing offline RL algorithms on D4RL. In summary, our research emphasizes the significance of trajectory-based data sampling techniques in enhancing the efficiency and performance of offline RL algorithms.
Problem

Research questions and friction points this paper is trying to address.

Improving offline reinforcement learning with trajectory replay
Enhancing data sampling from limited offline datasets
Prioritizing trajectories to boost RL algorithm efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Trajectory Replay for comprehensive offline data extraction
Backward trajectory sampling optimizes subsequent state information usage
Prioritized sampling with trajectory priority metrics enhances efficiency
🔎 Similar Papers
No similar papers found.