Adaptive Replay Buffer for Offline-to-Online Reinforcement Learning

📅 2025-12-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
Offline-to-online reinforcement learning (O2O RL) faces an imbalanced trade-off between offline data and online experience: fixed mixing ratios cannot simultaneously ensure early training stability and long-term performance improvement. To address this, we propose the Adaptive Replay Buffer (ARB), a training-free mechanism built on the first trajectory-level, learning-agnostic on-policyness metric based on policy consistency. ARB performs real-time, lightweight dynamic adjustment of sampling weights within the replay buffer as the policy evolves. It relies solely on behavior cloning error and action-output alignment, requiring no additional model training or gradient computation, and is fully compatible with mainstream O2O algorithms (e.g., CQL, BEAR). Evaluated on the D4RL benchmark, ARB significantly mitigates early performance degradation and improves final returns by an average of 18.7%. It exhibits strong generalization across tasks and incurs negligible computational overhead.

📝 Abstract
Offline-to-Online Reinforcement Learning (O2O RL) faces a critical dilemma in balancing the use of a fixed offline dataset with newly collected online experiences. Standard methods, often relying on a fixed data-mixing ratio, struggle to manage the trade-off between early learning stability and asymptotic performance. To overcome this, we introduce the Adaptive Replay Buffer (ARB), a novel approach that dynamically prioritizes data sampling based on a lightweight metric we call 'on-policyness'. Unlike prior methods that rely on complex learning procedures or fixed ratios, ARB is designed to be learning-free and simple to implement, seamlessly integrating into existing O2O RL algorithms. It assesses how closely collected trajectories align with the current policy's behavior and assigns a proportional sampling weight to each transition within that trajectory. This strategy effectively leverages offline data for initial stability while progressively focusing learning on the most relevant, high-rewarding online experiences. Our extensive experiments on D4RL benchmarks demonstrate that ARB consistently mitigates early performance degradation and significantly improves the final performance of various O2O RL algorithms, highlighting the importance of an adaptive, behavior-aware replay buffer design.
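The paper does not spell out the exact form of the metric here, but the mechanism the abstract describes can be sketched as follows. This is a minimal, hypothetical illustration, assuming the on-policyness score is an exponentiated negative L2 behavior-cloning error between the current policy's actions and a trajectory's logged actions; the names `on_policyness` and `AdaptiveReplayBuffer` are illustrative, not from the paper.

```python
import numpy as np

def on_policyness(policy_actions, logged_actions):
    """Hypothetical trajectory-level on-policyness score: the mean L2
    distance between the current policy's outputs and the trajectory's
    logged actions (a behavior-cloning-style error), mapped into (0, 1]
    via an exponential. No gradients or extra models are needed."""
    err = np.mean(np.linalg.norm(policy_actions - logged_actions, axis=-1))
    return float(np.exp(-err))

class AdaptiveReplayBuffer:
    """Sketch of a replay buffer that reweights whole trajectories by
    on-policyness; every transition in a trajectory shares its weight."""

    def __init__(self, seed=0):
        self.trajectories = []            # each: {'obs': ..., 'actions': ...}
        self.weights = np.zeros(0)
        self.rng = np.random.default_rng(seed)

    def add(self, trajectory):
        self.trajectories.append(trajectory)
        self.weights = np.append(self.weights, 1.0)

    def reweight(self, policy_fn):
        """Recompute sampling weights against the current policy.
        policy_fn maps a batch of observations to a batch of actions."""
        scores = np.array([
            on_policyness(policy_fn(t["obs"]), t["actions"])
            for t in self.trajectories
        ])
        self.weights = scores / scores.sum()

    def sample(self, batch_size):
        # Trajectories closer to the current policy are drawn more often.
        idx = self.rng.choice(len(self.trajectories), size=batch_size,
                              p=self.weights)
        return [self.trajectories[i] for i in idx]
```

In this sketch, `reweight` would be called periodically as the policy updates, so offline trajectories dominate sampling early (when the policy still resembles the behavior policy) and recent on-policy trajectories dominate later, matching the adaptive behavior the abstract describes.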
Problem

Research questions and friction points this paper is trying to address.

Balancing offline and online data in reinforcement learning
Managing trade-off between stability and performance
Adaptively prioritizing data sampling for improved learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adaptive Replay Buffer prioritizes data by on-policyness metric
Learning-free design integrates into existing offline-to-online algorithms
Dynamically balances offline stability with high-reward online experiences