🤖 AI Summary
To address low sample efficiency and insufficient exploration in deep reinforcement learning, we propose a unified framework that integrates offline pretraining with online fine-tuning. Methodologically, we design a meta-policy mechanism that seamlessly unifies offline and online trajectories, and our theoretical analysis shows that it provably enhances exploration. We further introduce out-of-distribution (OOD) action suppression, drawing on the conservative principles of BCQ and BEAR, to stabilize the Q-function without auxiliary modules. Evaluated on 28 tasks from the D4RL and V-D4RL benchmarks, our approach consistently surpasses state-of-the-art offline and hybrid RL methods while incurring lower computational overhead and exhibiting better generalization and training stability. Key contributions: (i) the first meta-policy-driven hybrid RL paradigm; (ii) a theoretically grounded exploration-enhancement mechanism; and (iii) a lightweight, robust Q-learning implementation.
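To make the OOD-suppression idea concrete, below is a minimal sketch of a conservative Q-update in PyTorch. It is illustrative only: the penalty shown is a generic logsumexp-style conservative term rather than the BCQ/BEAR constraints themselves, the exact regularizer MOORL uses is not specified here, and the names `q_net`, `target_net`, and `cql_weight` are hypothetical. For brevity it assumes a discrete action space, unlike the continuous-control benchmarks evaluated in the paper.

```python
# Illustrative sketch of a conservative Q-update that suppresses
# out-of-distribution (OOD) actions. The logsumexp penalty below is a
# generic conservative term, not the exact regularizer used by MOORL;
# q_net, target_net, and cql_weight are hypothetical names. A discrete
# action space is assumed for brevity.
import torch
import torch.nn.functional as F

def conservative_q_update(q_net, target_net, optimizer, batch,
                          gamma=0.99, cql_weight=1.0):
    # Batch of transitions drawn from a mixed offline/online buffer.
    obs, act, rew, next_obs, done = batch  # act has shape (B, 1)

    # Standard TD target from a target network.
    with torch.no_grad():
        next_q = target_net(next_obs).max(dim=1, keepdim=True).values
        target = rew + gamma * (1.0 - done) * next_q

    q_all = q_net(obs)                    # Q-values for every action, (B, A)
    q_data = q_all.gather(1, act.long())  # Q on actions actually in the batch
    td_loss = F.mse_loss(q_data, target)

    # Conservative term: push Q down on all actions (including OOD ones)
    # while pushing it up on in-distribution actions from the data.
    ood_penalty = torch.logsumexp(q_all, dim=1).mean() - q_data.mean()

    loss = td_loss + cql_weight * ood_penalty
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```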
📝 Abstract
Sample efficiency and exploration remain critical challenges in Deep Reinforcement Learning (DRL), particularly in complex domains. Offline RL, which enables agents to learn optimal policies from static, pre-collected datasets, has emerged as a promising alternative to costly online interaction. However, offline RL is constrained by issues such as out-of-distribution (OOD) actions that limit policy performance and generalization. To overcome these limitations, we propose Meta Offline-Online Reinforcement Learning (MOORL), a hybrid framework that unifies offline and online RL for efficient and scalable learning. Whereas previous hybrid methods rely on extensive design components and incur added computational complexity to utilize offline data effectively, MOORL introduces a meta-policy that seamlessly adapts across offline and online trajectories. This enables the agent to leverage offline data for robust initialization while using online interactions to drive efficient exploration. Our theoretical analysis demonstrates that this hybrid approach enhances exploration by combining the complementary strengths of offline and online data. We further show that MOORL learns a stable Q-function without added complexity. Extensive experiments on 28 tasks from the D4RL and V-D4RL benchmarks validate its effectiveness, showing consistent improvements over state-of-the-art offline and hybrid RL baselines. With minimal computational overhead, MOORL achieves strong performance, underscoring its potential for practical real-world applications.
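As an illustration of the hybrid data flow described above, the sketch below mixes offline and online transitions in each training batch. This is a generic hybrid-RL pattern (a fixed offline/online sampling ratio), not MOORL's meta-policy mechanism; `offline_dataset`, `online_buffer`, and the 50/50 split are assumptions made for the example.

```python
# Illustrative sketch of mixed offline/online batch sampling for hybrid RL.
# offline_dataset and online_buffer are assumed to be lists of transition
# tuples; the fixed 50/50 split is a common hybrid-RL heuristic, not
# necessarily what MOORL does.
import random

def sample_mixed_batch(offline_dataset, online_buffer,
                       batch_size=256, offline_ratio=0.5):
    """Draw a training batch with a fixed fraction of offline transitions."""
    n_offline = int(batch_size * offline_ratio)
    n_online = batch_size - n_offline

    batch = random.sample(offline_dataset, n_offline)
    if len(online_buffer) >= n_online:
        batch += random.sample(online_buffer, n_online)
    else:
        # Early in training the online buffer is small; top up from offline data.
        batch += random.sample(offline_dataset, n_online)

    random.shuffle(batch)
    return batch
```

A fixed ratio is the simplest choice; adaptive schemes instead anneal the offline fraction as the online buffer grows.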