🤖 AI Summary
Existing offline-to-online reinforcement learning (RL) methods rely on pre-trained Q-functions, which often suffer from Q-value underestimation and consequently suppress exploration; moreover, they cannot be applied when only imitation-learning policies—without any Q-function—are available. Method: We propose Policy-Only Reinforcement Learning (PORL), a purely policy-driven online RL fine-tuning framework. PORL abandons conservative Q-function initialization, starting instead from a behavior-cloning (BC) policy and employing zero-initialization of the Q-function, an adaptive exploration mechanism, and a novel offline-to-online policy transfer technique. Contribution/Results: On multiple benchmark tasks, PORL significantly improves both the online optimization efficiency and final performance of BC policies, matching state-of-the-art offline-to-online RL algorithms. Crucially, it enables robust online RL deployment without requiring any pre-trained Q-function—a capability not previously achieved.
📝 Abstract
Improving the performance of pre-trained policies through online reinforcement learning (RL) is a critical yet challenging topic. Existing online RL fine-tuning methods require continued training with offline pre-trained Q-functions for stability and performance. However, these offline pre-trained Q-functions commonly underestimate state-action pairs beyond the offline dataset due to the conservatism in most offline RL methods, which hinders further exploration when transitioning from the offline to the online setting. Additionally, this requirement limits their applicability in scenarios where only pre-trained policies are available but pre-trained Q-functions are absent, such as in imitation learning (IL) pre-training. To address these challenges, we propose a method for efficient online RL fine-tuning that uses solely the offline pre-trained policy, eliminating reliance on pre-trained Q-functions. We introduce PORL (Policy-Only Reinforcement Learning Fine-Tuning), which rapidly initializes the Q-function from scratch during the online phase to avoid detrimental pessimism. Our method not only achieves performance competitive with advanced offline-to-online RL algorithms and online RL approaches that leverage prior data or policies, but also pioneers a new path for directly fine-tuning behavior cloning (BC) policies.
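To make the core idea concrete, here is a minimal sketch of the setup the abstract describes: online fine-tuning starts from a behavior-cloned policy, while the Q-function is built from scratch with zero-initialized weights, so initial Q-values are uniformly zero rather than carrying over the pessimism of an offline-trained critic. All names and the linear function approximators are our own illustrative assumptions; the paper's adaptive exploration and policy-transfer mechanisms are not implemented here.

```python
import numpy as np

rng = np.random.default_rng(0)
STATE_DIM, ACTION_DIM = 4, 2

# Stand-in for a pre-trained BC policy: a fixed linear map (in practice, a
# network trained with behavior cloning on the offline dataset).
bc_weights = rng.normal(size=(ACTION_DIM, STATE_DIM))

def policy(s):
    return np.tanh(bc_weights @ s)

# Q-function: linear in state-action features, ZERO-initialized, so
# Q(s, a) = 0 for every (s, a) before any online interaction.
q_weights = np.zeros(STATE_DIM + ACTION_DIM)

def q_value(s, a):
    return q_weights @ np.concatenate([s, a])

def td_update(s, a, r, s_next, gamma=0.99, lr=1e-2):
    """One TD(0) step toward the target r + gamma * Q(s', pi(s'))."""
    global q_weights
    target = r + gamma * q_value(s_next, policy(s_next))
    feats = np.concatenate([s, a])
    q_weights += lr * (target - q_value(s, a)) * feats

# One online transition: the zero-initialized critic starts neutral (Q = 0),
# then moves toward the observed return signal.
s = rng.normal(size=STATE_DIM)
a = policy(s)
q_init = q_value(s, a)  # exactly 0.0 by construction
td_update(s, a, r=1.0, s_next=rng.normal(size=STATE_DIM))
```

The point of the zero initialization is that the critic begins with no opinion about any action, in-distribution or not, instead of the systematic underestimation of out-of-dataset actions that a conservative offline critic would impose on exploration.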