Efficient Online RL Fine Tuning with Offline Pre-trained Policy Only

📅 2025-05-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing offline-to-online reinforcement learning (RL) methods rely on pre-trained Q-functions, which often suffer from Q-value underestimation and consequently suppress exploration; moreover, they cannot be applied when only imitation-learning policies—without any Q-function—are available. Method: We propose Policy-Only Reinforcement Learning (PORL), a purely policy-driven online RL fine-tuning framework. PORL abandons conservative Q-function initialization, starting instead from a behavior-cloning (BC) policy and employing zero-initialization of the Q-function, an adaptive exploration mechanism, and a novel offline-to-online policy transfer technique. Contribution/Results: On multiple benchmark tasks, PORL significantly improves both the online optimization efficiency and final performance of BC policies, matching state-of-the-art offline-to-online RL algorithms. Crucially, it enables robust online RL deployment without requiring any pre-trained Q-function—a capability not previously achieved.

📝 Abstract
Improving the performance of pre-trained policies through online reinforcement learning (RL) is a critical yet challenging topic. Existing online RL fine-tuning methods require continued training with offline pre-trained Q-functions for stability and performance. However, these offline pre-trained Q-functions commonly underestimate state-action pairs beyond the offline dataset due to the conservatism in most offline RL methods, which hinders further exploration when transitioning from the offline to the online setting. Additionally, this requirement limits their applicability in scenarios where only pre-trained policies are available but pre-trained Q-functions are absent, such as in imitation learning (IL) pre-training. To address these challenges, we propose a method for efficient online RL fine-tuning using solely the offline pre-trained policy, eliminating reliance on pre-trained Q-functions. We introduce PORL (Policy-Only Reinforcement Learning Fine-Tuning), which rapidly initializes the Q-function from scratch during the online phase to avoid detrimental pessimism. Our method not only achieves performance competitive with advanced offline-to-online RL algorithms and online RL approaches that leverage prior data or policies, but also pioneers a new path for directly fine-tuning behavior cloning (BC) policies.
Problem

Research questions and friction points this paper is trying to address.

Improving pre-trained policies via online RL without Q-functions
Overcoming offline RL conservatism hindering online exploration
Enabling fine-tuning for imitation learning policies lacking Q-functions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses offline pre-trained policy only
Initializes Q-function from scratch
Eliminates need for pre-trained Q-functions
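The core initialization idea behind these contributions can be illustrated with a minimal NumPy sketch. This is not the authors' code; all function names are hypothetical. It shows the PORL-style setup described above: the critic (Q-function) is built from scratch with a zero-initialized output layer, so Q(s, a) = 0 for every state-action pair at the start of online fine-tuning, rather than carrying over pessimistic offline Q estimates.

```python
import numpy as np

def init_mlp(sizes, rng, zero_last=False):
    """Random MLP weights; optionally zero the final layer (illustrative helper)."""
    layers = []
    for i, (m, n) in enumerate(zip(sizes[:-1], sizes[1:])):
        is_last = i == len(sizes) - 2
        if zero_last and is_last:
            # Zero-initialized output layer: the network outputs exactly 0
            # for any input, i.e. no pessimistic bias from offline training.
            W, b = np.zeros((m, n)), np.zeros(n)
        else:
            W, b = rng.normal(0.0, 0.1, (m, n)), np.zeros(n)
        layers.append((W, b))
    return layers

def forward(layers, x):
    """Plain MLP forward pass with tanh hidden activations."""
    for i, (W, b) in enumerate(layers):
        x = x @ W + b
        if i < len(layers) - 1:
            x = np.tanh(x)
    return x

rng = np.random.default_rng(0)
obs_dim, act_dim = 4, 2

# Actor: in PORL this would be loaded from the BC policy's weights;
# here we just stand in random weights for the pre-trained policy.
actor = init_mlp([obs_dim, 64, act_dim], rng)

# Critic: freshly created with a zero output layer -> Q(s, a) = 0 everywhere.
critic = init_mlp([obs_dim + act_dim, 64, 1], rng, zero_last=True)

sa = rng.normal(size=(8, obs_dim + act_dim))
print(forward(critic, sa).ravel())  # all zeros before any online update
```

The design point is that a zero output layer makes the initial Q landscape flat, so early online exploration is driven by the BC policy (plus the paper's adaptive exploration mechanism) rather than suppressed by underestimated offline values.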
👥 Authors
Wei Xiao, Westlake University
Jiacheng Liu, Westlake University
Zifeng Zhuang, Westlake University (Reinforcement Learning)
Runze Suo, Westlake University
Shangke Lyu, Westlake University (Robot control, Learning control, Human-robot Interaction)
Donglin Wang, Westlake University