Action-Free Offline-to-Online RL via Discretised State Policies

๐Ÿ“… 2026-01-31
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
This work addresses the challenge in offline reinforcement learning where action labels are absent, leaving only state–reward–next-state tuples. To circumvent this limitation, the authors propose a state-policy learning paradigm that recommends desirable next states via state discretisation, rather than predicting actions as in conventional policy learning. The approach combines discrete state representations, value-function learning with a DecQN-style algorithm, and regularisation to enable pretraining on action-free data. An online guidance mechanism then uses the pre-trained state policy to accelerate subsequent online learning. Experiments show that the method improves both the convergence speed and final performance of online reinforcement learning, and ablations highlight the critical roles of state discretisation and regularisation within this paradigm.
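The per-dimension state discretisation at the heart of the summary can be sketched as follows. This is a minimal illustration assuming equal-width binning of a bounded state space; the paper's exact discretisation scheme may differ:

```python
import numpy as np

def discretise_state(state, low, high, n_bins):
    """Map each continuous state dimension to one of n_bins equal-width bins."""
    clipped = np.clip(state, low, high)
    frac = (clipped - low) / (high - low)
    # scale to [0, n_bins) and clamp the upper boundary into the last bin
    idx = np.minimum((frac * n_bins).astype(int), n_bins - 1)
    return idx

# toy 3-D state in [-1, 1]^3 with 5 bins per dimension
low, high = np.full(3, -1.0), np.full(3, 1.0)
s = np.array([-0.9, 0.0, 0.99])
print(discretise_state(s, low, high, 5))  # → [0 2 4]
```

A state policy over this representation only has to rank a small number of bins per dimension, which is what lets the approach sidestep the instability of regressing continuous next states.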

๐Ÿ“ Abstract
Most existing offline RL methods presume the availability of action labels within the dataset, but in many practical scenarios, actions may be missing due to privacy, storage, or sensor limitations. We formalise the setting of action-free offline-to-online RL, where agents must learn from datasets consisting solely of $(s,r,s')$ tuples and later leverage this knowledge during online interaction. To address this challenge, we propose learning state policies that recommend desirable next-state transitions rather than actions. Our contributions are twofold. First, we introduce a simple yet novel state discretisation transformation and propose Offline State-Only DecQN, a value-based algorithm designed to pre-train state policies from action-free data. The algorithm integrates the transformation to scale efficiently to high-dimensional problems while avoiding the instability and overfitting associated with continuous state prediction. Second, we propose a novel mechanism for guided online learning that leverages these pre-trained state policies to accelerate the learning of online agents. Together, these components establish a scalable and practical framework for leveraging action-free datasets to accelerate online RL. Empirical results across diverse benchmarks demonstrate that our approach improves convergence speed and asymptotic performance, while analyses reveal that discretisation and regularisation are critical to its effectiveness.
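A hedged sketch of the value-learning idea the abstract describes: decoupled per-dimension value heads, in the spirit of DecQN, trained on action-free $(s, r, s')$ tuples, with the discretised bin of each next-state dimension standing in for the missing action. The linear features, hyperparameters, and function names here are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
D, B = 3, 5          # state dimensions, bins per dimension
feat_dim = 8         # toy linear features for the current state
gamma = 0.99

# One linear value head per (dimension, bin): Q_d(s, b) = W[d, b] . phi(s).
W = np.zeros((D, B, feat_dim))

def q_values(phi):
    """Q_d(s, b) for every dimension d and bin b; shape (D, B)."""
    return W @ phi

def td_update(phi_s, r, phi_s2, next_bins, lr=0.1):
    """One TD(0) step on an action-free (s, r, s') tuple.

    next_bins[d] is the discretised bin of s'_d, treated as the
    'action' the state policy took in dimension d."""
    # DecQN-style target: average over dimensions of the per-dimension max
    target = r + gamma * q_values(phi_s2).max(axis=1).mean()
    for d in range(D):
        b = next_bins[d]
        td_err = target - W[d, b] @ phi_s
        W[d, b] += lr * td_err * phi_s

# toy loop over random action-free transitions
for _ in range(50):
    phi_s, phi_s2 = rng.normal(size=feat_dim), rng.normal(size=feat_dim)
    r = float(rng.normal())
    next_bins = rng.integers(0, B, size=D)
    td_update(phi_s, r, phi_s2, next_bins)
```

The greedy state policy then recommends, per dimension, the bin maximising `q_values(phi)`; the paper additionally applies regularisation during offline pretraining, which this sketch omits.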
Problem

Research questions and friction points this paper is trying to address.

offline reinforcement learning
action-free
state-only data
offline-to-online RL
missing actions
Innovation

Methods, ideas, or system contributions that make the work stand out.

action-free reinforcement learning
state discretisation
offline-to-online RL
state policy
DecQN
๐Ÿ”Ž Similar Papers
No similar papers found.