AI Summary
Offline reinforcement learning (RL) is traditionally constrained by the state-action distribution of the behavior policy, which limits generalization to out-of-distribution (OOD) actions. This work introduces a novel state-constrained offline RL paradigm: policy learning relies solely on the state marginal distribution of the dataset, permitting high-quality yet OOD actions while guaranteeing that subsequent states remain within the support of the state distribution. We theoretically show that this formulation relaxes the policy optimization boundary and improves trajectory composability and generalization. Based on this principle, we propose StaCQ, a DQN-based algorithm integrating Bellman error correction with a state-density-aware policy constraint. Evaluated on the D4RL benchmark, StaCQ significantly outperforms leading offline RL methods, establishing the first strong performance baseline for state-constrained offline RL. Both theoretical analysis and empirical results substantiate its effectiveness.
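To illustrate the core idea, the sketch below shows one plausible way to implement a state-density-aware constraint: estimate the dataset's state marginal with a Gaussian kernel density estimate and penalize candidate next states whose estimated density falls below a support threshold. The function name, the KDE choice, and the threshold `eps` are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def state_support_penalty(next_states, dataset_states, bandwidth=0.5, eps=1e-3):
    """Penalty for candidate next states that leave the dataset's state support.

    Illustrative assumption: a Gaussian KDE stands in for whatever density
    model the actual algorithm uses to represent the state marginal.
    next_states:    (B, d) candidate next states proposed by the policy/model.
    dataset_states: (N, d) states observed in the offline dataset.
    """
    # Pairwise squared distances between candidates and dataset states: (B, N).
    diffs = next_states[:, None, :] - dataset_states[None, :, :]
    sq_dists = (diffs ** 2).sum(axis=-1)
    # Unnormalized Gaussian KDE of the state marginal, averaged over the dataset.
    density = np.exp(-sq_dists / (2.0 * bandwidth ** 2)).mean(axis=1)
    # Hinge penalty: zero for in-support states, positive once density < eps.
    return np.maximum(0.0, eps - density)
```

A candidate next state near the dataset incurs no penalty, while one far outside the state support is penalized; adding such a term to the policy objective discourages transitions whose successor states leave the dataset's state distribution, without restricting the action itself.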
Abstract
Traditional offline reinforcement learning methods predominantly operate in a batch-constrained setting. This confines the algorithms to the specific state-action distribution present in the dataset, which reduces the effects of distributional shift but greatly restricts the algorithm. In this paper, we alleviate this limitation by introducing a novel framework named state-constrained offline reinforcement learning. By exclusively focusing on the dataset's state distribution, our framework significantly enhances learning potential and reduces previous limitations. The proposed setting not only broadens the learning horizon but also improves the ability to combine different trajectories from the dataset effectively, a desirable property inherent in offline reinforcement learning. Our research is underpinned by solid theoretical findings that pave the way for subsequent advancements in this domain. Additionally, we introduce StaCQ, a deep learning algorithm that is both performance-driven on the D4RL benchmark datasets and closely aligned with our theoretical propositions. StaCQ establishes a strong baseline for forthcoming explorations in state-constrained offline reinforcement learning.