On the Complexity of Offline Reinforcement Learning with $Q^\star$-Approximation and Partial Coverage

📅 2026-02-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work investigates sample-efficient offline reinforcement learning under $Q^\star$-realizability and partial coverage. By developing a unified complexity framework based on decision-estimation coefficients and a modular analysis of $Q^\star$ estimation, it gives the first characterization of offline learnability for general low-Bellman-rank MDPs without Bellman completeness. The theoretical contributions include an information-theoretic lower bound showing that $Q^\star$-realizability and Bellman completeness are not sufficient for sample-efficient offline RL under partial coverage, a novel second-order performance difference lemma that sharpens sample complexity bounds, and new guarantees for both soft $Q$-learning and Conservative $Q$-Learning (CQL). Specifically, soft $Q$-learning is shown to achieve $\epsilon^{-2}$ sample complexity under partial coverage without any online interaction, improving on the prior $\epsilon^{-4}$ bound, and CQL receives its first analysis beyond the tabular case under $Q^\star$-realizability and Bellman completeness.
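For context, soft $Q$-learning refers to value iteration with the entropy-regularized (log-sum-exp) Bellman backup. A standard form from the literature, written here with an illustrative temperature parameter $\tau > 0$ (the paper's exact formulation may differ), is

$$(\mathcal{T}_{\mathrm{soft}} Q)(s,a) \;=\; r(s,a) \;+\; \gamma\, \mathbb{E}_{s' \sim P(\cdot \mid s,a)}\!\left[\tau \log \sum_{a'} \exp\!\big(Q(s',a')/\tau\big)\right],$$

which recovers the standard Bellman optimality operator in the limit $\tau \to 0$.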

📝 Abstract
We study offline reinforcement learning under $Q^\star$-approximation and partial coverage, a setting that motivates practical algorithms such as Conservative $Q$-Learning (CQL; Kumar et al., 2020) but has received limited theoretical attention. Our work is inspired by the following open question: "Are $Q^\star$-realizability and Bellman completeness sufficient for sample-efficient offline RL under partial coverage?" We answer in the negative by establishing an information-theoretic lower bound. Going substantially beyond this, we introduce a general framework that characterizes the intrinsic complexity of a given $Q^\star$ function class, inspired by model-free decision-estimation coefficients (DEC) for online RL (Foster et al., 2023b; Liu et al., 2025b). This complexity recovers and improves the quantities underlying the guarantees of Chen and Jiang (2022) and Uehara et al. (2023), and extends to broader settings. Our decision-estimation decomposition can be combined with a wide range of $Q^\star$ estimation procedures, modularizing and generalizing existing approaches. Beyond the general framework, we make further contributions: By developing a novel second-order performance difference lemma, we obtain the first $\epsilon^{-2}$ sample complexity under partial coverage for soft $Q$-learning, improving the $\epsilon^{-4}$ bound of Uehara et al. (2023). We remove Chen and Jiang's (2022) need for additional online interaction when the value gap of $Q^\star$ is unknown. We also give the first characterization of offline learnability for general low-Bellman-rank MDPs without Bellman completeness (Jiang et al., 2017; Du et al., 2021; Jin et al., 2021), a canonical setting in online RL that remains unexplored in offline RL except for special cases. Finally, we provide the first analysis for CQL under $Q^\star$-realizability and Bellman completeness beyond the tabular case.
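For reference, the conservative objective of CQL (Kumar et al., 2020) augments a Bellman-error loss with a penalty that pushes down $Q$-values on out-of-distribution actions while pushing them up on actions in the dataset $\mathcal{D}$. A standard form (the CQL($\mathcal{H}$) variant, stated here as background rather than as this paper's analysis) is

$$\min_{Q}\;\; \alpha\, \mathbb{E}_{s \sim \mathcal{D}}\!\left[\log \sum_{a} \exp Q(s,a) \;-\; \mathbb{E}_{a \sim \hat\pi_\beta(\cdot \mid s)}\big[Q(s,a)\big]\right] \;+\; \tfrac{1}{2}\, \mathbb{E}_{(s,a,r,s') \sim \mathcal{D}}\!\left[\Big(Q(s,a) - r - \gamma \max_{a'} \bar{Q}(s',a')\Big)^{2}\right],$$

where $\hat\pi_\beta$ denotes the empirical behavior policy, $\bar{Q}$ a target estimate from the previous iteration, and $\alpha > 0$ the conservatism weight.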
Problem

Research questions and friction points this paper is trying to address.

offline reinforcement learning
Q*-realizability
partial coverage
Bellman completeness
sample efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

offline reinforcement learning
Q*-realizability
partial coverage
decision-estimation coefficient
Bellman completeness