Optimal Single-Policy Sample Complexity and Transient Coverage for Average-Reward Offline RL

📅 2025-06-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work studies offline reinforcement learning in average-reward Markov decision processes (MDPs), addressing theoretical challenges arising from distributional shift and heterogeneous state coverage. Methodologically, it proposes a prior-free algorithm built upon a pessimistic discounted value iteration framework, incorporating quantile-based clipping and empirical span-aware penalty regularization. Theoretically, it establishes the first sample-complexity upper bound that depends solely on properties of the target policy—namely, its bias span and a newly introduced “policy hitting radius”—thereby eliminating reliance on global assumptions such as uniform mixing time. Moreover, it extends theoretical guarantees to general weakly communicating MDPs without structural restrictions. The derived sample complexity nearly matches the information-theoretic lower bound, yielding the tightest known performance guarantee for average-reward offline RL and substantially improving over prior approaches tied to global complexity measures.

📝 Abstract
We study offline reinforcement learning in average-reward MDPs, which presents increased challenges from the perspectives of distribution shift and non-uniform coverage, and has been relatively underexamined from a theoretical perspective. While previous work obtains performance guarantees under single-policy data coverage assumptions, such guarantees utilize additional complexity measures which are uniform over all policies, such as the uniform mixing time. We develop sharp guarantees depending only on the target policy, specifically the bias span and a novel policy hitting radius, yielding the first fully single-policy sample complexity bound for average-reward offline RL. We are also the first to handle general weakly communicating MDPs, contrasting restrictive structural assumptions made in prior work. To achieve this, we introduce an algorithm based on pessimistic discounted value iteration enhanced by a novel quantile clipping technique, which enables the use of a sharper empirical-span-based penalty function. Our algorithm also does not require any prior parameter knowledge for its implementation. Remarkably, we show via hard examples that learning under our conditions requires coverage assumptions beyond the stationary distribution of the target policy, distinguishing single-policy complexity measures from previously examined cases. We also develop lower bounds nearly matching our main result.
Problem

Research questions and friction points this paper is trying to address.

Study offline RL in average-reward MDPs with distribution shift challenges
Develop single-policy sample complexity bounds using target policy metrics
Handle weakly communicating MDPs without restrictive structural assumptions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Pessimistic discounted value iteration algorithm
Novel quantile clipping technique
Single-policy sample complexity bound
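The core algorithmic idea above — discounted value iteration made pessimistic by subtracting a data-dependent penalty that scales with the empirical span of the value function — can be illustrated with a minimal sketch. This is not the paper's actual algorithm (the quantile clipping step and the exact form of the span-aware penalty are omitted); the penalty constant `c` and the `1/sqrt(N(s,a))` rate are illustrative assumptions.

```python
import numpy as np

def pessimistic_dvi(counts, rewards, gamma=0.99, iters=500, c=1.0):
    """Illustrative sketch of pessimistic discounted value iteration
    from offline data.

    counts[s, a, s']: empirical transition counts from the dataset
    rewards[s, a]:    empirical mean rewards
    The span-scaled penalty with constant c is a simplified stand-in
    for the paper's empirical-span-aware penalty function.
    """
    S, A, _ = counts.shape
    n_sa = counts.sum(axis=2)                        # visit counts N(s, a)
    p_hat = counts / np.maximum(n_sa, 1)[..., None]  # empirical transition kernel
    V = np.zeros(S)
    Q = np.zeros((S, A))
    for _ in range(iters):
        span = V.max() - V.min()                     # empirical span of V
        # Pessimism bonus: shrinks with more data, scales with the span of V
        # (rather than a global quantity like 1/(1 - gamma) or a mixing time).
        bonus = c * span * np.sqrt(1.0 / np.maximum(n_sa, 1))
        Q = rewards + gamma * (p_hat @ V) - bonus    # pessimistic Q-values
        Q[n_sa == 0] = 0.0                           # no credit for unvisited pairs
        V = Q.max(axis=1)
    return V, Q.argmax(axis=1)
```

On a toy two-state MDP with uniform transitions and well-covered data, the iteration converges to the usual discounted fixed point; the penalty only bites where visit counts are small. Scaling the penalty by the empirical span, instead of the worst-case horizon `1/(1 - gamma)`, is what lets the resulting guarantee depend only on target-policy quantities.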
Matthew Zurek
UW–Madison
Guy Zamir
Department of Computer Sciences, University of Wisconsin-Madison
Yudong Chen
Department of Computer Sciences, University of Wisconsin-Madison