🤖 AI Summary
This work addresses average-reward offline reinforcement learning in weakly communicating Markov decision processes (MDPs), establishing, for the first time, finite-sample complexity bounds without assuming ergodicity or a linear MDP structure. The authors propose Anchored Fitted Q-Iteration (Anchored FQI), which introduces an anchor mechanism, interpretable as a form of weight decay, to mitigate the bias and instability arising from single-trajectory, non-i.i.d. offline data, thereby ensuring stable convergence of the fitted Q-iterates under function approximation. The method dispenses with the restrictive assumptions (e.g., ergodicity, linearity) required by prior average-reward offline RL approaches and delivers the first provable finite-sample complexity guarantee for weakly communicating MDPs, substantially broadening the theory's applicability to realistic non-stationary, sparse-interaction environments.
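As a rough sketch of how such an anchored iteration is commonly formalized (the precise update and the anchor schedule $\beta_k$ below are our assumptions based on the weight-decay interpretation, not the paper's verbatim scheme):

$$
Q_{k+1} \;=\; \beta_k\, Q_0 \;+\; (1-\beta_k)\,\widehat{T} Q_k,
\qquad
(\widehat{T} Q)(s,a) \;\approx\; r(s,a) + \mathbb{E}\big[\max_{a'} Q(s',a')\big],
\qquad \beta_k \in (0,1).
$$

Note that $\widehat{T}$ is the *undiscounted* Bellman backup: in the average-reward setting it is not a contraction, so the anchor term plays the stabilizing role that the discount factor supplies in discounted-return analyses.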
📝 Abstract
Although there is an extensive body of work characterizing the sample complexity of discounted-return offline RL with function approximation, the average-reward setting has received significantly less attention, and existing approaches rely on restrictive assumptions, such as ergodicity or linearity of the MDP. In this work, we establish the first sample complexity results for average-reward offline RL with function approximation under weakly communicating MDPs, a much milder assumption. To this end, we introduce Anchored Fitted Q-Iteration, which combines standard Fitted Q-Iteration with an anchor mechanism. We show that the anchor, which can be interpreted as a form of weight decay, is crucial for enabling finite-time analysis in the average-reward setting. We also extend our finite-time analysis to the setting where the dataset is generated from a single trajectory rather than i.i.d. transitions, again leveraging the anchor mechanism.
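To make the mechanism concrete, here is a minimal Python sketch of an anchored FQI loop with linear features. Everything below (the zero anchor, the ridge weight `lam`, the `1/(k+1)` anchor schedule, and the helper names) is an illustrative assumption, not the paper's algorithm:

```python
import numpy as np

def anchored_fqi(dataset, featurize, num_actions, num_iters=100, lam=1e-3):
    """Sketch of Anchored Fitted Q-Iteration with linear function approximation.

    dataset: list of (s, a, r, s_next) offline transitions (possibly one trajectory).
    featurize: maps (state, action) -> feature vector phi(s, a) in R^d.
    The anchor schedule and ridge weight are illustrative choices, not tuned values.
    """
    d = featurize(dataset[0][0], dataset[0][1]).shape[0]
    theta_anchor = np.zeros(d)   # anchor point Q_0 = 0 (an assumption)
    theta = theta_anchor.copy()

    # Precompute features for all transitions: n x d and n x A x d.
    Phi = np.stack([featurize(s, a) for (s, a, r, s2) in dataset])
    Phi_next = np.stack([[featurize(s2, a2) for a2 in range(num_actions)]
                         for (s, a, r, s2) in dataset])
    rewards = np.array([r for (s, a, r, s2) in dataset])

    for k in range(1, num_iters + 1):
        beta = 1.0 / (k + 1)  # diminishing anchor weight (illustrative schedule)
        # Average-reward Bellman targets: r + max_a' Q(s', a'), with no discount.
        targets = rewards + (Phi_next @ theta).max(axis=1)
        # Fitted backup: ridge regression of targets onto features.
        A = Phi.T @ Phi + lam * np.eye(d)
        theta_fit = np.linalg.solve(A, Phi.T @ targets)
        # Anchor step: pull the iterate back toward the anchor, akin to weight decay.
        theta = beta * theta_anchor + (1.0 - beta) * theta_fit
    return theta
```

The only departure from vanilla FQI is the final blending line: setting `beta = 0` recovers the standard fitted backup, which has no contraction to exploit in the undiscounted average-reward setting.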