🤖 AI Summary
This work addresses the fragmented and inadequate coverage definitions in linear off-policy evaluation (OPE) under the minimal setting where only the target value function is assumed to be linearly realizable. Adopting an instrumental-variable perspective, we conduct a finite-sample analysis of the LSTDQ algorithm and introduce "feature-dynamics coverage" as a unified coverage measure. This new notion naturally subsumes prior definitions -- including those made under stronger assumptions such as Bellman completeness -- and unifies the concept of coverage in linear OPE for the first time. Leveraging this framework, we derive error bounds that depend on the proposed coverage parameter, achieving tight statistical rates under minimal linear realizability and recovering classical results under stronger assumptions, thereby demonstrating the generality and consistency of our approach.
📝 Abstract
Off-policy evaluation (OPE) is a fundamental task in reinforcement learning (RL). In the classic setting of linear OPE, finite-sample guarantees often take the form $$ \textrm{Evaluation error} \le \textrm{poly}(C^\pi, d, 1/n, \log(1/\delta)), $$ where $d$ is the feature dimension, $n$ is the sample size, $\delta$ is the failure probability, and $C^\pi$ is a coverage parameter that characterizes the degree to which the features visited by the target policy lie in the span of the data distribution. While such guarantees are well understood for several popular algorithms under stronger assumptions (e.g., Bellman completeness), the understanding is lacking and fragmented in the minimal setting where only the target value function is linearly realizable in the features. Despite recent interest in tight characterizations of the statistical rate in this setting, the right notion of coverage remains unclear, and candidate definitions from prior analyses have undesirable properties and are starkly disconnected from more standard definitions in the literature. We provide a novel finite-sample analysis of a canonical algorithm for this setting, LSTDQ. Inspired by an instrumental-variable view, we develop error bounds that depend on a novel coverage parameter, the feature-dynamics coverage, which can be interpreted as linear coverage in an induced dynamical system governing feature evolution. Under further assumptions -- such as Bellman completeness -- our definition recovers the coverage parameters specialized to those settings, finally yielding a unified understanding of coverage in linear OPE.
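To make the setting concrete, here is a minimal sketch (not code from the paper) of the LSTDQ estimator on synthetic, noiselessly linearly-realizable data, together with a simple coverage-style diagnostic of the kind the bound above depends on. The data-generating setup, dimensions, and variable names are all hypothetical assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, gamma = 3, 500, 0.9

# Synthetic off-policy data: features of visited (s, a) pairs, rewards,
# and features of the next state under the target policy, phi(s', pi(s')).
Phi = rng.normal(size=(n, d))        # phi(s_i, a_i)
Phi_next = rng.normal(size=(n, d))   # phi(s'_i, pi(s'_i))
theta_true = np.array([1.0, -0.5, 0.25])
rewards = (Phi - gamma * Phi_next) @ theta_true  # Q^pi is exactly linear

# LSTDQ: solve A theta = b, where
#   A = (1/n) sum_i phi_i (phi_i - gamma phi'_i)^T,  b = (1/n) sum_i phi_i r_i
A = Phi.T @ (Phi - gamma * Phi_next) / n
b = Phi.T @ rewards / n
theta_hat = np.linalg.solve(A, b)

# A standard coverage-style diagnostic: for a target feature direction x,
# x^T Sigma^{-1} x with Sigma = (1/n) sum_i phi_i phi_i^T measures how poorly
# the data covers that direction (larger value = worse coverage).
Sigma = Phi.T @ Phi / n
x = np.array([1.0, 0.0, 0.0])
coverage_term = float(x @ np.linalg.solve(Sigma, x))
```

Because the rewards are generated noiselessly from a linear $Q^\pi$, LSTDQ recovers `theta_true` exactly here; with noisy rewards, the estimation error would scale with quantities like `coverage_term`, which is the role the coverage parameter $C^\pi$ plays in the finite-sample bound.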