🤖 AI Summary
This paper addresses the lack of a unified theoretical framework for distributional off-policy evaluation (OPE). To this end, the authors propose the Fitted Distributional Evaluation (FDE) framework, which generalizes fitted Q-evaluation to return-distribution estimation. Leveraging properties of the distributional Bellman operator, FDE combines distributional reinforcement learning, offline policy evaluation, and function approximation to yield provably convergent iterative algorithms applicable to non-tabular settings. FDE is the first systematic framework to formalize design principles for distributional OPE methods, unifying previously disparate approaches and supplying the principled theoretical foundations the area has lacked. Experiments on LQR and Atari benchmarks demonstrate that FDE achieves significantly higher estimation accuracy and stability than state-of-the-art methods.
📝 Abstract
In reinforcement learning, distributional off-policy evaluation (OPE) focuses on estimating the return distribution of a target policy using offline data collected under a different policy. This work extends the widely used fitted Q-evaluation -- developed for expectation-based reinforcement learning -- to the distributional OPE setting. We refer to this extension as fitted distributional evaluation (FDE). While a few related approaches exist, there remains no unified framework for designing FDE methods. To fill this gap, we present a set of guiding principles for constructing theoretically grounded FDE methods. Building on these principles, we develop several new FDE methods with convergence analysis and provide theoretical justification for existing methods, even in non-tabular environments. Extensive experiments, including simulations on linear quadratic regulators and Atari games, demonstrate the superior performance of the FDE methods.
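The abstract's core idea -- alternating a distributional Bellman backup with a fitting (regression) step, in analogy with fitted Q-evaluation -- can be illustrated on a toy problem. The sketch below is a hypothetical quantile-based instance on a small deterministic MDP; it is not the paper's algorithm, and the MDP and all names are invented for illustration.

```python
import numpy as np

# Hypothetical toy MDP: 3 states, 2 actions, deterministic dynamics so the
# fixed point is easy to check. (Real FDE targets stochastic, non-tabular
# settings; here the return distributions collapse to point masses.)
n_states, n_actions, n_quantiles, gamma = 3, 2, 8, 0.9
next_state = np.array([[1, 2], [2, 0], [0, 1]])
reward = np.array([[1.0, 0.0], [0.5, 2.0], [0.0, 1.5]])
pi = np.array([0, 1, 0])  # deterministic target policy to evaluate

taus = (np.arange(n_quantiles) + 0.5) / n_quantiles  # quantile midpoints
Z = np.zeros((n_states, n_actions, n_quantiles))     # quantile estimates of Z(s, a)

for _ in range(100):
    Z_new = np.empty_like(Z)
    for s in range(n_states):
        for a in range(n_actions):
            s2 = next_state[s, a]
            # Distributional Bellman target samples: r + gamma * Z(s', pi(s')).
            y = reward[s, a] + gamma * Z[s2, pi[s2]]
            # "Fitting" step: project the target onto the quantile class
            # (the exact minimizer of the quantile-regression loss).
            Z_new[s, a] = np.quantile(y, taus)
    Z = Z_new

# The mean of the estimated return distribution recovers Q^pi.
Q_est = Z.mean(axis=2)

# Ground-truth Q^pi by ordinary policy evaluation, for comparison.
V = np.zeros(n_states)
for _ in range(500):
    V = reward[np.arange(n_states), pi] + gamma * V[next_state[np.arange(n_states), pi]]
Q_true = reward + gamma * V[next_state]
```

In the non-tabular settings the paper targets, the per-(s, a) quantile table would be replaced by a function approximator trained on offline transitions, and the exact projection by an approximate regression step.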