A Principled Path to Fitted Distributional Evaluation

📅 2025-06-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses the lack of a unified theoretical framework for distributional off-policy evaluation (OPE). To this end, it proposes the Fitted Distributional Evaluation (FDE) framework, which generalizes fitted Q-evaluation to return-distribution estimation. Leveraging properties of the distributional Bellman operator, FDE integrates distributional reinforcement learning, offline policy evaluation, and function approximation to yield a provably convergent iterative algorithm applicable to non-tabular settings. FDE is the first systematic framework to formalize design principles for distributional OPE methods, unifying existing disparate approaches and filling a critical gap in the theoretical foundations of distributional OPE. Experiments on LQR and Atari benchmarks demonstrate that FDE achieves significantly higher estimation accuracy and stability than state-of-the-art methods, validating its effectiveness and generalizability.

📝 Abstract
In reinforcement learning, distributional off-policy evaluation (OPE) focuses on estimating the return distribution of a target policy using offline data collected under a different policy. This work focuses on extending the widely used fitted-Q evaluation -- developed for expectation-based reinforcement learning -- to the distributional OPE setting. We refer to this extension as fitted distributional evaluation (FDE). While only a few related approaches exist, there remains no unified framework for designing FDE methods. To fill this gap, we present a set of guiding principles for constructing theoretically grounded FDE methods. Building on these principles, we develop several new FDE methods with convergence analysis and provide theoretical justification for existing methods, even in non-tabular environments. Extensive experiments, including simulations on linear quadratic regulators and Atari games, demonstrate the superior performance of the FDE methods.
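To make the distributional Bellman operator mentioned above concrete, here is a minimal sketch of a tabular categorical backup for policy evaluation on a toy two-state MDP. This is a generic distributional-RL illustration, not the paper's FDE algorithm: the MDP, the atom grid, and all variable names are invented for this example.

```python
import numpy as np

# Toy setup (all values assumed for illustration): a 2-state MDP with
# transition matrix P under the target policy and state rewards r.
n_states, n_atoms = 2, 51
v_min, v_max = 0.0, 10.0
support = np.linspace(v_min, v_max, n_atoms)  # fixed atom locations
gamma = 0.9
P = np.array([[0.8, 0.2],
              [0.3, 0.7]])
r = np.array([1.0, 0.0])

# dist[s] is a probability vector over `support`, estimating the return
# distribution from state s; initialize uniformly.
dist = np.full((n_states, n_atoms), 1.0 / n_atoms)

def bellman_backup(dist):
    """One application of the distributional Bellman operator, followed by
    a categorical projection back onto the fixed support."""
    new_dist = np.zeros_like(dist)
    delta = support[1] - support[0]
    for s in range(n_states):
        # Shift-and-scale the support: r(s) + gamma * z, clipped to the grid.
        tz = np.clip(r[s] + gamma * support, v_min, v_max)
        # Fractional index of each shifted atom on the fixed grid.
        b = (tz - v_min) / delta
        lo = np.floor(b).astype(int)
        hi = np.minimum(lo + 1, n_atoms - 1)
        frac = b - lo
        for s_next in range(n_states):
            p = P[s, s_next] * dist[s_next]
            # Split each atom's mass between its two neighbouring grid points.
            np.add.at(new_dist[s], lo, p * (1.0 - frac))
            np.add.at(new_dist[s], hi, p * frac)
    return new_dist

# Iterate to (approximate) the fixed point of the projected operator.
for _ in range(200):
    dist = bellman_backup(dist)

# The mean of the fitted distribution recovers the ordinary value function.
v_dist = dist @ support
```

Because the linear-interpolation projection preserves means, `v_dist` converges to the solution of the standard Bellman equation `V = r + gamma * P @ V`; FDE-style methods replace this tabular iteration with function approximation fitted from offline data.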
Problem

Research questions and friction points this paper is trying to address.

Extends fitted-Q evaluation to distributional off-policy reinforcement learning
Provides principles for designing fitted distributional evaluation methods
Develops new methods with convergence analysis and theoretical justification
Innovation

Methods, ideas, or system contributions that make the work stand out.

Extends fitted-Q evaluation to distributional OPE
Introduces theoretical principles for FDE methods
Validates performance via simulations and Atari
Sungee Hong
Texas A&M University
Reinforcement learning · Dimension reduction · Functional Data Analysis
Jiayi Wang
University of Texas at Dallas
Zhengling Qi
George Washington University
Raymond Ka Wai Wong
Texas A&M University