Efficient Multi-Policy Evaluation for Reinforcement Learning

📅 2024-08-16
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Off-policy evaluation (OPE) for multiple target policies suffers from low sample efficiency, high variance, and the need for repeated data collection. Method: This paper proposes a unified off-policy evaluation framework based on a shared behavior policy. Contribution/Results: We theoretically establish, for the first time, the existence of a single customized behavior policy enabling consistent, unbiased estimation of multiple target policies using significantly fewer samples than individual on-policy rollouts. Building on this insight, we develop an optimization method that jointly minimizes estimation variance via importance sampling and derives a low-variance estimator supporting simultaneous multi-policy evaluation. Experiments across standard benchmarks demonstrate substantial improvements: average variance reduction of 35%–62% over state-of-the-art methods and 2–5× higher sample efficiency, achieving new state-of-the-art performance.

Technology Category

Application Category

📝 Abstract
To unbiasedly evaluate multiple target policies, the dominant approach among RL practitioners is to run and evaluate each target policy separately. However, this evaluation method is far from efficient because samples are not shared across policies, and running target policies to evaluate themselves is actually not optimal. In this paper, we address these two weaknesses by designing a tailored behavior policy to reduce the variance of estimators across all target policies. Theoretically, we prove that executing this behavior policy with manyfold fewer samples outperforms on-policy evaluation on every target policy under characterized conditions. Empirically, we show our estimator has a substantially lower variance compared with previous best methods and achieves state-of-the-art performance in a broad range of environments.
Problem

Research questions and friction points this paper is trying to address.

Efficient Reinforcement Learning Evaluation
Result Variability Reduction
Data Reusability for Policies
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reinforcement Learning
Evaluation Strategy
Stability Enhancement
🔎 Similar Papers
S
Shuze Liu
Department of Computer Science, University of Virginia
Y
Yuxin Chen
School of Arts and Science, University of Virginia
Shangtong Zhang
Shangtong Zhang
University of Virginia
reinforcement learningstochastic approximation