🤖 AI Summary
Offline policy evaluation and learning face challenges including high variance, low-quality propensity scores, and heavy-tailed reward distributions. To address these, this paper introduces the log-sum-exponential (LSE) operator into off-policy estimation, proposing a novel LSE-weighted estimator. Compared with standard inverse propensity scoring (IPS), the LSE estimator reduces variance and is robust to heavy-tailed rewards. Under the assumption that the weighted rewards have a bounded (1+ε)-th moment, the paper derives upper bounds on the estimator's bias and variance and establishes a regret bound with convergence rate O(n^{-ε/(1+ε)}), where n is the number of logged samples. Experiments show that, compared to IPS and other baselines, the LSE estimator achieves significant improvements in both policy-evaluation accuracy and the performance of the learned policy.
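For reference, one standard parameterization of the log-sum-exp operator over weighted rewards $z_1, \dots, z_n$ (the paper's exact definition may differ in sign convention or scaling) is

$$\mathrm{LSE}_\lambda(z_1, \dots, z_n) = \frac{1}{\lambda}\log\Bigl(\frac{1}{n}\sum_{i=1}^{n} e^{\lambda z_i}\Bigr),$$

where $z_i$ is the importance-weighted reward of the $i$-th logged sample and $\lambda$ is a tuning parameter. As $\lambda \to 0$ the operator recovers the ordinary sample mean used by IPS, while a negative $\lambda$ tempers the influence of very large weighted rewards, which is what yields robustness to heavy tails.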
📝 Abstract
Off-policy learning and evaluation leverage logged bandit feedback datasets, which contain context, action, propensity score, and feedback for each data point. These scenarios face significant challenges due to high variance, as well as poor performance under low-quality propensity scores and heavy-tailed reward distributions. We address these issues by introducing a novel estimator based on the log-sum-exponential (LSE) operator, which outperforms traditional inverse propensity score estimators. Our LSE estimator demonstrates variance reduction and robustness under heavy-tailed conditions. For off-policy evaluation, we derive upper bounds on the estimator's bias and variance. In the off-policy learning scenario, we establish bounds on the regret (the performance gap between our LSE estimator and the optimal policy), assuming a bounded $(1+\epsilon)$-th moment of the weighted reward. Notably, we achieve a convergence rate of $O(n^{-\epsilon/(1+\epsilon)})$ for the regret bounds, where $\epsilon \in [0,1]$ and $n$ is the size of the logged bandit feedback dataset. Theoretical analysis is complemented by comprehensive empirical evaluations in both off-policy learning and evaluation scenarios, confirming the practical advantages of our approach. The code for our estimator is available at the following link: https://github.com/armin-behnamnia/lse-offpolicy-learning.
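As a concrete illustration, below is a minimal sketch (not the authors' implementation, which lives at the linked repository) of how a plain IPS estimate and an LSE-smoothed estimate could be computed from logged data. The function names and the `lam` parameter are hypothetical, and the exact parameterization used in the paper may differ.

```python
import numpy as np

def ips_estimate(rewards, target_probs, logging_probs):
    """Standard inverse propensity score (IPS) value estimate:
    the sample mean of importance-weighted rewards."""
    weights = target_probs / logging_probs
    return np.mean(weights * rewards)

def lse_estimate(rewards, target_probs, logging_probs, lam=-1.0):
    """Log-sum-exponential (LSE) smoothed value estimate (sketch).

    `lam` is a hypothetical tuning parameter: a negative value tempers
    the contribution of very large weighted rewards, the source of the
    estimator's robustness to heavy tails. As lam -> 0 this recovers
    the plain IPS estimate."""
    z = (target_probs / logging_probs) * rewards   # importance-weighted rewards
    a = lam * z
    m = np.max(a)                                  # shift for numerical stability
    # (1 / lam) * log( mean( exp(lam * z) ) ), computed stably
    return (m + np.log(np.mean(np.exp(a - m)))) / lam

# Toy usage with a heavy-tailed reward sample
rng = np.random.default_rng(0)
r = rng.pareto(1.5, size=1000)                     # heavy-tailed rewards
p_target = np.full(1000, 0.5)
p_logging = np.full(1000, 0.4)
print(ips_estimate(r, p_target, p_logging), lse_estimate(r, p_target, p_logging))
```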