🤖 AI Summary
This work studies the sample complexity of offline KL-regularized contextual bandits under single-policy concentrability. We improve the best known upper bound from $\tilde{O}(\varepsilon^{-2})$ to $\tilde{O}(\varepsilon^{-1})$, achieving a near-optimal convergence rate; this is the first such result. Our method introduces a novel covariance-based analysis framework that leverages pessimistic estimation and the strong convexity of the KL divergence, thereby avoiding reliance on uniform control over the function class. Through KL-regularized policy optimization, conditional non-negative gap control, and the derivation of a covariance-type risk upper bound, we rigorously establish the $\tilde{O}(\varepsilon^{-1})$ upper bound and match it with a tight $\tilde{\Omega}(\varepsilon^{-1})$ lower bound. Furthermore, we extend our approach to contextual dueling bandits while preserving the same near-optimal sample complexity.
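For context, a standard form of the KL-regularized objective that this line of work analyzes (a sketch in our own notation, not taken verbatim from the paper: $r$ is the true reward, $d_0$ the context distribution, $\pi_{\mathrm{ref}}$ the reference policy, and $\eta > 0$ the regularization strength) is

$$\max_{\pi}\;\mathbb{E}_{x \sim d_0,\, a \sim \pi(\cdot \mid x)}\big[r(x,a)\big] \;-\; \frac{1}{\eta}\,\mathbb{E}_{x \sim d_0}\Big[\mathrm{KL}\big(\pi(\cdot \mid x) \,\big\|\, \pi_{\mathrm{ref}}(\cdot \mid x)\big)\Big],$$

whose maximizer admits the closed form $\pi^*(a \mid x) \propto \pi_{\mathrm{ref}}(a \mid x)\exp\big(\eta\, r(x,a)\big)$. The strong convexity of the KL term in $\pi$ is the property the covariance-based analysis exploits.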
📝 Abstract
KL-regularized policy optimization has become a workhorse in learning-based decision making, yet its theoretical understanding remains very limited. Although recent progress has been made towards settling the sample complexity of KL-regularized contextual bandits, existing sample complexity bounds are either $\tilde{O}(\epsilon^{-2})$ under single-policy concentrability or $\tilde{O}(\epsilon^{-1})$ under all-policy concentrability. In this paper, we propose the \emph{first} algorithm with $\tilde{O}(\epsilon^{-1})$ sample complexity under single-policy concentrability for offline contextual bandits. Our algorithm is designed for general function approximation and is based on the principle of \emph{pessimism in the face of uncertainty}. The core of our proof leverages the strong convexity of the KL regularization and the conditional non-negativity of the gap between the true reward and its pessimistic estimator to refine a mean-value-type risk upper bound to its extreme. This in turn leads to a novel covariance-based analysis, effectively bypassing the need for uniform control over the discrepancy between any two functions in the function class. The near-optimality of our algorithm is demonstrated by an $\tilde{\Omega}(\epsilon^{-1})$ lower bound. Furthermore, we extend our algorithm to contextual dueling bandits and achieve a similarly near-optimal sample complexity.
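To illustrate the pessimism principle concretely, here is a minimal sketch in hypothetical notation (assumed for illustration, not the paper's construction: $\tilde{r}_{\mathcal{D}}$ is a reward estimate fit to the offline dataset $\mathcal{D}$, and $b(x,a)$ is a data-dependent uncertainty bonus): a lower-confidence reward estimate is plugged into the KL-regularized objective,

$$\hat{r}(x,a) \;=\; \tilde{r}_{\mathcal{D}}(x,a) - b(x,a), \qquad \hat{\pi}(a \mid x) \;\propto\; \pi_{\mathrm{ref}}(a \mid x)\exp\big(\eta\,\hat{r}(x,a)\big),$$

so that, with high probability, the gap $r(x,a) - \hat{r}(x,a)$ is non-negative on the support of the data distribution; this is the conditional non-negativity that the covariance-type risk bound refines.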