AI Summary
This work addresses offline reinforcement learning with continuous action spaces under unobserved confounding, a setting beyond the scope of existing methods, which are largely restricted to discrete actions or to POMDP-based policy evaluation. We establish a novel nonparametric identification result for policy value estimation in the infinite-horizon setting in the presence of latent confounders. Building on this, we propose a minimax estimator of the policy value and a policy-gradient-based optimization algorithm. We provide rigorous theoretical guarantees: consistency of the estimator, a finite-sample error bound, and a regret bound for the learned policy. Empirical evaluation on synthetic benchmarks and German Family Panel data demonstrates that our method improves both policy evaluation accuracy and policy optimization under confounding, outperforming prior approaches in continuous-action settings.
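For orientation, the two central objects named above can be sketched in a generic form: the infinite-horizon (discounted) policy value, and a minimax estimator defined through an adversarial moment condition. This is a schematic template only, not the paper's formulation; the bridge functions, proxy variables, and the function classes $\mathcal{Q}$ and $\mathcal{F}$ that the paper actually uses to handle latent confounders are placeholders here.

```latex
% Schematic only: a generic discounted policy value and a
% Bellman-residual-style minimax estimator. The paper's actual
% construction identifies a confounding-bridge analogue of q from
% proxies; Q and F below are placeholder function classes, and
% q(s, \pi) abbreviates E_{a ~ \pi(. | s)} [ q(s, a) ].
\[
  V(\pi) \;=\; \mathbb{E}_{\pi}\!\Big[\textstyle\sum_{t=0}^{\infty} \gamma^{t} R_t\Big],
  \qquad
  \widehat{V}(\pi) \;=\; \frac{1}{n}\sum_{i=1}^{n} \widehat{q}(S_i, \pi),
\]
\[
  \widehat{q} \;=\; \operatorname*{arg\,min}_{q \in \mathcal{Q}}\;
  \max_{f \in \mathcal{F}}\;
  \frac{1}{n}\sum_{i=1}^{n}
  \Big( R_i + \gamma\, q(S_i', \pi) - q(S_i, A_i) \Big)\, f(S_i, A_i)
  \;-\; \lambda \lVert f \rVert_{\mathcal{F}}^{2}.
\]
```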
Abstract
This paper addresses the challenge of offline policy learning in reinforcement learning with continuous action spaces when unmeasured confounders are present. While most existing research focuses on policy evaluation within partially observable Markov decision processes (POMDPs) and assumes discrete action spaces, we advance this field by establishing a novel identification result that enables nonparametric estimation of the policy value of a given target policy under an infinite-horizon framework. Leveraging this identification, we develop a minimax estimator and introduce a policy-gradient-based algorithm to learn the in-class optimal policy that maximizes the estimated policy value. Furthermore, we provide theoretical guarantees, including consistency of the estimator, a finite-sample error bound, and a regret bound for the resulting policy. Extensive simulations and a real-world application using the German Family Panel data demonstrate the effectiveness of our proposed methodology.
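To make the optimization step concrete, below is a minimal, hedged sketch of the policy-gradient loop the abstract describes: a parametric continuous-action policy is updated by gradient ascent on an estimated policy value. Everything here is an illustrative assumption, in particular the `GaussianPolicy` architecture, the toy logged data, and the `estimated_policy_value` surrogate, which is a self-normalized importance-weighting stand-in rather than the paper's minimax estimator.

```python
# Hedged sketch (not the paper's code): gradient-based search for an
# in-class optimal continuous-action policy that maximizes an
# estimated policy value from logged data.
import torch
import torch.nn as nn

class GaussianPolicy(nn.Module):
    """Continuous-action policy: state -> Normal(mu(s), sigma)."""
    def __init__(self, state_dim: int, action_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, action_dim),
        )
        self.log_sigma = nn.Parameter(torch.zeros(action_dim))

    def forward(self, s: torch.Tensor) -> torch.distributions.Normal:
        return torch.distributions.Normal(self.net(s), self.log_sigma.exp())

def estimated_policy_value(policy, states, actions, rewards):
    """Placeholder differentiable value estimate.

    Stand-in for the paper's minimax estimator: a self-normalized
    importance-weighting style surrogate that reweights logged rewards
    by the target policy's likelihood of the logged actions.
    """
    log_probs = policy(states).log_prob(actions).sum(-1)   # (n,)
    weights = torch.softmax(log_probs, dim=0)              # self-normalized
    return (weights * rewards).sum()

# Toy logged transitions with hypothetical dimensions.
n, state_dim, action_dim = 256, 4, 1
states = torch.randn(n, state_dim)
actions = torch.randn(n, action_dim)
rewards = torch.randn(n)

policy = GaussianPolicy(state_dim, action_dim)
opt = torch.optim.Adam(policy.parameters(), lr=1e-2)

for step in range(200):
    opt.zero_grad()
    loss = -estimated_policy_value(policy, states, actions, rewards)
    loss.backward()   # ascend the estimated policy value
    opt.step()
```

The property the sketch preserves is that the value estimate is differentiable in the policy parameters, so the in-class search reduces to stochastic gradient ascent; in the paper, this loop would be driven by the minimax estimator built on the identification result rather than the placeholder surrogate above.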