Robust Fitted-Q-Evaluation and Iteration under Sequentially Exogenous Unobserved Confounders

📅 2023-02-01
🏛️ arXiv.org
📈 Citations: 12
Influential: 2
🤖 AI Summary
Offline reinforcement learning (RL) in high-stakes domains such as medicine suffers from sequentially exogenous unobserved confounding, which biases policy evaluation and optimization. This work addresses robust policy evaluation and optimization under a sensitivity model. The authors propose Orthogonalized Robust Fitted-Q-Iteration (ORFQI), which embeds the closed-form solution of the robust Bellman operator into a loss-minimization framework via orthogonalization, and add a bias correction to quantile estimation for statistical robustness at low computational cost. Theoretically, they derive finite-sample complexity bounds for the estimator. Empirically, the approach is validated in simulations and on real-world longitudinal sepsis data, demonstrating significantly improved robustness in policy evaluation. Moreover, the resulting conservative confidence bounds can warm-start optimistic online RL, facilitating safer deployment in clinical settings.
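Under the common marginal-sensitivity-style bound, where the unobserved confounder can shift likelihood ratios only within [1/Λ, Λ], the worst-case next-state value admits a closed form: a mixture of the mean and a lower-tail expectation at level τ = 1/(1+Λ). Below is a minimal Python sketch of that computation on sampled next-state values; the function name and the specific weight-bound parameterization are assumptions for illustration, not the paper's verbatim operator.

```python
import numpy as np

def worst_case_value(v_next, lam):
    """Pessimistic mean of sampled next-state values V(s') over
    likelihood-ratio weights w in [1/lam, lam] that renormalize to one.

    Closed form (assumed parameterization): the adversary puts weight
    lam on the lowest tau-fraction of outcomes and 1/lam on the rest,
    with tau = 1/(1 + lam), giving
        (1/lam) * E[V] + (1 - 1/lam) * E[V | V <= q_tau].
    """
    tau = 1.0 / (1.0 + lam)           # tail level implied by the weight budget
    q = np.quantile(v_next, tau)      # empirical tau-quantile
    # Rockafellar-Uryasev form of the lower-tail mean; it is first-order
    # insensitive (orthogonal) to errors in the plug-in quantile q.
    tail_mean = q - np.mean(np.maximum(q - v_next, 0.0)) / tau
    return np.mean(v_next) / lam + (1.0 - 1.0 / lam) * tail_mean
```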
📝 Abstract
Offline reinforcement learning is important in domains such as medicine, economics, and e-commerce where online experimentation is costly, dangerous or unethical, and where the true model is unknown. However, most methods assume all covariates used in the behavior policy's action decisions are observed. Though this assumption, sequential ignorability/unconfoundedness, likely does not hold in observational data, most of the data that accounts for selection into treatment may be observed, motivating sensitivity analysis. We study robust policy evaluation and policy optimization in the presence of sequentially-exogenous unobserved confounders under a sensitivity model. We propose and analyze orthogonalized robust fitted-Q-iteration that uses closed-form solutions of the robust Bellman operator to derive a loss minimization problem for the robust Q function, and adds a bias-correction to quantile estimation. Our algorithm enjoys the computational ease of fitted-Q-iteration and statistical improvements (reduced dependence on quantile estimation error) from orthogonalization. We provide sample complexity bounds, insights, and show effectiveness both in simulations and on real-world longitudinal healthcare data of treating sepsis. In particular, our model of sequential unobserved confounders yields an online Markov decision process, rather than a partially observed Markov decision process: we illustrate how this can enable warm-starting optimistic reinforcement learning algorithms with valid robust bounds from observational data.
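For concreteness, here is a hedged sketch of the overall fitted-Q-iteration structure the abstract describes: evaluate V(s') from the current Q, fit the conditional τ-quantile of V(s') by pinball-loss regression, build orthogonalized pessimistic targets from the closed form, and regress Q onto them. The scikit-learn regressors, array-shaped inputs, and finite action set are assumptions of this sketch; the paper's estimator and its bias-corrected quantile regression differ in detail.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def robust_fqi(s, a, r, s_next, actions, lam, gamma=0.99, n_iters=50):
    """Sketch of orthogonalized robust fitted-Q-iteration (lam >= 1).

    Per iteration: (1) evaluate V(s') = max_a' Q(s', a'); (2) fit the
    conditional tau-quantile of V(s') given (s, a) via pinball loss;
    (3) build orthogonalized pessimistic targets from the closed form;
    (4) regress Q onto the targets.
    """
    tau = 1.0 / (1.0 + lam)
    x = np.column_stack([s, a])                    # features for Q(s, a)
    q_model = None
    for _ in range(n_iters):
        if q_model is None:
            v_next = np.zeros(len(r))              # Q_0 = 0
        else:
            v_next = np.max(
                [q_model.predict(np.column_stack([s_next, np.full(len(r), b)]))
                 for b in actions], axis=0)
        quant = GradientBoostingRegressor(loss="quantile", alpha=tau)
        quant.fit(x, v_next)                       # conditional tau-quantile
        q_hat = quant.predict(x)
        # orthogonalized tail pseudo-outcome: robust to errors in q_hat
        tail = q_hat - np.maximum(q_hat - v_next, 0.0) / tau
        target = r + gamma * (v_next / lam + (1.0 - 1.0 / lam) * tail)
        q_model = GradientBoostingRegressor().fit(x, target)
    return q_model
```

The orthogonalized tail term q̂ − (q̂ − V)⁺/τ is first-order insensitive to errors in q̂, which is the reduced dependence on quantile estimation error that the abstract attributes to orthogonalization.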
Problem

Research questions and friction points this paper is trying to address.

Addresses offline RL under unobserved confounding in sequential decision-making
Proposes robust policy evaluation under a sensitivity model for observational data
Enables conservative, valid policy optimization from observational datasets in healthcare and economics
Innovation

Methods, ideas, or system contributions that make the work stand out.

Orthogonalized robust fitted-Q-iteration that exploits closed-form solutions of the robust Bellman operator
Bias correction added to quantile estimation, reducing dependence on quantile estimation error
Models sequentially exogenous unobserved confounders as an online MDP rather than a POMDP, enabling warm-started online RL (see the sketch after this list)
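Because the confounding model yields an MDP rather than a POMDP, robust value intervals estimated offline remain valid online and can initialize an optimistic learner. A minimal tabular sketch of the warm-start idea, assuming interval arrays q_lo/q_hi produced by robust fitted-Q-evaluation; the clipping rule and all names here are illustrative assumptions.

```python
import numpy as np

def warm_started_q_update(q, q_lo, q_hi, s, a, r, s_next, alpha=0.1, gamma=0.99):
    """One optimistic Q-learning step that keeps the estimate inside the
    robust offline interval [q_lo, q_hi], so the bounds stay valid online."""
    target = r + gamma * q[s_next].max()
    q[s, a] += alpha * (target - q[s, a])
    q[s, a] = np.clip(q[s, a], q_lo[s, a], q_hi[s, a])
    return q

# Warm start at the optimistic (upper) end of the offline interval, so
# exploration concentrates where the interval is still wide.
n_states, n_actions = 10, 2
q_lo = np.zeros((n_states, n_actions))   # robust lower bounds (assumed given)
q_hi = np.ones((n_states, n_actions))    # robust upper bounds (assumed given)
q = q_hi.copy()
q = warm_started_q_update(q, q_lo, q_hi, s=3, a=1, r=0.5, s_next=4)
```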