🤖 AI Summary
In contextual Markov decision processes (C-MDPs) whose context is unobserved, the hidden context confounds the offline data, and conventional model-based reinforcement learning is fundamentally inconsistent: the mechanisms learned under the behavior policy do not match the intervention target. To address this, we propose a causally consistent model-learning and planning framework: (i) proximal offline policy evaluation via proxy variables; (ii) construction of a behavior-averaged transition model that defines an identifiable surrogate MDP; and (iii) joint modeling and optimization grounded in the maximum causal entropy principle. Our approach is the first to enable unbiased state-policy modeling and causally consistent Bellman iteration without observing the confounding context. It yields consistent estimators for both the reward and transition functions, substantially improving the stability and reliability of policy evaluation and planning under unmeasured confounding.
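To make steps (ii) and (iii) concrete, here is a minimal tabular sketch, with hypothetical data and variable names (nothing here is the paper's actual implementation): the behavior-averaged transition kernel is estimated by empirical frequencies from logged transitions, and a state-based policy is then evaluated by Bellman iteration on the resulting surrogate MDP. The proximal reward estimate of step (i) is stood in for by a plain empirical mean, purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical offline dataset (s, a, r, s') logged by an unknown
# behavior policy in a small tabular environment.
n_states, n_actions, n_samples = 4, 2, 5000
s = rng.integers(0, n_states, n_samples)
a = rng.integers(0, n_actions, n_samples)
s_next = rng.integers(0, n_states, n_samples)
r = rng.random(n_samples)

# (ii) Behavior-averaged transition model: empirical (s, a) -> s'
# frequencies marginalize over the unobserved context and define the
# surrogate MDP's transition kernel P_bar.
counts = np.zeros((n_states, n_actions, n_states))
np.add.at(counts, (s, a, s_next), 1.0)
sa_counts = counts.sum(axis=2)
P_bar = counts / np.maximum(sa_counts[:, :, None], 1.0)

# Placeholder reward model: empirical mean reward per (s, a). In the
# proposed framework this slot is filled by the proximal OPE estimate.
R_sum = np.zeros((n_states, n_actions))
np.add.at(R_sum, (s, a), r)
R_bar = R_sum / np.maximum(sa_counts, 1.0)

# (iii) Bellman iteration for a state-based policy pi on the surrogate
# MDP; the operator is well defined because P_bar and R_bar depend only
# on observable quantities.
pi = np.full((n_states, n_actions), 1.0 / n_actions)  # uniform policy
gamma, V = 0.95, np.zeros(n_states)
for _ in range(1000):
    Q = R_bar + gamma * P_bar @ V      # Q[s, a] via batched matmul
    V_new = (pi * Q).sum(axis=1)       # V[s] = sum_a pi(a|s) Q(s, a)
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new

print(V.shape)  # → (4,)
```

Because the rewards lie in [0, 1], the fixed point is bounded by 1/(1 - gamma); the iteration is a gamma-contraction, so convergence to that fixed point is guaranteed.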
📝 Abstract
We investigate model-based reinforcement learning in contextual Markov decision processes (C-MDPs) in which the context is unobserved and induces confounding in the offline dataset. In such settings, conventional model-learning methods are fundamentally inconsistent, as the transition and reward mechanisms generated under a behavior policy do not correspond to the interventional quantities required for evaluating a state-based policy. To address this issue, we adapt a proximal off-policy evaluation approach that identifies the confounded reward expectation using only observable state-action-reward trajectories under mild invertibility conditions on proxy variables. When combined with a behavior-averaged transition model, this construction yields a surrogate MDP whose Bellman operator is well defined and consistent for state-based policies, and which integrates seamlessly with the maximum causal entropy (MaxCausalEnt) model-learning framework. The proposed formulation enables principled model learning and planning in confounded environments where contextual information is unobserved, unavailable, or impractical to collect.
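The proximal identification step can be sketched in the standard notation of proximal causal inference (the symbols $W$ and $Z$ below denote generic outcome and action proxies and are assumptions for illustration, not necessarily the paper's notation). One seeks a reward bridge function $h$ solving the integral equation

```latex
\mathbb{E}\!\left[h(W, s, a) \mid Z, S = s, A = a\right]
  \;=\; \mathbb{E}\!\left[R \mid Z, S = s, A = a\right],
```

and, under suitable invertibility (completeness) conditions on the proxies, the interventional reward is then identified from observable quantities as

```latex
\mathbb{E}\!\left[R \mid \mathrm{do}(a),\, S = s\right]
  \;=\; \mathbb{E}\!\left[h(W, s, a) \mid S = s\right].
```

This is the sense in which the confounded reward expectation is recoverable without ever observing the context itself.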