🤖 AI Summary
This work addresses the low sample efficiency of reinforcement learning by studying Exo-MDPs—Markov decision processes with an exogenous-endogenous state decomposition: exogenous states evolve stochastically, independently of actions, while endogenous states evolve deterministically given the actions and the full state. The core challenge is achieving sample complexity decoupled from the number of actions and the size of the endogenous state space when exogenous states are unobserved. To this end, the authors first establish a representational equivalence among discrete MDPs, Exo-MDPs, and linear mixture MDPs. They then propose an algorithm combining optimistic policy optimization with exogenous state estimation, achieving a regret bound of $\tilde{O}(H^{3/2} d \sqrt{K})$, where $d$ is the size of the exogenous state space—nearly matching the information-theoretic lower bound. Experiments on inventory control demonstrate a several-fold reduction in sample requirements compared to standard RL methods.
📝 Abstract
We study Exo-MDPs, a structured class of Markov Decision Processes (MDPs) where the state space is partitioned into exogenous and endogenous components. Exogenous states evolve stochastically, independent of the agent's actions, while endogenous states evolve deterministically based on both state components and actions. Exo-MDPs are useful for applications including inventory control, portfolio management, and ride-sharing. Our first result is structural, establishing a representational equivalence between the classes of discrete MDPs, Exo-MDPs, and discrete linear mixture MDPs. Specifically, any discrete MDP can be represented as an Exo-MDP, and the transition and reward dynamics can be written as linear functions of the exogenous state distribution, showing that Exo-MDPs are instances of linear mixture MDPs. For unobserved exogenous states, we prove a regret upper bound of $O(H^{3/2}d\sqrt{K})$ over $K$ trajectories of horizon $H$, where $d$ is the size of the exogenous state space, and establish nearly-matching lower bounds. Our findings demonstrate how Exo-MDPs decouple sample complexity from action and endogenous state sizes, and we validate our theoretical insights with experiments on inventory control.
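To make the exogenous-endogenous decomposition concrete, here is a minimal sketch of an Exo-MDP transition in the inventory-control setting the abstract mentions. All specifics (the demand distribution, capacity, revenue and ordering costs, and the policy) are illustrative assumptions, not details from the paper: demand plays the role of the exogenous state and is sampled independently of the action, while inventory is the endogenous state and updates deterministically given the demand and the order.

```python
import random

def exo_step(rng: random.Random) -> int:
    """Exogenous state (demand): evolves stochastically, independent of actions.
    Hypothetical discrete demand distribution for illustration."""
    return rng.choice([0, 1, 2, 3])

def endo_step(inventory: int, order: int, demand: int, capacity: int = 10) -> int:
    """Endogenous state (inventory): a deterministic function of the
    current state components and the action (order quantity)."""
    return max(0, min(capacity, inventory + order) - demand)

def rollout(horizon: int, policy, seed: int = 0) -> float:
    """Simulate one trajectory of the Exo-MDP and return its total reward."""
    rng = random.Random(seed)
    inventory, total_reward = 5, 0.0
    for _ in range(horizon):
        order = policy(inventory)
        demand = exo_step(rng)  # sampled without reference to the action
        sold = min(min(10, inventory + order), demand)
        total_reward += 1.0 * sold - 0.2 * order  # hypothetical revenue/cost
        inventory = endo_step(inventory, order, demand)
    return total_reward
```

Note that all randomness lives in `exo_step`: once the demand sequence is fixed, the endogenous inventory trajectory is fully determined by the policy, which is exactly the structure the paper exploits to decouple sample complexity from the action and endogenous state space sizes.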