🤖 AI Summary
This work addresses random observation delays in online reinforcement learning, where the agent observes the current state only after a stochastic number of time steps. The authors propose an efficient algorithm that integrates state augmentation with an upper confidence bound (UCB) mechanism. By modeling the delayed-observation problem as a tabular Markov decision process with structured unknown dynamics, they establish the first minimax optimality result for this setting. With state space size $S$, action space size $A$, episode length $H$, number of episodes $K$, and maximum delay $D_{\max}$, the algorithm achieves a regret bound of $\widetilde{O}(H\sqrt{D_{\max} S A K})$, which matches their lower bound up to logarithmic factors and is therefore minimax optimal modulo logarithmic terms.
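The UCB mechanism mentioned above typically adds an optimism bonus to value estimates that shrinks as a state-action pair is visited more often. The paper's exact bonus is not given here; the following is a minimal illustrative sketch of a standard Hoeffding-style bonus, where the constant `c` and the confidence level `delta` are hypothetical choices, not taken from the paper.

```python
import math

def ucb_bonus(n_visits: int, H: int, delta: float = 0.01, c: float = 1.0) -> float:
    """Hoeffding-style exploration bonus, scaled by the horizon H.

    Illustrative only: c and delta are placeholder constants, and the
    paper's actual bonus (tuned for delayed observations) may differ.
    """
    # Bonus decays like 1/sqrt(n), encouraging visits to rarely seen pairs.
    return c * H * math.sqrt(math.log(1.0 / delta) / max(n_visits, 1))
```

In an optimistic value iteration, such a bonus would be added to the estimated Q-value before taking the greedy action, so under-explored pairs look artificially attractive.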
📝 Abstract
We study reinforcement learning with delayed state observation, where the agent observes the current state after some random number of time steps. We propose an algorithm that combines the augmentation method and the upper confidence bound approach. For tabular Markov decision processes (MDPs), we derive a regret bound of $\tilde{\mathcal{O}}(H \sqrt{D_{\max} SAK})$, where $S$ and $A$ are the cardinalities of the state and action spaces, $H$ is the time horizon, $K$ is the number of episodes, and $D_{\max}$ is the maximum length of the delay. We also provide a matching lower bound up to logarithmic factors, showing the optimality of our approach. Our analytical framework formulates this problem as a special case of a broader class of MDPs whose transition dynamics decompose into a known component and an unknown but structured component. We establish general results for this abstract setting, which may be of independent interest.
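The augmentation method referenced in the abstract is commonly realized by folding the delay into the state: the agent's decision variable becomes the last observed state together with the actions taken since that observation, padded to the maximum delay $D_{\max}$ so the augmented space stays finite. This is a sketch of that standard construction, with hypothetical names (`augmented_state`, the `None` sentinel); the paper's precise augmentation may differ.

```python
def augmented_state(last_obs_state, pending_actions, d_max):
    """Build an augmented state from the most recently observed state and
    the buffer of actions taken since it was observed.

    Illustrative sketch of the generic augmentation idea: the buffer is
    padded with a sentinel (None) to fixed length d_max, so the augmented
    state space has size at most S * (A + 1)**d_max.
    """
    buf = list(pending_actions)[:d_max]          # at most d_max pending actions
    buf += [None] * (d_max - len(buf))           # pad so all states share one shape
    return (last_obs_state, tuple(buf))
```

For example, with `d_max = 4`, a last observed state `3`, and two actions taken since, the augmented state is `(3, (a1, a2, None, None))`; once a fresh observation arrives, the consumed actions are dropped from the buffer.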