Minimax Optimal Strategy for Delayed Observations in Online Reinforcement Learning

📅 2026-03-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses random observation delays in online reinforcement learning, where the agent observes the current state only after a stochastic number of time steps. The authors propose an efficient algorithm that combines state augmentation with an upper confidence bound (UCB) mechanism. By modeling the delayed-observation problem as a tabular Markov decision process whose unknown dynamics carry known structure, they establish the first minimax optimality theory for this setting: the algorithm's regret matches the theoretical lower bound up to logarithmic factors. With state space size $S$, action space size $A$, episode length $H$, number of episodes $K$, and maximum delay $D_{\max}$, the algorithm achieves a regret bound of $\widetilde{O}(H\sqrt{D_{\max} S A K})$.

📝 Abstract
We study reinforcement learning with delayed state observation, where the agent observes the current state after some random number of time steps. We propose an algorithm that combines the augmentation method and the upper confidence bound approach. For tabular Markov decision processes (MDPs), we derive a regret bound of $\tilde{\mathcal{O}}(H \sqrt{D_{\max} SAK})$, where $S$ and $A$ are the cardinalities of the state and action spaces, $H$ is the time horizon, $K$ is the number of episodes, and $D_{\max}$ is the maximum length of the delay. We also provide a matching lower bound up to logarithmic factors, showing the optimality of our approach. Our analytical framework formulates this problem as a special case of a broader class of MDPs, where their transition dynamics decompose into a known component and an unknown but structured component. We establish general results for this abstract setting, which may be of independent interest.
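The two ingredients described above can be illustrated with a minimal sketch. The snippet below is not the authors' algorithm; it only shows (a) how an augmented state can be formed as the last observed state plus the queue of actions taken since, and (b) a generic Hoeffding-style UCB exploration bonus over such augmented state-action pairs. The names `augmented_states` and `ucb_bonus`, and the specific bonus constant, are illustrative assumptions.

```python
import math
from itertools import product

def augmented_states(states, actions, d_max):
    """Enumerate augmented states for a delayed-observation MDP.

    Each augmented state pairs the last observed state with the
    tuple of up to d_max actions taken since that observation.
    """
    aug = []
    for d in range(d_max + 1):
        for s in states:
            for pending in product(actions, repeat=d):
                aug.append((s, pending))
    return aug

def ucb_bonus(visit_count, horizon, num_episodes, delta=0.05):
    """Generic Hoeffding-style exploration bonus for one augmented
    state-action pair, shrinking as the visit count grows."""
    n = max(visit_count, 1)
    return horizon * math.sqrt(2 * math.log(num_episodes * horizon / delta) / n)
```

Note that the augmented state space has $\sum_{d=0}^{D_{\max}} S A^{d}$ elements, which is where the $D_{\max}$ dependence in the regret bound enters; the paper's contribution is showing that only a $\sqrt{D_{\max}}$ factor is unavoidable.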
Problem

Research questions and friction points this paper is trying to address.

delayed observations
online reinforcement learning
Markov decision processes
regret bound
Innovation

Methods, ideas, or system contributions that make the work stand out.

delayed observations
minimax optimal regret
online reinforcement learning
upper confidence bound
augmented MDP
Harin Lee
Paul G. Allen School of Computer Science & Engineering, University of Washington, Seattle, Washington
Kevin Jamieson
Associate Professor, University of Washington
Active learning · experimental design · bandits · reinforcement learning