🤖 AI Summary
To address the curse of dimensionality arising from high-dimensional observations in block Markov decision processes (BMDPs), this work establishes a clustering-based regret analysis framework, demonstrating that accurate recovery of the underlying latent state structure measurably improves learning efficiency. The proposed algorithm proceeds in two phases: first, the latent structure is learned via random exploration; second, policy optimization is performed using the optimism-in-the-face-of-uncertainty principle adapted to the recovered clustering. Theoretically, the algorithm achieves a regret bound of $O(\sqrt{T} + n)$ on a large class of clusterable BMDPs, improving on the prior best bound of $O(\sqrt{T} + n^2)$, and a matching lower bound shows this is asymptotically optimal on that class. The result closes a gap between structural exploitation and regret analysis in BMDPs and quantifies the sample-efficiency gain conferred by accurate latent-state estimation in reinforcement learning.
📝 Abstract
The curse of dimensionality renders Reinforcement Learning (RL) impractical in many real-world settings with exponentially large state and action spaces. Yet, many environments exhibit exploitable structure that can accelerate learning. To formalize this idea, we study RL in Block Markov Decision Processes (BMDPs). BMDPs model problems with large observation spaces, but where transition dynamics are fully determined by latent states. Recent advances in clustering methods have enabled the efficient recovery of this latent structure. However, a regret analysis that exploits these techniques to determine their impact on learning performance remained open. We address this gap by providing a regret analysis that explicitly leverages clustering, demonstrating that accurate latent state estimation can indeed effectively speed up learning. Concretely, this paper analyzes a two-phase RL algorithm for BMDPs that first learns the latent structure through random exploration and then switches to an optimism-guided strategy adapted to the uncovered structure. This algorithm achieves a regret that is $O(\sqrt{T}+n)$ on a large class of BMDPs susceptible to clustering. Here, $T$ denotes the number of time steps, $n$ is the cardinality of the observation space, and the Landau notation $O(\cdot)$ holds up to constants and polylogarithmic factors. This improves the best prior bound, $O(\sqrt{T}+n^2)$, especially when $n$ is large. Moreover, we prove that no algorithm can achieve lower regret uniformly on this same class of BMDPs. This establishes that, on this class, the algorithm achieves asymptotic optimality.
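The first phase, recovering latent states by clustering observations with similar empirical transition statistics, can be illustrated on a toy example. The sketch below is not the paper's algorithm: the 6-observation environment, the L1 distance, and the greedy threshold rule are all illustrative assumptions. It shows only the core idea that observations emitted by the same latent state share a transition distribution, so enough random-exploration data makes them clusterable.

```python
import random
from collections import defaultdict

random.seed(0)

# Toy BMDP: 2 latent states, 6 observations (3 emitted per latent state).
# The transition kernel depends only on the (hidden) latent state.
latent_of = {o: o // 3 for o in range(6)}   # hidden decoding map (unknown to the learner)
P = {0: [0.8, 0.2], 1: [0.3, 0.7]}          # latent-state transition probabilities

def step(obs):
    """Sample the next observation: latent transition, then uniform emission."""
    s = latent_of[obs]
    s_next = 0 if random.random() < P[s][0] else 1
    return random.choice([3 * s_next, 3 * s_next + 1, 3 * s_next + 2])

# Phase 1: random exploration, recording empirical next-observation counts.
counts = {o: defaultdict(int) for o in range(6)}
obs = 0
for _ in range(20000):
    nxt = step(obs)
    counts[obs][nxt] += 1
    obs = nxt

def emp_dist(o):
    total = sum(counts[o].values())
    return [counts[o][j] / total for j in range(6)]

def l1(p, q):
    return sum(abs(a - b) for a, b in zip(p, q))

# Greedily cluster observations whose empirical distributions are L1-close;
# the 0.3 threshold is an assumption chosen for this toy instance.
clusters = []
for o in range(6):
    for c in clusters:
        if l1(emp_dist(o), emp_dist(c[0])) < 0.3:
            c.append(o)
            break
    else:
        clusters.append([o])

print(clusters)  # observations grouped by recovered latent state
```

With the recovered clusters in hand, the second phase would run an optimistic (UCB-style) algorithm on the small clustered MDP instead of the full observation space, which is where the regret improvement from $n^2$ to $n$ originates.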