🤖 AI Summary
To address the exploration challenge in reinforcement learning under sparse rewards, long horizons, and high stochasticity, this paper proposes an online, adaptive exploration mechanism grounded in epistemic uncertainty. Methodologically, it uses epistemic uncertainty as a real-time signal to guide Bayesian policy updates, enabling posterior sampling under sufficiently expressive priors within a discounted infinite-horizon MDP framework. Theoretically, it establishes a nearly minimax-optimal regret bound together with rigorous sample complexity guarantees. Empirically, the method delivers significant gains in sample efficiency, scalability, and performance stability across diverse challenging tasks, outperforming state-of-the-art Bayesian and heuristic exploration algorithms. A minimal illustrative sketch of the general idea follows.
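The sketch below is not the paper's $\texttt{EUBRL}$ algorithm; it is a minimal, commonly used proxy for epistemic-uncertainty-guided exploration, assuming a bootstrapped ensemble of tabular Q-functions whose disagreement stands in for posterior uncertainty. The class name, ensemble size, bonus scale, and bootstrap masking are all illustrative assumptions.

```python
import numpy as np

class EnsembleQAgent:
    """Illustrative sketch: ensemble disagreement as an epistemic signal.

    NOT the paper's EUBRL; a generic bootstrapped-ensemble proxy whose
    hyperparameters (n_ensemble, bonus_scale, mask rate) are assumptions.
    """

    def __init__(self, n_states, n_actions, n_ensemble=5,
                 lr=0.1, gamma=0.99, bonus_scale=1.0, seed=0):
        self.rng = np.random.default_rng(seed)
        # Small random init so ensemble members disagree where data is scarce.
        self.q = 0.01 * self.rng.standard_normal(
            (n_ensemble, n_states, n_actions))
        self.lr, self.gamma, self.bonus_scale = lr, gamma, bonus_scale

    def act(self, s):
        mean_q = self.q[:, s].mean(axis=0)       # value estimate
        epistemic = self.q[:, s].std(axis=0)     # ensemble disagreement
        # Uncertainty-weighted score: prefer actions whose value is
        # high or still poorly known; the bonus shrinks as members agree.
        return int(np.argmax(mean_q + self.bonus_scale * epistemic))

    def update(self, s, a, r, s_next, done):
        # Bootstrap masking: each member sees a random subset of
        # transitions, so ensemble spread tracks remaining uncertainty.
        for k in range(self.q.shape[0]):
            if self.rng.random() < 0.5:
                continue
            target = r if done else r + self.gamma * self.q[k, s_next].max()
            self.q[k, s, a] += self.lr * (target - self.q[k, s, a])
```

The key design point this proxy shares with epistemic guidance is that the exploration incentive is adaptive: it decays automatically wherever the (approximate) posterior concentrates, rather than following a fixed heuristic schedule.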
📝 Abstract
At the boundary between the known and the unknown, an agent inevitably confronts the dilemma of whether to explore or to exploit. Epistemic uncertainty reflects such boundaries, representing systematic uncertainty due to limited knowledge. In this paper, we propose a Bayesian reinforcement learning (RL) algorithm, $\texttt{EUBRL}$, which leverages epistemic guidance to achieve principled exploration. This guidance adaptively reduces per-step regret arising from estimation errors. We establish nearly minimax-optimal regret and sample complexity guarantees for a class of sufficiently expressive priors in infinite-horizon discounted MDPs. Empirically, we evaluate $\texttt{EUBRL}$ on tasks characterized by sparse rewards, long horizons, and stochasticity. Results demonstrate that $\texttt{EUBRL}$ achieves superior sample efficiency, scalability, and consistency.
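The abstract states the guarantees without the explicit rate. For orientation only, the standard minimax sample-complexity benchmark for tabular $\gamma$-discounted MDPs with $S$ states and $A$ actions (which "nearly minimax-optimal" results match up to logarithmic factors) takes the form:

```latex
% Generic tabular benchmark, not the paper's stated bound for EUBRL:
% number of samples needed to find an eps-optimal policy, up to logs.
\tilde{\Theta}\!\left( \frac{S A}{(1-\gamma)^{3}\,\varepsilon^{2}} \right)
```

The paper's exact bound, and how it depends on the expressive prior class, may differ from this generic tabular form.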