Epistemic Monte Carlo Tree Search

📅 2022-10-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
AlphaZero/MuZero (A/MZ)-style algorithms suffer from limited exploration efficiency in sparse-reward tasks: their Monte Carlo Tree Search (MCTS) does not model the epistemic uncertainty, arising from data scarcity, in the learned policy and value functions, despite its critical role in guiding deep exploration. Method: the first MCTS framework that theoretically formalizes and explicitly propagates epistemic uncertainty during search. The approach couples ensemble-based Bayesian approximations with the A/MZ architecture, enabling search-driven uncertainty estimation through an uncertainty-aware search mechanism. Results: on the Subleq assembly-programming task and the Deep Sea benchmark, the method significantly improves sample efficiency, substantially reducing training steps on Subleq and solving high-difficulty Deep Sea variants on which standard A/MZ fails entirely.
📝 Abstract
The AlphaZero/MuZero (A/MZ) family of algorithms has achieved remarkable success across various challenging domains by integrating Monte Carlo Tree Search (MCTS) with learned models. Learned models introduce epistemic uncertainty, which is caused by learning from limited data and is useful for exploration in sparse-reward environments. MCTS, however, does not account for the propagation of this uncertainty. To address this, we introduce Epistemic MCTS (EMCTS): a theoretically motivated approach to account for the epistemic uncertainty in search and harness the search for deep exploration. In the challenging sparse-reward task of writing code in the Assembly language Subleq, AZ paired with our method achieves significantly higher sample efficiency than baseline AZ. Search with EMCTS solves variations of the commonly used hard-exploration benchmark Deep Sea, which baseline A/MZ are practically unable to solve, much faster than an otherwise equivalent method that does not use search for uncertainty estimation, demonstrating significant benefits from search for epistemic uncertainty estimation.
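To make the idea concrete, below is a minimal sketch of one way an ensemble-derived epistemic bonus could enter MCTS action selection. This is an illustrative assumption, not the paper's formulation: the function names, the `beta` weight, and the additive UCB-style bonus are all hypothetical stand-ins for EMCTS's actual uncertainty propagation.

```python
import math
import statistics

def ensemble_value(values):
    """Mean and epistemic spread (population std) of an ensemble's
    value predictions for a node; disagreement signals data scarcity."""
    return statistics.mean(values), statistics.pstdev(values)

def uncertainty_aware_score(q, visits, parent_visits, epistemic_std,
                            c_explore=1.5, beta=1.0):
    """UCB-style selection score with an added epistemic-uncertainty
    bonus, so under-learned branches are preferred during search."""
    exploration = c_explore * math.sqrt(math.log(parent_visits + 1)
                                        / (visits + 1))
    return q + exploration + beta * epistemic_std

# A node whose ensemble members disagree receives a higher score than
# an otherwise identical node with confident (agreeing) predictions.
_, std_uncertain = ensemble_value([0.3, 0.7, 0.5])
_, std_confident = ensemble_value([0.5, 0.5, 0.5])
assert uncertainty_aware_score(0.5, 10, 100, std_uncertain) > \
       uncertainty_aware_score(0.5, 10, 100, std_confident)
```

The design choice mirrored here is that epistemic uncertainty acts as an exploration bonus alongside the usual visit-count term; how that uncertainty is propagated up the tree is the paper's contribution and is not reproduced in this sketch.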
Problem

Research questions and friction points this paper is trying to address.

Address epistemic uncertainty in Monte Carlo Tree Search
Improve exploration in sparse reward environments
Enhance sample efficiency in hard-exploration tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces EMCTS, the first MCTS framework to formalize and propagate epistemic uncertainty during search
Couples ensemble-based uncertainty estimates with the A/MZ architecture for uncertainty-aware search
Achieves higher sample efficiency on Subleq and solves hard Deep Sea variants that baseline A/MZ cannot
Yaniv Oren
PhD candidate, Delft University of Technology
Reinforcement Learning
Villiam Vadocz
Delft University of Technology, 2628 CD Delft, The Netherlands
M. Spaan
Delft University of Technology, 2628 CD Delft, The Netherlands
Wendelin Böhmer
Delft University of Technology, 2628 CD Delft, The Netherlands