🤖 AI Summary
This paper addresses the lack of theoretical guarantees for Monte Carlo Tree Search (MCTS) under stochastic state transitions, that is, in random environments. The authors propose an extension of MCTS tailored to the stochastic multi-armed bandit framework, building on the modeling approach of Shah et al. (2020) and combining stochastic-process analysis with Upper Confidence Bound (UCB) principles. Crucially, they establish, for the first time, a tight polynomial regret upper bound for the UCB-based node-selection rule in stochastic MCTS. This result removes a fundamental limitation of classical MCTS analyses, which assume deterministic transitions. As a consequence, the analysis provides stronger robustness and convergence guarantees for MCTS in real-world applications involving randomness and partial observability, such as autonomous decision-making and financial modeling.
📝 Abstract
Monte Carlo Tree Search (MCTS) has proven effective for decision-making problems in perfect-information settings, but its application to stochastic and imperfect-information domains remains limited. This paper extends the theoretical framework of MCTS to stochastic domains by addressing non-deterministic state transitions, where actions lead to probabilistic outcomes. Specifically, building on the work of Shah et al. (2020), we derive polynomial regret concentration bounds for the Upper Confidence Bound algorithm in multi-armed bandit problems with stochastic transitions, offering improved theoretical guarantees. Our primary contribution is a proof that these bounds carry over to non-deterministic environments, ensuring robust performance in stochastic settings. This broadens the applicability of MCTS to real-world decision-making problems with probabilistic outcomes, such as autonomous systems and financial decision-making.
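To make the object of the regret analysis concrete, here is a minimal sketch of the standard UCB1 node-selection step that such bounds concern, applied to a toy two-armed bandit with stochastic (Bernoulli) rewards. This is an illustration only, not the paper's implementation: the function name `ucb_select`, the dictionary layout of a node's children, and the arm means are assumptions made for this example.

```python
import math
import random

def ucb_select(children, exploration=math.sqrt(2)):
    """Return the child with the largest UCB1 index.

    Each child is a dict with 'visits' (int) and 'total_reward' (float).
    Unvisited children are selected first, as is conventional for UCB1.
    """
    total = sum(c["visits"] for c in children)
    for c in children:
        if c["visits"] == 0:
            return c  # expand every action once before using the index

    def index(c):
        mean = c["total_reward"] / c["visits"]           # exploitation term
        bonus = exploration * math.sqrt(math.log(total) / c["visits"])  # exploration term
        return mean + bonus

    return max(children, key=index)

# Toy stochastic bandit: the means 0.8 and 0.2 are made up for illustration.
random.seed(0)
arms = [
    {"name": "good", "mean": 0.8, "visits": 0, "total_reward": 0.0},
    {"name": "bad",  "mean": 0.2, "visits": 0, "total_reward": 0.0},
]
for _ in range(2000):
    arm = ucb_select(arms)
    reward = 1.0 if random.random() < arm["mean"] else 0.0  # stochastic outcome
    arm["visits"] += 1
    arm["total_reward"] += reward
```

Under the logarithmic exploration bonus, pulls of the suboptimal arm grow only slowly with the horizon, so after 2,000 rounds almost all visits concentrate on the better arm; regret bounds of the kind the paper studies quantify exactly this concentration.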