🤖 AI Summary
To address the computational bottleneck in MuZero's Monte Carlo Tree Search (MCTS) caused by sequential tree expansion, this paper introduces TransZero, the first model-based reinforcement learning algorithm enabling fully parallel expansion of entire subtrees. Methodologically, TransZero replaces recursive state simulation with a Transformer-based multi-step latent state transition model and introduces a Mean-Variance Constrained (MVC) evaluator that decouples node value and policy estimation from sequential visit counting, thereby supporting unordered, parallel evaluation. Empirically, on MiniGrid and LunarLander benchmarks, TransZero achieves up to an 11× wall-clock speedup over MuZero while matching its sample efficiency. This establishes a scalable, parallel planning paradigm for MCTS within deep model-based frameworks.
📝 Abstract
We present TransZero, a model-based reinforcement learning algorithm that removes the sequential bottleneck in Monte Carlo Tree Search (MCTS). Unlike MuZero, which constructs its search tree step by step using a recurrent dynamics model, TransZero employs a transformer-based network to generate multiple latent future states simultaneously. Combined with a Mean-Variance Constrained (MVC) evaluator that eliminates dependence on inherently sequential visitation counts, our approach enables the parallel expansion of entire subtrees during planning. Experiments in MiniGrid and LunarLander show that TransZero achieves up to an eleven-fold speedup in wall-clock time compared to MuZero while maintaining sample efficiency. These results demonstrate that parallel tree construction can substantially accelerate model-based reinforcement learning, bringing real-time decision-making in complex environments closer to practice. The code is publicly available on GitHub.
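To make the parallel-expansion idea concrete, here is a minimal PyTorch sketch. It is not the authors' released implementation; the class name `ParallelDynamics`, the layer sizes, and the depth-2 action enumeration are illustrative assumptions. The point it demonstrates is that a causal transformer over [root latent, a₁, …, a_K] can emit the latent state after every action prefix in one batched forward pass, so all nodes of a candidate subtree, together with their value and policy estimates, are produced without a sequential simulation loop.

```python
# Illustrative sketch (not the paper's code): expand an entire depth-2 subtree
# in a single batched forward pass instead of recurrent, node-by-node unrolling.
import itertools
import torch
import torch.nn as nn

LATENT, N_ACTIONS, DEPTH = 64, 4, 2  # hypothetical sizes for the sketch


class ParallelDynamics(nn.Module):
    """Causal transformer mapping a root latent plus an action sequence to the
    latent state, value, and policy logits after each action, all steps at once."""

    def __init__(self) -> None:
        super().__init__()
        self.action_emb = nn.Embedding(N_ACTIONS, LATENT)
        layer = nn.TransformerEncoderLayer(
            d_model=LATENT, nhead=4, dim_feedforward=128, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.value_head = nn.Linear(LATENT, 1)
        self.policy_head = nn.Linear(LATENT, N_ACTIONS)

    def forward(self, root_latent: torch.Tensor, actions: torch.Tensor):
        # root_latent: (B, LATENT); actions: (B, DEPTH) integer action ids
        tokens = torch.cat([root_latent.unsqueeze(1), self.action_emb(actions)], dim=1)
        causal = nn.Transformer.generate_square_subsequent_mask(tokens.size(1))
        hidden = self.encoder(tokens, mask=causal)  # (B, DEPTH + 1, LATENT)
        latents = hidden[:, 1:]                     # latent after each action prefix
        return latents, self.value_head(latents), self.policy_head(latents)


# Enumerate every depth-2 action sequence from the root and expand the whole
# subtree with one call: 4^2 = 16 leaves and their intermediate nodes at once.
root = torch.randn(1, LATENT)
seqs = torch.tensor(list(itertools.product(range(N_ACTIONS), repeat=DEPTH)))
latents, values, logits = ParallelDynamics()(root.expand(len(seqs), -1), seqs)
print(latents.shape)  # (16, 2, 64): 16 sequences x 2 depths from a single pass
```

Because every expanded node is scored in the same batch rather than in visit order, a count-free selection rule such as the paper's MVC evaluator can then rank them without the sequential bookkeeping that standard UCT-style MCTS requires.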