🤖 AI Summary
AlphaZero is not robust to test-time environmental shifts, which hinders real-world deployment. This work addresses adaptation to changed test environments through a lightweight architectural refinement: decoupling Monte Carlo Tree Search (MCTS) from the pretrained policy-value network, and adding environment-aware adaptive budget allocation and perturbation-robust regularization during planning. Crucially, the method requires no retraining, only inference-time adaptation. Experiments demonstrate substantial improvements in policy stability and task success rate across diverse distributional shifts, including reward function modifications and stochastic state-transition perturbations; gains are especially pronounced under constrained planning budgets. The implementation is open-sourced for reproducibility and practical extension.
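One of the summarized ideas, environment-aware adaptive budget allocation, could be sketched as follows. This is an illustrative reading, not the paper's actual implementation: the shift signal here is assumed to be the KL divergence between the network's prior policy and the visit distribution of a short probe search, and all function names (`adaptive_budget`, `kl_divergence`) and constants are hypothetical.

```python
import math

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) for two discrete distributions over the same action set."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def adaptive_budget(base_budget, divergence, scale=2.0, max_multiplier=4.0):
    """Scale the per-move simulation budget with an observed shift signal.

    `divergence` is any nonnegative measure of environment shift; a larger
    shift buys more search, capped at `max_multiplier` times the base budget.
    """
    multiplier = min(1.0 + scale * divergence, max_multiplier)
    return int(base_budget * multiplier)

# Example: the probe search disagrees with the network prior, so a shift
# is suspected and the budget is increased.
prior = [0.70, 0.20, 0.10]
visits = [0.30, 0.50, 0.20]
budget = adaptive_budget(base_budget=100, divergence=kl_divergence(visits, prior))
```

Under this sketch, an unshifted environment (zero divergence) keeps the base budget, so the adaptation adds no cost when the test environment matches training.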
📝 Abstract
The AlphaZero framework provides a standard way of combining Monte Carlo planning with the prior knowledge of a pretrained policy-value neural network. AlphaZero typically assumes that the environment the neural network was trained on will not change at test time, which constrains its applicability. In this paper, we analyze the problem of deploying AlphaZero agents in potentially changed test environments and demonstrate how a combination of simple modifications to the standard framework can significantly boost performance, even with a low planning budget. The code is publicly available on GitHub.
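The combination of Monte Carlo planning with a policy-value prior that the abstract refers to is the standard PUCT rule of AlphaZero-style search: action selection adds a prior-weighted exploration bonus to the mean search value. A minimal sketch, assuming an illustrative data layout in which `stats` maps each action to its accumulated value and visit count:

```python
import math

def puct_score(total_value, visits, parent_visits, prior, c_puct=1.5):
    """PUCT score: mean value Q plus a prior-weighted exploration bonus U."""
    q = total_value / visits if visits > 0 else 0.0
    u = c_puct * prior * math.sqrt(parent_visits) / (1 + visits)
    return q + u

def select_action(stats, priors, c_puct=1.5):
    """Pick the child maximising the PUCT score.

    `stats`: action -> (total_value, visit_count) from the search so far;
    `priors`: action -> probability from the policy-value network.
    """
    parent_visits = sum(n for _, n in stats.values())
    return max(stats, key=lambda a: puct_score(*stats[a], parent_visits,
                                               priors[a], c_puct))

# The unvisited action "c" wins here: its prior-driven bonus outweighs
# the mean value of the well-explored alternatives.
stats = {"a": (3.0, 10), "b": (1.0, 2), "c": (0.0, 0)}
priors = {"a": 0.5, "b": 0.3, "c": 0.2}
best = select_action(stats, priors)
```

Because the prior enters only through the exploration term, a prior trained on an unchanged environment steers but does not dictate the search, which is what makes inference-time adaptations of the kind studied in the paper possible at all.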