Reproducing AlphaZero on Tablut: Self-Play RL for an Asymmetric Board Game

πŸ“… 2026-04-07
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
This work addresses the challenge of applying AlphaZero to the asymmetric board game Tablut, where a single policy and value head struggles to reconcile conflicting evaluation objectives for the attacking and defending sides, leading to inefficient training. To overcome this, the authors propose a role-separated dual-head architecture that employs distinct policy and value heads for each side while sharing a common residual backbone. The approach is further stabilized with C4 data augmentation, an enlarged replay buffer, and periodic self-play against historical checkpoints to mitigate the catastrophic forgetting inherent in asymmetric learning. After 100 rounds of self-play training, the resulting agent achieves a BayesElo rating of 1235, exhibits significantly reduced policy entropy, and finishes games with fewer pieces remaining on the board, indicating more focused and decisive play on both the offensive and defensive sides.
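The periodic self-play against historical models mentioned above can be sketched as a simple opponent-selection rule. The function below is an illustrative reconstruction, not the authors' code: the paper states that 25 percent of training games are played against randomly sampled past checkpoints, and the names `pick_opponent` and `past_fraction` are assumptions for the sketch.

```python
import random

def pick_opponent(checkpoints, current, past_fraction=0.25, rng=random):
    """Choose a self-play opponent for one training game.

    With probability `past_fraction`, return a uniformly sampled
    historical checkpoint (mitigating catastrophic forgetting between
    the attacker and defender roles); otherwise the current model
    plays against itself. Names are illustrative, not from the paper.
    """
    if checkpoints and rng.random() < past_fraction:
        return rng.choice(checkpoints)
    return current
```

Sampling past opponents uniformly (rather than always playing the latest model) keeps the replay buffer exposed to strategies the current network might otherwise unlearn for one of the two roles.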
πŸ“ Abstract
This work investigates the adaptation of the AlphaZero reinforcement learning algorithm to Tablut, an asymmetric historical board game featuring unequal piece counts and distinct player objectives (king capture versus king escape). While the original AlphaZero architecture successfully leverages a single policy and value head for symmetric games, applying it to asymmetric environments forces the network to learn two conflicting evaluation functions, which can hinder learning efficiency and performance. To address this, the core architecture is modified to use separate policy and value heads for each player role, while maintaining a shared residual trunk to learn common board features. The asymmetric structure nevertheless introduced instabilities during training, notably catastrophic forgetting between the attacker and defender roles. These issues were mitigated by applying C4 data augmentation, increasing the replay buffer size, and having the model play 25 percent of training games against randomly sampled past checkpoints. Over 100 self-play iterations, the modified model demonstrated steady improvement, achieving a BayesElo rating of 1235 relative to a randomly initialized baseline. Training metrics also showed a significant decrease in policy entropy and in the average number of remaining pieces, reflecting increasingly focused and decisive play. Ultimately, the experiments confirm that AlphaZero's self-play framework can transfer to highly asymmetric games, provided that distinct policy/value heads and robust stabilization techniques are employed.
Problem

Research questions and friction points this paper is trying to address.

AlphaZero
asymmetric board game
Tablut
reinforcement learning
self-play
Innovation

Methods, ideas, or system contributions that make the work stand out.

asymmetric games
AlphaZero
separate policy/value heads
self-play reinforcement learning
training stabilization