🤖 AI Summary
Large language models (LLMs) exhibit limited high-level strategic planning in complex multi-agent games, while conventional reinforcement learning (RL) approaches require extensive training data.
Method: This paper proposes a bi-level tree-search-driven LLM self-play learning paradigm that integrates Monte Carlo Tree Search (MCTS) with LLM-based reflective reasoning: the strategic level handles state evaluation and policy planning, while the execution level handles action generation and dialogue synthesis. Strategic skills are acquired end-to-end through self-play and reinforcement feedback.
Contribution/Results: To our knowledge, this is the first framework to bridge LLMs and symbolic decision-making methods. It significantly improves win rates and strategic robustness on benchmark games, including GOPS and *The Resistance: Avalon*, outperforming both standard RL baselines and state-of-the-art LLM skill-learning methods.
📝 Abstract
In this paper, we propose a new method, STRATEGIST, that utilizes LLMs to acquire new skills for playing multi-agent games through a self-improvement process. Our method gathers quality feedback through self-play simulations with Monte Carlo tree search and LLM-based reflection, which is then used to learn high-level strategic skills, such as state-evaluation functions, that guide low-level execution. We showcase how our method can be used for both action planning and dialogue generation in the context of games, achieving good performance on both tasks. Specifically, we demonstrate that our method can train agents that outperform both traditional reinforcement learning-based approaches and other LLM-based skill learning approaches in games including the Game of Pure Strategy (GOPS) and The Resistance: Avalon. STRATEGIST helps bridge the gap between foundation models and symbolic decision-making methods through its bi-level approach, leading to more robust decision-making.
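The core loop described in the abstract — propose candidate strategic skills, evaluate them via self-play simulation, and keep the strongest — can be illustrated with a minimal sketch. The code below is a hypothetical toy, not the paper's implementation: it uses a simplified GOPS (bid cards on revealed prizes), plain heuristic bidding policies in place of LLM-generated value functions, and candidate selection by self-play win rate in place of MCTS-plus-reflection. The names (`play_gops`, `match_policy`, etc.) are illustrative assumptions.

```python
import random

N = 5  # cards 1..N in a simplified Game of Pure Strategy (GOPS)

def play_gops(policy_a, policy_b, rng):
    """Play one GOPS game; return +1 if A scores more, -1 if B, 0 on a tie."""
    hand_a, hand_b = list(range(1, N + 1)), list(range(1, N + 1))
    prizes = list(range(1, N + 1))
    rng.shuffle(prizes)
    score_a = score_b = 0
    for prize in prizes:
        bid_a = policy_a(prize, hand_a, rng)
        bid_b = policy_b(prize, hand_b, rng)
        hand_a.remove(bid_a)
        hand_b.remove(bid_b)
        if bid_a > bid_b:
            score_a += prize
        elif bid_b > bid_a:
            score_b += prize
    return (score_a > score_b) - (score_a < score_b)

def random_policy(prize, hand, rng):
    # Baseline skill: bid an arbitrary remaining card.
    return rng.choice(hand)

def match_policy(prize, hand, rng):
    # Candidate skill: bid the remaining card closest to the prize's value,
    # so high cards are spent on high prizes.
    return min(hand, key=lambda c: abs(c - prize))

def win_rate(policy, opponent, games=2000, seed=0):
    """Estimate the policy's win rate over many simulated self-play games."""
    rng = random.Random(seed)
    wins = sum(play_gops(policy, opponent, rng) > 0 for _ in range(games))
    return wins / games

# Self-improvement step: keep whichever candidate skill performs best in
# simulation (in STRATEGIST, LLM reflection would propose the candidates).
candidates = [random_policy, match_policy]
best = max(candidates, key=lambda p: win_rate(p, random_policy))
```

In this toy setup the prize-matching heuristic reliably beats the random baseline, so the selection step retains it; the paper's method replaces both the fixed candidate set and the scalar win-rate signal with LLM-generated skill revisions guided by MCTS feedback.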