🤖 AI Summary
To address the tendency of large language models (LLMs) to generate illegal moves and exhibit shallow strategic reasoning in board and card games, this paper proposes the "Code World Model" (CWM) framework. It uses an LLM to compile natural-language game rules and trajectories into executable Python code, enabling precise state transitions, legal-action enumeration, and terminal-state detection. The framework integrates Monte Carlo Tree Search (MCTS), additionally generating heuristic value functions and hidden-state inference functions so that both perfect- and imperfect-information games are handled uniformly. Its core innovation lies in coupling the semantic understanding of LLMs with classical planning algorithms, yielding a general-purpose game solver that is verifiable, executable, and generalizable. Evaluated on ten benchmark games, five of them partially observable and four newly designed for this paper, the framework matches or outperforms Gemini 2.5 Pro on nine.
📝 Abstract
The reasoning abilities of Large Language Models (LLMs) are increasingly being applied to classical board and card games, but the dominant approach -- prompting for direct move generation -- has significant drawbacks. It relies on the model's implicit, fragile pattern-matching capabilities, leading to frequent illegal moves and strategically shallow play. Here we introduce an alternative approach: we use the LLM to translate natural-language rules and game trajectories into a formal, executable Code World Model (CWM) represented as Python code. This generated model -- comprising functions for state transitions, legal-move enumeration, and termination checks -- serves as a verifiable simulation engine for high-performance planning algorithms such as Monte Carlo tree search (MCTS). In addition, we prompt the LLM to generate heuristic value functions (to make MCTS more efficient) and inference functions (to estimate hidden states in imperfect-information games). Our method offers three distinct advantages over directly using the LLM as a policy: (1) Verifiability: the generated CWM serves as a formal specification of the game's rules, allowing planners to algorithmically enumerate valid actions and avoid illegal moves, contingent on the correctness of the synthesized model; (2) Strategic depth: we combine the LLM's semantic understanding with the deep search power of classical planners; and (3) Generalization: we direct the LLM to focus on the meta-task of data-to-code translation, enabling it to adapt to new games more easily. We evaluate our agent on 10 different games, 4 of which are novel and created for this paper. 5 of the games are fully observed (perfect information) and 5 are partially observed (imperfect information). We find that our method outperforms or matches Gemini 2.5 Pro in 9 of the 10 games.
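To make the abstract's architecture concrete, here is a hypothetical sketch of the kind of world-model interface described above: hand-written functions for single-pile Nim (take 1-3 stones; whoever takes the last stone wins) standing in for the LLM-generated Python, driven by a simple random-rollout planner standing in for full MCTS. All function names and the rollout planner are illustrative assumptions, not the paper's actual generated code.

```python
import random

# --- World-model functions (the kind of code the LLM would synthesize) ---

def initial_state():
    # (stones remaining, player to move)
    return (10, 0)

def legal_actions(state):
    stones, _ = state
    return [n for n in (1, 2, 3) if n <= stones]

def next_state(state, action):
    stones, player = state
    return (stones - action, 1 - player)

def is_terminal(state):
    return state[0] == 0

def winner(state):
    # The player who took the last stone (i.e., NOT the player to move) wins.
    return 1 - state[1]

# --- A minimal planner driving the model (stand-in for MCTS) ---

def rollout_value(state, player, n_rollouts=200, rng=random.Random(0)):
    """Estimate player's win probability via uniform random playouts."""
    wins = 0
    for _ in range(n_rollouts):
        s = state
        while not is_terminal(s):
            s = next_state(s, rng.choice(legal_actions(s)))
        wins += winner(s) == player
    return wins / n_rollouts

def plan(state):
    """Pick the legal action with the best estimated rollout value."""
    player = state[1]
    return max(legal_actions(state),
               key=lambda a: rollout_value(next_state(state, a), player))

print(plan((3, 0)))  # takes all 3 stones and wins immediately
```

Because the planner only ever samples from `legal_actions`, illegal moves are impossible by construction (assuming the synthesized model is correct), which is the verifiability property claimed above; swapping the rollout planner for MCTS with an LLM-generated value function changes only the search, not the model interface.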