AI Summary
Large language models (LLMs) exhibit poor generalization and fragile exploration in out-of-distribution (OOD) dynamic environments, primarily because they fail to couple their internal knowledge with environmental dynamics. To address this, we propose a *decoupled world model* that separates environment modeling into two components: state representation learning and state-transition dynamics modeling. We further introduce a self-play supervised fine-tuning (SFT) cold-start mechanism that internalizes the world model, i.e., grounds its predictions in environment interactions prior to reinforcement learning (RL). This pre-training strategy significantly accelerates subsequent RL convergence and improves policy robustness. Empirical evaluation on Sokoban, FrozenLake, and Sudoku shows substantial gains: the Sokoban success rate rises from 25.6% to 59.8%, and the FrozenLake average reward increases from 22.1% to 70.9%. Our core contribution is a novel world-model internalization paradigm driven jointly by decoupled modeling and self-play SFT.
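The two-component decomposition above can be illustrated with a minimal sketch. All class and method names here (`StateEncoder`, `TransitionModel`, `DecoupledWorldModel`) are hypothetical, and a toy deterministic FrozenLake-style grid stands in for the LLM's learned transition predictions; it only shows how state representation and transition dynamics are kept as separate, composable pieces.

```python
class StateEncoder:
    """State-representation component: maps a raw observation
    (here an (row, col) position) to a textual state description."""
    def encode(self, pos, size):
        return f"agent at {pos} on a {size}x{size} grid"

class TransitionModel:
    """State-transition component: predicts the next state from
    (state, action). A toy deterministic grid rule stands in for
    the learned dynamics."""
    MOVES = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}

    def predict(self, pos, action, size):
        dr, dc = self.MOVES[action]
        r = min(max(pos[0] + dr, 0), size - 1)  # clamp to grid bounds
        c = min(max(pos[1] + dc, 0), size - 1)
        return (r, c)

class DecoupledWorldModel:
    """Couples the two components: roll the dynamics forward to
    imagine future states, then encode the result for reasoning."""
    def __init__(self):
        self.encoder = StateEncoder()
        self.dynamics = TransitionModel()

    def simulate(self, pos, actions, size=4):
        for a in actions:  # imagined rollout before committing to a plan
            pos = self.dynamics.predict(pos, a, size)
        return self.encoder.encode(pos, size)
```

Keeping the two components behind separate interfaces is what lets each be supervised independently, which is the point of the decoupled design.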
Abstract
Large Language Models (LLMs) acting as agents often struggle in out-of-distribution (OOD) scenarios. Real-world environments are complex and dynamic, governed by task-specific rules and stochasticity, which makes it difficult for LLMs to ground their internal knowledge in those dynamics. Under such OOD conditions, vanilla RL training often fails to scale: we observe that Pass@k, the probability that at least one of k sampled trajectories succeeds, drops markedly over training steps, indicating brittle exploration and limited generalization. Inspired by model-based reinforcement learning, we hypothesize that equipping LLM agents with an internal world model can better align reasoning with environmental dynamics and improve decision-making. We show how to encode this world model by decomposing it into two components: state representation and transition modeling. Building on this, we introduce SPA, a simple reinforcement learning framework that cold-starts the policy with a Self-Play supervised fine-tuning (SFT) stage: the agent learns the world model by interacting with the environment, then uses it to simulate future states before policy optimization. This simple initialization outperforms the online world-modeling baseline and substantially improves RL-based agent training. Experiments across diverse environments such as Sokoban, FrozenLake, and Sudoku show that our approach significantly improves performance: SPA boosts the Sokoban success rate from 25.6% to 59.8% and raises the FrozenLake score from 22.1% to 70.9% for the Qwen2.5-1.5B-Instruct model.
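For concreteness, the Pass@k metric referenced above is commonly computed with the standard unbiased estimator 1 - C(n-c, k)/C(n, k), where n trajectories are sampled and c of them succeed. The sketch below implements that common formulation; it is not code from the paper.

```python
import math

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased Pass@k estimator: the probability that at least one of k
    trajectories drawn without replacement from n samples (of which c
    succeed) is a success. Equals 1 - C(n-c, k) / C(n, k)."""
    if n - c < k:
        return 1.0  # fewer than k failures exist, so a success is guaranteed
    # numerically stable product form of 1 - C(n-c, k) / C(n, k)
    return 1.0 - math.prod(1.0 - k / i for i in range(n - c + 1, n + 1))
```

For example, with 2 sampled trajectories of which 1 succeeds, `pass_at_k(2, 1, 1)` gives 0.5, and `pass_at_k(10, 0, 5)` gives 0.0. A falling Pass@k across training steps is the symptom of collapsing exploration described above.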