Internalizing World Models via Self-Play Finetuning for Agentic RL

📅 2025-10-16
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
Large language models (LLMs) exhibit poor generalization and fragile exploration in out-of-distribution (OOD) dynamic environments, primarily due to their inability to effectively couple internal knowledge with environmental dynamics. To address this, we propose a *decoupled world model*, which separates environment modeling into two components: state representation learning and state-transition dynamics modeling. We further introduce a self-play supervised fine-tuning (SFT) cold-start mechanism to internalize the world model, that is, to ground its predictions in environment interactions prior to reinforcement learning (RL). This pre-training strategy significantly accelerates subsequent RL convergence and improves policy robustness. Empirical evaluation on Sokoban, FrozenLake, and Sudoku with Qwen2.5-1.5B-Instruct shows substantial gains: the Sokoban success rate rises from 25.6% to 59.8%, and the FrozenLake average reward increases from 22.1% to 70.9%. Our core contribution is a novel world-model internalization paradigm driven jointly by decoupled modeling and self-play SFT.
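The summary's two components, state representation and transition dynamics, map naturally onto two kinds of supervised targets collected by self-play. The sketch below is an illustrative reconstruction, not the paper's code: the `ToyCorridor` environment, the prompt templates, and the record format are all assumptions. It shows how self-play rollouts could be converted into SFT examples for (a) restating the current state and (b) predicting the next state given an action.

```python
import random

class ToyCorridor:
    """Minimal stand-in environment (illustrative, not from the paper):
    the agent sits on a line of cells and can move left or right;
    reaching the rightmost cell ends the episode."""
    def __init__(self, length=5):
        self.length = length
        self.pos = 0

    def reset(self):
        self.pos = 0
        return self.render()

    def legal_actions(self):
        return ["left", "right"]

    def step(self, action):
        delta = 1 if action == "right" else -1
        self.pos = max(0, min(self.length - 1, self.pos + delta))
        done = self.pos == self.length - 1
        return self.render(), (1.0 if done else 0.0), done

    def render(self):
        cells = ["_"] * self.length
        cells[self.pos] = "A"
        return "".join(cells)

def collect_world_model_sft_data(env, num_episodes=10, max_steps=20, seed=0):
    """Self-play rollouts -> SFT examples for (a) state representation
    and (b) transition prediction. Prompt templates are assumptions."""
    rng = random.Random(seed)
    examples = []
    for _ in range(num_episodes):
        state = env.reset()
        for _ in range(max_steps):
            action = rng.choice(env.legal_actions())
            next_state, _, done = env.step(action)
            # (a) state-representation target: restate the state itself
            examples.append({
                "prompt": f"Describe the current state:\n{state}",
                "target": state,
            })
            # (b) transition-dynamics target: predict the next state
            examples.append({
                "prompt": f"State:\n{state}\nAction: {action}\nNext state?",
                "target": next_state,
            })
            state = next_state
            if done:
                break
    return examples

data = collect_world_model_sft_data(ToyCorridor())
print(len(data), "SFT examples collected")
```

Fine-tuning on both target types before RL is what the summary calls the cold start: the policy already predicts environment dynamics before any reward signal arrives.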

πŸ“ Abstract
Large Language Models (LLMs) as agents often struggle in out-of-distribution (OOD) scenarios. Real-world environments are complex and dynamic, governed by task-specific rules and stochasticity, which makes it difficult for LLMs to ground their internal knowledge in those dynamics. Under such OOD conditions, vanilla RL training often fails to scale: we observe that Pass@k, the probability that at least one of k sampled trajectories succeeds, drops markedly across training steps, indicating brittle exploration and limited generalization. Inspired by model-based reinforcement learning, we hypothesize that equipping LLM agents with an internal world model can better align reasoning with environmental dynamics and improve decision-making. We show how to encode this world model by decomposing it into two components: state representation and transition modeling. Building on this, we introduce SPA, a simple reinforcement learning framework that cold-starts the policy via a Self-Play supervised finetuning (SFT) stage to learn the world model by interacting with the environment, then uses it to simulate future states prior to policy optimization. This simple initialization outperforms the online world-modeling baseline and greatly boosts RL-based agent training performance. Experiments across diverse environments such as Sokoban, FrozenLake, and Sudoku show that our approach significantly improves performance. For example, SPA boosts the Sokoban success rate from 25.6% to 59.8% and raises the FrozenLake score from 22.1% to 70.9% for the Qwen2.5-1.5B-Instruct model.
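The abstract diagnoses brittle exploration by tracking Pass@k, the probability that at least one of k sampled trajectories succeeds. A common unbiased estimator computes this from n rollouts of which c succeeded; this is the standard combinatorial formulation popularized in code-generation evaluation, not code from the paper:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased Pass@k estimator: the probability that at least one of
    k trajectories drawn without replacement from n attempts (c of which
    succeeded) is a success."""
    if n - c < k:
        return 1.0  # fewer failures than draws: some success is guaranteed
    return 1.0 - comb(n - c, k) / comb(n, k)

# e.g. 10 sampled trajectories, 3 successes, k = 5
print(round(pass_at_k(10, 3, 5), 4))  # -> 0.9167
```

A declining Pass@k across training steps, as the abstract reports for vanilla RL, means even the best of k samples increasingly fails, which is the exploration collapse SPA's cold start is meant to prevent.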
Problem

Research questions and friction points this paper is trying to address.

LLM agents struggle in out-of-distribution scenarios within dynamic environments
Vanilla RL training fails to scale: Pass@k drops across training, indicating brittle exploration and limited generalization
Open question: can an internal world model better align LLM reasoning with environmental dynamics?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Learns the world model via self-play supervised finetuning (SFT) as an RL cold start
Decomposes the world model into state-representation and transition-modeling components
Simulates future states with the learned model before policy optimization
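The three contributions above combine into a simple decision loop: query the internalized transition model for the imagined next state of each candidate action, then let the policy score actions with that extra context. A hypothetical one-step lookahead sketch follows; the `world_model` and `policy` callables are assumed interfaces, not the paper's API:

```python
def act_with_lookahead(state, actions, world_model, policy):
    """Pick the action whose imagined outcome the policy scores highest.
    Hypothetical interfaces: world_model(state, action) -> predicted next
    state string; policy(state, action, imagined) -> scalar score."""
    scored = []
    for action in actions:
        imagined = world_model(state, action)  # simulate before acting
        scored.append((policy(state, action, imagined), action))
    return max(scored)[1]

# Toy usage with stub callables (illustrative only):
wm = lambda s, a: {"right": "_A_", "left": "A__"}[a]  # canned predictions
pol = lambda s, a, imagined: imagined.index("A")      # prefer moving right
print(act_with_lookahead("A__", ["left", "right"], wm, pol))  # -> right
```

In SPA the self-play SFT stage is what makes the simulated next states trustworthy enough for the subsequent RL stage to exploit.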