Foundation Models as World Models: A Foundational Study in Text-Based GridWorlds

📅 2025-09-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
We address the low sample efficiency of reinforcement learning (RL) in settings where interaction with the environment is expensive. We propose a dual-path framework, "foundation world model + foundation agent", that integrates large language models (LLMs) into RL. Methodologically: (1) we construct an LLM-driven foundation world model for high-fidelity environment simulation and forward prediction; (2) we design an LLM-augmented foundation agent that leverages the reasoning capabilities of LLMs to generate policies directly. In text-based GridWorld experiments, the two components together substantially improve sample efficiency: the foundation world model reduces trial-and-error overhead, while the foundation agent demonstrates strong generalization and policy quality in partially observable and stochastic environments. Our core contribution is the first systematic empirical validation that LLMs can serve as plug-and-play, high-efficiency components for both world modeling and policy generation, establishing a novel paradigm and an empirical benchmark for LLM-RL integration.
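As a rough illustration of the first path, the sketch below shows how an LLM could be prompted to act as a foundation world model for a text-based grid world, predicting the next state and reward from a textual state and action. The prompt wording, the reply format, and the `query_llm` helper are assumptions made for illustration, not the paper's implementation.

```python
# Minimal sketch of an LLM-backed foundation world model (FWM) for a
# text-based grid world. Prompt format and `query_llm` are illustrative
# assumptions, not the paper's exact setup.

def query_llm(prompt: str) -> str:
    """Placeholder for a call to any chat/completion LLM API."""
    raise NotImplementedError("plug in your preferred LLM client here")

FWM_PROMPT = """You simulate a grid world described in text.
Rules: the agent moves on a 2D grid; hitting a wall leaves it in place;
reaching the goal gives reward 1, every other step gives reward 0.

Current state:
{state}

Action: {action}

Reply with exactly two lines:
NEXT_STATE: <textual description of the next state>
REWARD: <number>"""

def fwm_step(state: str, action: str) -> tuple[str, float]:
    """Predict (next_state, reward) with the LLM instead of the real env."""
    reply = query_llm(FWM_PROMPT.format(state=state, action=action))
    # Naive parsing of the two expected reply lines; a real system would
    # validate the reply and retry on malformed output.
    fields = dict(line.split(":", 1) for line in reply.splitlines() if ":" in line)
    return fields["NEXT_STATE"].strip(), float(fields["REWARD"].strip())
```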

📝 Abstract
While reinforcement learning from scratch has shown impressive results in solving sequential decision-making tasks with efficient simulators, real-world applications with expensive interactions require more sample-efficient agents. Foundation models (FMs) are natural candidates to improve sample efficiency as they possess broad knowledge and reasoning capabilities, but it is yet unclear how to effectively integrate them into the reinforcement learning framework. In this paper, we anticipate and, most importantly, evaluate two promising strategies. First, we consider the use of foundation world models (FWMs) that exploit the prior knowledge of FMs to enable training and evaluating agents with simulated interactions. Second, we consider the use of foundation agents (FAs) that exploit the reasoning capabilities of FMs for decision-making. We evaluate both approaches empirically in a family of grid-world environments that are suitable for the current generation of large language models (LLMs). Our results suggest that improvements in LLMs already translate into better FWMs and FAs; that FAs based on current LLMs can already provide excellent policies for sufficiently simple environments; and that the coupling of FWMs and reinforcement learning agents is highly promising for more complex settings with partial observability and stochastic elements.
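The second strategy, using the foundation model directly as the decision-maker, can be sketched in the same style: the LLM receives the textual observation and returns one of the legal actions. Again, the prompt and the parsing are illustrative assumptions; the sketch reuses the `query_llm` placeholder from the world-model example above.

```python
# Minimal sketch of a foundation agent (FA): the LLM is prompted with the
# textual observation and asked to pick one of the legal actions directly.
# Assumes the `query_llm` placeholder defined in the world-model sketch.

FA_PROMPT = """You control an agent in a text-based grid world.
Legal actions: up, down, left, right.

Observation:
{observation}

Think step by step about which move makes progress toward the goal,
then answer with a single line of the form:
ACTION: <up|down|left|right>"""

def fa_act(observation: str) -> str:
    """Ask the LLM for the next action given the current observation."""
    reply = query_llm(FA_PROMPT.format(observation=observation))
    # Scan from the end so the final ACTION line wins if the model also
    # emitted intermediate reasoning.
    for line in reversed(reply.splitlines()):
        if line.startswith("ACTION:"):
            action = line.split(":", 1)[1].strip().lower()
            if action in {"up", "down", "left", "right"}:
                return action
    return "up"  # conservative fallback if the reply cannot be parsed
```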
Problem

Research questions and friction points this paper is trying to address.

Improving sample efficiency in reinforcement learning with expensive interactions
Integrating foundation models' knowledge and reasoning into RL frameworks
Evaluating foundation world models and agents in grid-world environments
Innovation

Methods, ideas, or system contributions that make the work stand out.

Foundation world models simulate interactions
Foundation agents enable reasoning-based decision-making
Coupling foundation world models with reinforcement learning agents (see the sketch below)
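A minimal sketch of this coupling, assuming the `fwm_step` function from the world-model example above: a tabular Q-learning agent is trained entirely on transitions simulated by the foundation world model, so the costly real environment is only needed for final evaluation. The hyperparameters and the action set are illustrative, not taken from the paper.

```python
# Sketch: train a tabular Q-learning agent on FWM-simulated transitions.
import random
from collections import defaultdict

ACTIONS = ["up", "down", "left", "right"]

def train_in_fwm(initial_state: str, episodes: int = 200, horizon: int = 30,
                 alpha: float = 0.1, gamma: float = 0.99, epsilon: float = 0.1):
    """Q-learning where every step is simulated by fwm_step, not the real env."""
    q = defaultdict(lambda: {a: 0.0 for a in ACTIONS})
    for _ in range(episodes):
        state = initial_state
        for _ in range(horizon):
            # Epsilon-greedy action selection over the current Q-values.
            if random.random() < epsilon:
                action = random.choice(ACTIONS)
            else:
                action = max(q[state], key=q[state].get)
            next_state, reward = fwm_step(state, action)  # simulated, not real
            best_next = max(q[next_state].values())
            q[state][action] += alpha * (reward + gamma * best_next - q[state][action])
            state = next_state
    return q
```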
Remo Sasso
PhD student, Queen Mary University of London
Artificial Intelligence, Machine Learning, Reinforcement Learning
Michelangelo Conserva
School of Electronic Engineering and Computer Science, Queen Mary University of London, United Kingdom
Dominik Jeurissen
School of Electronic Engineering and Computer Science, Queen Mary University of London, United Kingdom
Paulo Rauber
School of Electronic Engineering and Computer Science, Queen Mary University of London, United Kingdom