🤖 AI Summary
This work addresses the low sample efficiency of reinforcement learning (RL) in settings where environment interactions are expensive. It evaluates two strategies for integrating foundation models (FMs), specifically large language models (LLMs), into the RL framework: (1) foundation world models (FWMs), which exploit the prior knowledge of FMs to simulate environment dynamics so that agents can be trained and evaluated on simulated interactions; and (2) foundation agents (FAs), which exploit the reasoning capabilities of FMs for direct decision-making. Experiments on a family of text-based grid-world environments show that improvements in the underlying LLMs translate into better FWMs and FAs; that FAs built on current LLMs already provide excellent policies in sufficiently simple environments; and that coupling FWMs with RL agents is highly promising in more complex settings with partial observability and stochastic elements. The core contribution is a systematic empirical evaluation of LLMs as components for both world modeling and policy generation, offering an empirical reference point for LLM-RL integration.
📝 Abstract
While reinforcement learning from scratch has shown impressive results in solving sequential decision-making tasks with efficient simulators, real-world applications with expensive interactions require more sample-efficient agents. Foundation models (FMs) are natural candidates to improve sample efficiency as they possess broad knowledge and reasoning capabilities, but it is yet unclear how to effectively integrate them into the reinforcement learning framework. In this paper, we anticipate and, most importantly, evaluate two promising strategies. First, we consider the use of foundation world models (FWMs) that exploit the prior knowledge of FMs to enable training and evaluating agents with simulated interactions. Second, we consider the use of foundation agents (FAs) that exploit the reasoning capabilities of FMs for decision-making. We evaluate both approaches empirically in a family of grid-world environments that are suitable for the current generation of large language models (LLMs). Our results suggest that improvements in LLMs already translate into better FWMs and FAs; that FAs based on current LLMs can already provide excellent policies for sufficiently simple environments; and that the coupling of FWMs and reinforcement learning agents is highly promising for more complex settings with partial observability and stochastic elements.