🤖 AI Summary
Training large language models (LLMs) as multi-turn reinforcement learning (RL) agents currently lacks a systematic framework: design choices remain fragmented, and cross-task analysis is scarce.
Method: We introduce the first comprehensive design space for multi-turn LLM-based RL agents, conducting systematic empirical studies centered on three pillars—environment, reward, and policy—across TextWorld, ALFWorld, and SWE-Gym. We benchmark PPO, GRPO, and RLOO under dense and sparse reward settings, analyzing training dynamics and generalization.
Contribution/Results: Key findings include: (i) simple environments effectively predict generalization to more complex tasks; (ii) there is an optimal balance between supervised fine-tuning (SFT) and RL under a fixed training budget; and (iii) environment complexity and reward density critically affect training stability and sample efficiency. Based on these insights, we distill a reusable training recipe that guides co-design across the three pillars and open-source our implementation to facilitate reproducibility and extension.
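To make the reward-density axis concrete, here is a minimal sketch, not the paper's implementation: the `Turn` structure, the shaping coefficient, and the function names are illustrative assumptions. It contrasts dense turn-level shaping with a sparse episode-level reward for one multi-turn trajectory.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Turn:
    """One environment step in a multi-turn episode (hypothetical structure)."""
    observation: str
    action: str
    subgoal_hit: bool  # e.g., a milestone the environment can verify


def assign_rewards(turns: List[Turn], task_success: bool, dense: bool) -> List[float]:
    """Attach a reward to every turn of a trajectory.

    dense=True  -> turn-level shaping: small credit whenever a verifiable
                   subgoal is hit, plus a terminal bonus on task success.
    dense=False -> sparse: a single terminal reward; all earlier turns get 0.
    """
    rewards = [0.0] * len(turns)
    if dense:
        for i, turn in enumerate(turns):
            if turn.subgoal_hit:
                rewards[i] += 0.1  # shaping coefficient chosen arbitrarily for illustration
    # Terminal reward in both settings
    rewards[-1] += 1.0 if task_success else 0.0
    return rewards


# Example: a 3-turn episode that succeeds, under both reward densities
episode = [
    Turn("kitchen", "open fridge", subgoal_hit=True),
    Turn("fridge open", "take apple", subgoal_hit=True),
    Turn("holding apple", "put apple on table", subgoal_hit=False),
]
print(assign_rewards(episode, task_success=True, dense=True))   # [0.1, 0.1, 1.0]
print(assign_rewards(episode, task_success=True, dense=False))  # [0.0, 0.0, 1.0]
```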
📝 Abstract
We study what actually works and what doesn't for training large language models as agents via multi-turn reinforcement learning. Despite rapid progress, existing frameworks and definitions are fragmented, and there is no systematic formulation or analysis of which design choices matter across tasks. We address this gap by breaking down the design space into three inter-related pillars -- environment, reward, and policy -- and empirically deriving a recipe for training LLM agents in situated textual domains. In particular, we evaluate on TextWorld and ALFWorld, popular domains for situated embodied reasoning, as well as SWE-Gym for software-engineering-style tasks. (i) For the environment, we analyze the impact of task complexity in terms of the sizes of the state and action spaces as well as optimal solution length, finding that even simple environments within a domain can provide signal on how well an agent generalizes to more complex tasks. (ii) For the reward, we ablate relative reward sparsity, observing that while dense turn-level rewards accelerate training, performance and stability depend heavily on the choice of RL algorithm. (iii) For the agent's policy, we explore the interplay between reward sparsity and biased (PPO, GRPO) versus unbiased (RLOO) policy gradient methods, and show how to find the optimal supervised fine-tuning (SFT) to RL training ratio given a fixed budget. We distill these findings into a training recipe that guides co-design across the three pillars, facilitating research and practical efforts in multi-turn agentic RL. Code: https://github.com/pearls-lab/meow-tea-taro
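As a companion to point (iii), the sketch below contrasts the two group-based advantage estimators named above: RLOO's leave-one-out baseline, which keeps the policy-gradient estimate unbiased, and GRPO's group-normalized advantage, whose division by the group standard deviation introduces bias. It is a simplified illustration assuming only a list of scalar episode returns sampled for the same task; the function names are ours, not the released code's.

```python
from typing import List


def rloo_advantages(returns: List[float]) -> List[float]:
    """RLOO: each sample's baseline is the mean return of the *other* samples
    in its group, which keeps the policy-gradient estimate unbiased."""
    k = len(returns)
    total = sum(returns)
    return [r - (total - r) / (k - 1) for r in returns]


def grpo_advantages(returns: List[float], eps: float = 1e-8) -> List[float]:
    """GRPO: normalize each return by the group mean and standard deviation;
    dividing by the std rescales the signal and is a (biased) design choice."""
    k = len(returns)
    mean = sum(returns) / k
    std = (sum((r - mean) ** 2 for r in returns) / k) ** 0.5
    return [(r - mean) / (std + eps) for r in returns]


# Example: four rollouts of the same task with episode returns 0 or 1
group = [1.0, 0.0, 0.0, 1.0]
print(rloo_advantages(group))  # [0.666..., -0.666..., -0.666..., 0.666...]
print(grpo_advantages(group))  # ~[1.0, -1.0, -1.0, 1.0]  (group std = 0.5)
```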