🤖 AI Summary
This study addresses the limited cross-domain generalization of large language model (LLM) agents, which stems from narrow reinforcement learning training environments. Through systematic analysis, the authors identify state information richness and planning complexity as the critical factors governing generalization performance—more influential than domain realism or textual similarity. To enhance robustness, they propose enriching state representations with low-cost, task-irrelevant distractor features via a randomization strategy. Experiments across multiple simulated environments—including Sokoban, SciWorld, and ALFWorld—demonstrate that increasing state information richness alone substantially improves cross-domain generalization. Furthermore, step-by-step reasoning proves essential for sustaining this capability, while supervised fine-tuning (SFT) exhibits a dual effect, sometimes aiding and sometimes hindering generalization depending on context.
📝 Abstract
Generalist LLM agents are often post-trained on a narrow set of environments but deployed across far broader, unseen domains. In this work, we investigate the challenge of agentic post-training when the eventual test domains are unknown. Specifically, we analyze which properties of reinforcement learning (RL) environments and modeling choices have the greatest influence on out-of-domain performance. First, we identify two environment axes that strongly correlate with cross-domain generalization: (i) state information richness, i.e., the amount of information the agent must process from the state, and (ii) planning complexity, estimated via goal reachability and trajectory length under a base policy. Notably, domain realism and text-level similarity are not the primary factors; for instance, the simple grid-world domain Sokoban yields even stronger generalization to SciWorld than the more realistic ALFWorld. Motivated by these findings, we further show that increasing state information richness alone can effectively improve cross-domain robustness. We propose a randomization technique that is low-overhead and broadly applicable: add small amounts of distracting, goal-irrelevant features to the state to make it richer without altering the task. Beyond environment-side properties, we also examine several modeling choices: (a) SFT warmup or mid-training helps prevent catastrophic forgetting during RL but undermines generalization to domains not included in the mid-training data mix; and (b) turning on step-by-step thinking during RL, while not always improving in-domain performance, plays a crucial role in preserving generalization.
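To make the randomization idea concrete, here is a minimal sketch of how such state enrichment might look for a text-based environment. This is an illustrative assumption, not the authors' implementation: the function name `randomize_state` and the distractor sentences are hypothetical, and the sketch simply appends a few goal-irrelevant sentences to each observation so the state carries more information while the task itself is unchanged.

```python
import random

# Hypothetical pool of goal-irrelevant "distractor" facts. In practice these
# would be generated to match the environment's observation style.
DISTRACTORS = [
    "A clock on the wall reads an unremarkable time.",
    "There is a faint humming sound nearby.",
    "A red marker lies unused on the floor.",
    "The temperature in the room is mild.",
]

def randomize_state(observation, k=2, seed=None):
    """Append k randomly chosen distractor sentences to a text observation.

    The goal-relevant content is preserved verbatim; only task-irrelevant
    information is added, enriching the state without altering the task.
    """
    rng = random.Random(seed)
    extras = rng.sample(DISTRACTORS, k)
    return observation + " " + " ".join(extras)

obs = "You are in a kitchen. Your goal is to put the apple in the fridge."
print(randomize_state(obs, k=2, seed=0))
```

Because the distractors never mention the goal or the objects it involves, the optimal policy is unchanged; the agent simply has to learn to ignore irrelevant state features, which is the property the paper links to cross-domain robustness.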