🤖 AI Summary
This work addresses “template collapse” in multi-turn large language model agents trained with reinforcement learning (agents generating overly similar responses across diverse inputs), a failure mode poorly captured by conventional entropy-based metrics. The study formally defines this phenomenon and proposes decomposing reasoning quality into intra-input diversity (measured by entropy) and inter-input distinguishability (quantified via mutual information). Introducing mutual information as a more reliable evaluation metric, the authors develop a signal-to-noise ratio (SNR) interpretability framework and design a lightweight SNR-aware filtering strategy that selects high-signal prompts to enhance input dependence. Experiments across planning, mathematical reasoning, web navigation, and code execution tasks demonstrate significant performance gains, with mutual information exhibiting substantially stronger correlation with task performance than traditional entropy measures.
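The entropy/MI decomposition can be illustrated with a simple plug-in estimator over discretized responses (e.g. template or cluster labels). This is a minimal sketch, not the paper's estimator: the function names and the assumption that responses are already discretized into labels are illustrative.

```python
import math
from collections import Counter

def entropy(counts):
    """Shannon entropy (bits) of an empirical distribution given as a Counter."""
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def mi_proxy(pairs):
    """Plug-in estimate of I(input; response) = H(response) - H(response | input).

    `pairs` is a list of (input_id, response_id) tuples, where response_id is
    any discretization of the model's reasoning (e.g. a template label).
    High entropy with near-zero MI is the "template collapse" signature:
    responses look diverse but do not depend on the input.
    """
    # Marginal entropy of responses, pooled over all inputs.
    h_response = entropy(Counter(r for _, r in pairs))

    # Conditional entropy: input-weighted average of per-input response entropy.
    by_input = {}
    for x, r in pairs:
        by_input.setdefault(x, Counter())[r] += 1
    n = len(pairs)
    h_cond = sum((sum(c.values()) / n) * entropy(c) for c in by_input.values())
    return h_response - h_cond
```

For example, if every input draws uniformly from the same two templates, marginal entropy is 1 bit but MI is 0 (input-agnostic); if each input deterministically maps to its own template, MI equals the full 1 bit.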
📝 Abstract
RL training of multi-turn LLM agents is inherently unstable, and reasoning quality directly determines task performance. Entropy is widely used to track reasoning stability. However, entropy measures only diversity within the same input and cannot tell whether reasoning actually responds to different inputs. In RAGEN-2, we find that even with stable entropy, models can rely on fixed templates that look diverse but are input-agnostic. We call this template collapse, a failure mode invisible to entropy and all existing metrics. To diagnose this failure, we decompose reasoning quality into within-input diversity (Entropy) and cross-input distinguishability (Mutual Information, MI), and introduce a family of mutual information proxies for online diagnosis. Across diverse tasks, mutual information correlates with final performance much more strongly than entropy, making it a more reliable proxy for reasoning quality. We further explain template collapse with a signal-to-noise ratio (SNR) mechanism: low reward variance weakens task gradients, letting regularization terms dominate and erase cross-input reasoning differences. To address this, we propose SNR-Aware Filtering, which selects high-signal prompts each iteration using reward variance as a lightweight proxy. Across planning, math reasoning, web navigation, and code execution, the method consistently improves both input dependence and task performance.
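The filtering idea can be sketched as ranking prompts by the empirical variance of their sampled rollout rewards and keeping only the top fraction each iteration. This is a hypothetical sketch, not the paper's implementation: the function name, the keep-fraction heuristic, and the data layout are all assumptions.

```python
def snr_filter(rollout_rewards, keep_frac=0.5):
    """Keep the prompts whose sampled rollout rewards have the highest variance.

    `rollout_rewards` maps prompt_id -> list of rewards from k sampled rollouts.
    Reward variance serves as a cheap proxy for gradient signal: near-zero
    variance yields near-zero advantages, so updates for that prompt are
    dominated by regularization terms rather than the task gradient.
    """
    def var(xs):
        # Population variance of one prompt's rollout rewards.
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    # Rank prompts by reward variance, highest (most informative) first.
    ranked = sorted(rollout_rewards, key=lambda p: var(rollout_rewards[p]), reverse=True)
    k = max(1, int(len(ranked) * keep_frac))
    return set(ranked[:k])
```

Prompts the policy always solves (or always fails) carry no reward signal and are dropped; the batch concentrates on prompts where outcomes still vary, which is where the task gradient is strongest.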