Information Gain-based Policy Optimization: A Simple and Effective Approach for Multi-Turn LLM Agents

📅 2025-10-16
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address advantage collapse and the failure of fine-grained credit assignment in large language model (LLM)-based agents under sparse-reward, multi-turn interaction settings, this paper proposes the Information Gain-based Policy Optimization (IGPO) framework. Methodologically, IGPO leverages the LLM's intrinsic probabilistic belief updates to quantify the marginal information gain of each interaction turn, yielding a dense, differentiable, and temporally aligned intrinsic reward without requiring external reward models or additional annotations. This intrinsic reward is jointly optimized with the terminal outcome reward to guide policy learning at the episode level, explicitly modeling the dynamics of information acquisition over time. Empirical results demonstrate that IGPO significantly outperforms strong baselines on multi-step reasoning and cross-domain tasks, with substantial improvements in both task accuracy and sample efficiency.

📝 Abstract
Large language model (LLM)-based agents are increasingly trained with reinforcement learning (RL) to enhance their ability to interact with external environments through tool use, particularly in search-based settings that require multi-turn reasoning and knowledge acquisition. However, existing approaches typically rely on outcome-based rewards that are only provided at the final answer. This reward sparsity becomes particularly problematic in multi-turn settings, where long trajectories exacerbate two critical issues: (i) advantage collapse, where all rollouts receive identical rewards and provide no useful learning signals, and (ii) lack of fine-grained credit assignment, where dependencies between turns are obscured, especially in long-horizon tasks. In this paper, we propose Information Gain-based Policy Optimization (IGPO), a simple yet effective RL framework that provides dense and intrinsic supervision for multi-turn agent training. IGPO models each interaction turn as an incremental process of acquiring information about the ground truth, and defines turn-level rewards as the marginal increase in the policy's probability of producing the correct answer. Unlike prior process-level reward approaches that depend on external reward models or costly Monte Carlo estimation, IGPO derives intrinsic rewards directly from the model's own belief updates. These intrinsic turn-level rewards are combined with outcome-level supervision to form dense reward trajectories. Extensive experiments on both in-domain and out-of-domain benchmarks demonstrate that IGPO consistently outperforms strong baselines in multi-turn scenarios, achieving higher accuracy and improved sample efficiency.
Problem

Research questions and friction points this paper is trying to address.

Addresses reward sparsity in multi-turn LLM agent training
Solves advantage collapse and credit assignment in long trajectories
Provides dense intrinsic supervision for multi-turn reasoning tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Intrinsic rewards derived from model belief updates
Turn-level supervision based on information gain
Dense reward trajectories combining intrinsic and outcome signals
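The reward construction described above can be illustrated with a toy sketch (not the authors' code; the function name, the probability values, and the `outcome_weight` parameter are illustrative assumptions). Each turn's intrinsic reward is the marginal increase in the policy's probability of producing the ground-truth answer, and the final turn additionally receives the outcome reward:

```python
def igpo_rewards(answer_probs, outcome_reward, outcome_weight=1.0):
    """Toy IGPO-style dense rewards.

    answer_probs: the policy's probability of the correct answer measured
    after each turn, with index 0 being the belief before any interaction.
    Returns one reward per interaction turn.
    """
    rewards = []
    for t in range(1, len(answer_probs)):
        # Intrinsic reward: marginal information gain at turn t.
        info_gain = answer_probs[t] - answer_probs[t - 1]
        rewards.append(info_gain)
    if rewards:
        # Combine with the terminal outcome reward on the last turn.
        rewards[-1] += outcome_weight * outcome_reward
    return rewards

# Example: belief in the correct answer rises over three search turns,
# so every turn yields a positive learning signal even before the outcome.
dense = igpo_rewards([0.10, 0.25, 0.40, 0.80], outcome_reward=1.0)
```

In an actual RL setup the probabilities would come from the policy model's own likelihood of the ground-truth answer conditioned on the trajectory so far, which is what lets the reward stay intrinsic rather than relying on an external reward model or Monte Carlo rollouts.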
Guoqing Wang
Ant Group
Sunhao Dai
Renmin University of China
Recommender Systems · Information Retrieval · Trustworthy · Large Language Models
Guangze Ye
Individual Author
Zeyu Gan
Renmin University of China
Wei Yao
Renmin University of China
Yong Deng
Ant Group
Xiaofeng Wu
Ant Group
Zhenzhe Ying
Ant Group