InfoFlow: Reinforcing Search Agent Via Reward Density Optimization

📅 2025-10-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
In deep search tasks, sparse global rewards hinder reinforcement learning with verifiable rewards (RLVR), leading to high exploration costs and low learning efficiency. Method: This paper proposes a reward density optimization framework that decomposes sparse global rewards into dense subtask process rewards. It introduces failure-guided prompting and a dual-agent collaboration mechanism, in which a "Researcher" agent explores while a "Refiner" agent corrects errors, and incorporates search-history compression and failure-trajectory correction strategies. Contribution/Results: The framework is the first to systematically optimize reward density, i.e., the reward obtained per unit of exploration cost. It achieves significant performance gains over strong baselines across multiple agent-based search benchmarks. Notably, lightweight large language models (LLMs) equipped with this framework match the search performance of state-of-the-art proprietary models, demonstrating its scalability and efficiency.
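The core quantity in the summary, reward density, can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the function names (`reward_density`, `densify`) and the uniform credit split across subtasks are assumptions made here for clarity.

```python
# Hypothetical sketch: a sparse global reward is decomposed into dense
# per-subtask process rewards, and density is measured as reward obtained
# per unit of exploration cost.

def reward_density(rewards, exploration_cost):
    """Total reward obtained per unit of exploration cost."""
    return sum(rewards) / max(exploration_cost, 1)

def densify(global_reward, subtask_outcomes):
    """Split one sparse outcome reward into per-subtask process rewards.

    subtask_outcomes: list of booleans, True if that subtask was solved.
    Uses a uniform split purely for illustration.
    """
    per_subtask = global_reward / len(subtask_outcomes)
    return [per_subtask if solved else 0.0 for solved in subtask_outcomes]

# Sparse setting: a failed trajectory yields zero signal for 20 steps of search.
sparse = reward_density([0.0], exploration_cost=20)
# Dense setting: the same trajectory earns partial credit for 3 of 4 subtasks.
dense = reward_density(densify(1.0, [True, True, True, False]),
                       exploration_cost=20)
```

Under this toy accounting, the failed run contributes no learning signal at all, while subtask decomposition recovers a nonzero gradient-bearing reward from the same exploration budget.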

📝 Abstract
Reinforcement Learning with Verifiable Rewards (RLVR) is a promising approach for enhancing agentic deep search. However, its application is often hindered by low Reward Density in deep search scenarios, where agents expend significant exploratory costs for infrequent and often null final rewards. In this paper, we formalize this challenge as the Reward Density Optimization problem, which aims to improve the reward obtained per unit of exploration cost. This paper introduces InfoFlow, a systematic framework that tackles this problem from three aspects. 1) Subproblem decomposition: breaking down long-range tasks to assign process rewards, thereby providing denser learning signals. 2) Failure-guided hints: injecting corrective guidance into stalled trajectories to increase the probability of successful outcomes. 3) Dual-agent refinement: employing a dual-agent architecture to offload the cognitive burden of deep exploration. A refiner agent synthesizes the search history, which effectively compresses the researcher's perceived trajectory, thereby reducing exploration cost and increasing the overall reward density. We evaluate InfoFlow on multiple agentic search benchmarks, where it significantly outperforms strong baselines, enabling lightweight LLMs to achieve performance comparable to advanced proprietary LLMs.
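The dual-agent loop the abstract describes can be sketched as follows. All names here (`run_episode`, `hint_on_stall`, the toy agents) are illustrative stand-ins invented for this sketch, not the paper's API: the point is only that the Researcher acts on a compressed summary rather than the raw history, and that a corrective hint is injected when a trajectory stalls.

```python
# Hypothetical sketch of the Researcher/Refiner collaboration: the Refiner
# compresses the raw search history so the Researcher's perceived trajectory
# stays short, and a failure-guided hint is injected near the step budget.

def run_episode(researcher, refiner, task, max_steps=8, hint_on_stall=None):
    history = []   # full raw search trajectory
    summary = ""   # compressed view produced by the Refiner
    for step in range(max_steps):
        action, done = researcher(task, summary)
        history.append(action)
        if done:
            return history, True
        summary = refiner(history)  # compress the perceived trajectory
        if hint_on_stall and step >= max_steps - 2:
            # Failure-guided hint: corrective guidance for a stalled run.
            summary += "\nHINT: " + hint_on_stall(history)
    return history, False

# Toy agents, for illustration only.
def toy_researcher(task, summary):
    # Succeeds only once the summary contains a corrective hint.
    return "search:" + task, "HINT" in summary

def toy_refiner(history):
    return f"{len(history)} steps so far"
```

With the hint hook supplied, the toy episode recovers and succeeds within the step budget; without it, the same trajectory ends in failure, mirroring how hint injection raises the fraction of reward-bearing rollouts.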
Problem

Research questions and friction points this paper is trying to address.

Optimizing reward density in deep search reinforcement learning
Breaking down long-range tasks to provide process rewards
Using dual-agent architecture to reduce exploration costs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Decomposing long tasks for process rewards
Injecting corrective hints into stalled trajectories
Employing dual-agent architecture to compress history
👥 Authors
Kun Luo (Zhejiang University)
Hongjin Qian (Peking University)
Zheng Liu (Beijing Academy of Artificial Intelligence)
Ziyi Xia (University of British Columbia)
Shitao Xiao (BUPT)
Siqi Bao (Baidu)
Jun Zhao (Institute of Automation, Chinese Academy of Sciences)
Kang Liu (Institute of Automation, Chinese Academy of Sciences)