Training Multi-Turn Search Agent via Contrastive Dynamic Branch Sampling

📅 2026-02-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of credit assignment in long-horizon, multi-turn search agents, which is caused by sparse trajectory-level rewards. To this end, the authors propose Branching Relative Policy Optimization (BranPO), a method that truncates trajectory tails at shared prefixes and, by resampling contrasting suffixes, constructs step-level contrastive supervision signals without relying on value functions. BranPO further integrates difficulty-aware branch sampling with redundant-step masking to enable efficient and stable training without dense rewards. Experimental results demonstrate that BranPO significantly outperforms strong baselines across multiple question-answering benchmarks, achieving notably higher accuracy on long-horizon tasks at comparable overall training cost.

📝 Abstract
Agentic reinforcement learning has enabled large language models to perform complex multi-turn planning and tool use. However, learning in long-horizon settings remains challenging due to sparse, trajectory-level outcome rewards. While prior tree-based methods attempt to mitigate this issue, they often suffer from high variance and computational inefficiency. Through empirical analysis of search agents, we identify a common pattern: performance diverges mainly due to decisions near the tail. Motivated by this observation, we propose Branching Relative Policy Optimization (BranPO), a value-free method that provides step-level contrastive supervision without dense rewards. BranPO truncates trajectories near the tail and resamples alternative continuations to construct contrastive suffixes over shared prefixes, reducing credit ambiguity in long-horizon rollouts. To further boost efficiency and stabilize training, we introduce difficulty-aware branch sampling to adapt branching frequency across tasks, and redundant step masking to suppress uninformative actions. Extensive experiments on various question answering benchmarks demonstrate that BranPO consistently outperforms strong baselines, achieving significant accuracy gains on long-horizon tasks without increasing the overall training budget. Our code is available at https://github.com/YubaoZhao/BranPO.
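The core branching idea in the abstract — fix a shared prefix, resample several alternative suffixes, and score each suffix against the group's mean terminal reward instead of a learned value function — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the names `branch_advantages`, `toy_suffix`, and `toy_reward` are hypothetical, and the toy parity reward merely stands in for a sparse trajectory-level outcome reward.

```python
import random


def branch_advantages(prefix, sample_suffix, reward_fn, num_branches=4):
    """Value-free contrastive scoring at a branch point.

    Resamples `num_branches` alternative suffixes from a shared prefix,
    scores each full trajectory with the sparse terminal reward, and
    assigns each suffix a group-relative advantage (reward minus the
    group mean). Suffixes that beat their siblings get positive credit.
    """
    branches = [sample_suffix(prefix) for _ in range(num_branches)]
    rewards = [reward_fn(prefix + suffix) for suffix in branches]
    baseline = sum(rewards) / len(rewards)  # group mean replaces a critic
    return [(s, r - baseline) for s, r in zip(branches, rewards)]


# Toy illustration: "actions" are ints, and the only reward signal is
# trajectory-level (1.0 if the episode's action sum is even, else 0.0).
random.seed(0)

def toy_suffix(prefix):
    return [random.randint(0, 9) for _ in range(3)]

def toy_reward(trajectory):
    return 1.0 if sum(trajectory) % 2 == 0 else 0.0

scored = branch_advantages([2, 5], toy_suffix, toy_reward)
for suffix, adv in scored:
    print(suffix, round(adv, 3))
```

By construction the advantages within a branch group sum to zero, so the signal is purely contrastive: it ranks sibling suffixes over the same prefix rather than estimating absolute returns, which is what lets the method localize credit near the tail without dense rewards.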
Problem

Research questions and friction points this paper is trying to address.

long-horizon reinforcement learning
sparse rewards
multi-turn search agent
credit assignment
trajectory-level rewards
Innovation

Methods, ideas, or system contributions that make the work stand out.

contrastive learning
multi-turn search agent
branch sampling
credit assignment
reinforcement learning
Yubao Zhao
The Hong Kong University of Science and Technology (Guangzhou)
Weiquan Huang
The Hong Kong University of Science and Technology (Guangzhou)
Sudong Wang
The Hong Kong University of Science and Technology (Guangzhou)
Ruochen Zhao
Ph.D. in Artificial Intelligence, Nanyang Technological University. Currently at Apple
LLM Agents, Trustworthy LLMs, LLM Evaluation
Chen Chen
Nanyang Technological University
Knowledge graph, natural language processing
Yao Shu
The Hong Kong University of Science and Technology (Guangzhou)
Chengwei Qin
HKUST(GZ), NTU
LLM, NLP