🤖 AI Summary
This work addresses the challenge of credit assignment in long-horizon, multi-turn search agents, where rewards are sparse and assigned only at the trajectory level. To this end, the authors propose Branching Relative Policy Optimization (BranPO), a value-free method that truncates trajectories near the tail and resamples alternative continuations, yielding contrastive suffixes over shared prefixes that serve as step-level supervision signals without a value function. BranPO further integrates difficulty-aware branch sampling with redundant-step masking to enable efficient, stable training without dense rewards. Experimental results demonstrate that BranPO significantly outperforms strong baselines across multiple question-answering benchmarks, achieving notably higher accuracy on long-horizon tasks while maintaining comparable overall training cost.
📝 Abstract
Agentic reinforcement learning has enabled large language models to perform complex multi-turn planning and tool use. However, learning in long-horizon settings remains challenging due to sparse, trajectory-level outcome rewards. While prior tree-based methods attempt to mitigate this issue, they often suffer from high variance and computational inefficiency. Through empirical analysis of search agents, we identify a common pattern: performance diverges mainly due to decisions near the tail. Motivated by this observation, we propose Branching Relative Policy Optimization (BranPO), a value-free method that provides step-level contrastive supervision without dense rewards. BranPO truncates trajectories near the tail and resamples alternative continuations to construct contrastive suffixes over shared prefixes, reducing credit ambiguity in long-horizon rollouts. To further boost efficiency and stabilize training, we introduce difficulty-aware branch sampling to adapt branching frequency across tasks, and redundant step masking to suppress uninformative actions. Extensive experiments on various question answering benchmarks demonstrate that BranPO consistently outperforms strong baselines, achieving significant accuracy gains on long-horizon tasks without increasing the overall training budget. Our code is available at https://github.com/YubaoZhao/BranPO.
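The branching idea described above can be illustrated with a minimal sketch. This is not the paper's implementation: the function names, the `tail_fraction` knob, and the mean-baseline advantage are illustrative assumptions; it only shows how truncating near the tail and resampling suffixes over a shared prefix yields a value-free, step-level contrastive signal.

```python
import random

def pick_branch_point(traj_len, tail_fraction=0.3):
    """Pick a truncation step near the trajectory tail.
    tail_fraction is a hypothetical knob, not from the paper."""
    start = max(1, int(traj_len * (1 - tail_fraction)))
    return random.randint(start, traj_len - 1)

def contrastive_advantage(orig_reward, branch_rewards):
    """Step-level signal for the original suffix: its outcome reward
    minus the mean reward of resampled suffixes that share the same
    prefix. No value function is needed (a group-relative baseline)."""
    baseline = sum(branch_rewards) / len(branch_rewards)
    return orig_reward - baseline

# Toy usage: a 10-step trajectory, branch near the tail, and compare
# the original outcome (1.0) against three resampled suffix outcomes.
t = pick_branch_point(10)
adv = contrastive_advantage(1.0, [0.0, 1.0, 0.0])
```

Here a positive advantage indicates the original tail decisions outperformed typical alternatives from the same prefix, so those steps are reinforced; a negative value penalizes them.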